The EU AI Act emerges as a significant piece of legislation, aiming to regulate rapidly evolving AI technologies within the EU. The Act sets forth a legal framework, outlining objectives, principles, and requirements designed to ensure the ethical and safe use of AI. On the other side of this regulatory landscape are the AI ISO standards, which delve into the technical aspects of AI trustworthiness. These standards provide guidelines and requirements focused on the operational and technical aspects of AI systems, aiming to foster trust and reliability. However, there is a noticeable gap between the comprehensive legislative ambitions of the EU AI Act and the technical coverage offered by these ISO standards. While thorough in their own right, the ISO standards do not fully align with the specific objectives, principles, and requirements set out by the EU AI Act, indicating a need for more comprehensive and actionable standards that bridge this divide.
Nevertheless, it is important to recognize the AI ISO standards as a valuable starting point for AI providers. They offer a solid foundation for understanding and implementing trustworthy AI practices. By adhering to these ISO standards, AI providers can equip themselves with a framework of trustworthiness that addresses key aspects of AI system management, such as risk assessment, data quality, and system robustness. As AI regulation evolves, these ISO standards can serve as a stepping stone, guiding AI providers towards compliance with the more stringent and specific requirements of the EU AI Act.
Where Do Standards Meet The EU AI Act?
In what follows, we detail where the AI ISO standards intersect with the EU AI Act and their limitations concerning the main objectives of the upcoming regulations.
AI Risk Management
ISO/IEC 23894, a standard focused on AI risk management, serves as a starting point for AI providers in understanding their risks and provides them with guidelines on risk management. However, this standard primarily addresses organizational risks and overlooks specific risks to fundamental rights, health, and safety, which are core to the AI Act. In addition, ISO/IEC 23894 offers guidance rather than the specific, actionable requirements needed for effective AI risk management. On the other hand, CLAIRM, proposed by CEN-CENELEC JTC 21, aims to offer detailed technical requirements for managing AI risks, focusing on practical and concrete risk management requirements.
Data Quality and Governance
The ISO/IEC 5259 series covers several aspects of data quality and governance that are essential for every AI provider aligning with the upcoming regulations. However, to fully align with the AI Act, additional implementation requirements are necessary, including a more focused approach to data quality and a reduction of implementation overhead through closer alignment with the AI Act's legal requirements. For instance, while Part 2 of the ISO/IEC 5259 series catalogs numerous data quality attributes relevant to the AI Act, it defines data quality broadly, focuses on organizational requirements, and only superficially covers the data quality attributes crucial for complying with the AI Act.
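To make the idea of data quality attributes concrete, attributes of the kind cataloged in the ISO/IEC 5259 series, such as completeness and consistency, can be operationalized as automated checks in a provider's data pipeline. The sketch below is a hypothetical illustration of our own; the field names, attribute set, and thresholds are assumptions, not requirements taken from the standard or the AI Act.

```python
# Hypothetical sketch: automated data quality checks in the spirit of the
# attribute catalog in ISO/IEC 5259 Part 2. Field names, attributes, and
# thresholds are illustrative assumptions, not normative requirements.

def check_completeness(records, required_fields):
    """Fraction of records in which every required field is present and non-empty."""
    if not records:
        return 0.0
    complete = sum(
        all(r.get(f) not in (None, "") for f in required_fields) for r in records
    )
    return complete / len(records)

def check_consistency(records, field, allowed_values):
    """Fraction of records whose field holds one of the allowed values."""
    if not records:
        return 0.0
    ok = sum(r.get(field) in allowed_values for r in records)
    return ok / len(records)

def quality_report(records):
    """Score a few illustrative attributes against example thresholds."""
    scores = {
        "completeness": check_completeness(records, ["age", "label"]),
        "consistency": check_consistency(records, "label", {"approve", "reject"}),
    }
    thresholds = {"completeness": 0.95, "consistency": 0.99}  # example values
    return {k: (v, v >= thresholds[k]) for k, v in scores.items()}

sample = [
    {"age": 34, "label": "approve"},
    {"age": None, "label": "reject"},   # incomplete: missing age
    {"age": 51, "label": "maybe"},      # inconsistent: label not allowed
]
report = quality_report(sample)
```

A provider could run checks of this kind as a gate in the training-data pipeline, with the documented thresholds serving as evidence of a defined data quality process.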
ISO/IEC 42001 currently addresses logging and record-keeping in AI. Although its coverage is limited, this standard clarifies the essential activities needed to comply with the record-keeping and traceability requirements outlined by the Act.
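As an illustration of what such logging activities might look like in practice, the sketch below records timestamped, append-only events over an AI system's lifecycle so that its operation can be reconstructed later. This is a minimal example of our own; the event types and field names are assumptions, not a schema defined by ISO/IEC 42001 or mandated by the Act.

```python
import json
import time

class AuditLog:
    """Minimal append-only event log for AI system traceability.

    Illustrative only: event types and field names are assumptions,
    not a schema from ISO/IEC 42001 or the AI Act.
    """

    def __init__(self):
        self._events = []

    def record(self, event_type, details):
        # Each entry carries a timestamp so the system's operation
        # can be reconstructed after the fact.
        self._events.append({
            "timestamp": time.time(),
            "event_type": event_type,
            "details": details,
        })

    def export(self):
        # JSON Lines keeps each entry independently readable for retention.
        return "\n".join(json.dumps(e) for e in self._events)

log = AuditLog()
log.record("model_deployed", {"model_version": "1.2.0"})
log.record("prediction", {"input_id": "req-001", "output": "approve"})
exported = log.export()
```

A real deployment would add retention periods and tamper-evidence on top of this, but the core activity, capturing who-did-what-when in a durable record, is the same.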
Transparency in Standards
Transparency is a key aspect of the EU AI Act. Current standards such as ISO/IEC 42001, ISO/IEC 12792 (Transparency taxonomy of AI systems), and ISO/IEC CD TS 6254 (Objectives and approaches for explainability of ML models and AI systems) touch on transparency. However, apart from the published ISO/IEC 42001, which lists a few requirements applicable to the transparency obligations in the Act, these standards are still in their early stages.
Human Oversight in AI
Human oversight is a key component of the EU AI Act and is anticipated to be extensively covered in future standardization work, addressing organizational and technical measures, training measures, and the design of human-machine interfaces to facilitate oversight. Currently, ISO standards such as ISO/IEC CD TS 8200 (Controllability of automated artificial intelligence systems) are helpful in better understanding different aspects of human oversight but are not as comprehensive as required by the Act.
Robustness in International Standards
The ISO/IEC 24029 series offers partial coverage of the robustness of AI systems, mainly suggesting best practices, including conventional measures and metrics for assessing robustness. Although this series covers the fundamentals of accuracy and robustness, it lacks detailed requirements for AI providers regarding the selection and justification of accuracy and robustness criteria, metrics, and thresholds.
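By way of illustration, one conventional robustness check of the kind such assessments rely on is comparing a model's accuracy on clean inputs with its accuracy under small input perturbations. The sketch below is our own assumption of how such a metric might be computed; the toy model, noise bound, and any acceptance threshold a provider would attach are illustrative, not values prescribed by the ISO/IEC 24029 series.

```python
import random

def accuracy(model, inputs, labels):
    """Fraction of inputs the model classifies correctly."""
    correct = sum(model(x) == y for x, y in zip(inputs, labels))
    return correct / len(inputs)

def perturbed_accuracy(model, inputs, labels, noise=0.1, seed=0):
    """Accuracy after adding bounded uniform noise to each input feature."""
    rng = random.Random(seed)
    noisy = [[v + rng.uniform(-noise, noise) for v in x] for x in inputs]
    return accuracy(model, noisy, labels)

# Toy threshold classifier standing in for a real model (assumption).
def toy_model(x):
    return 1 if sum(x) > 1.0 else 0

inputs = [[0.9, 0.9], [0.1, 0.2], [0.7, 0.6], [0.2, 0.1]]
labels = [1, 0, 1, 0]

clean = accuracy(toy_model, inputs, labels)
robust = perturbed_accuracy(toy_model, inputs, labels, noise=0.05)
degradation = clean - robust
```

The open question the standard leaves to the provider is precisely the one this sketch cannot answer: which noise model, which metric, and which degradation threshold are appropriate for the system's intended use, and how to justify those choices.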
CertX Bridges The Gap
As a reputable certification body and an active Swiss delegate in ISO/IEC JTC 1/SC 42 on Artificial Intelligence, CertX possesses a deep understanding of the AI ISO standards. This expertise uniquely positions CertX to assist AI providers in navigating the intricate path towards compliance with the EU AI Act. Through our knowledge and experience, we offer tailored guidance and support to AI providers, ensuring they not only understand the nuances of current ISO standards but also are well-prepared for the upcoming regulations set by the EU AI Act. By collaborating with CertX, AI providers can benefit from insightful advice and practical solutions that align with both international standards and European legal requirements, facilitating a smooth and efficient transition to a compliant and trustworthy AI future.