The EU AI Act is designed to promote the development of human-centric and trustworthy AI while safeguarding public interests such as health, safety, fundamental rights, and the environment. A critical aspect of the Act is the classification and regulation of “high-risk” AI systems. In this blog post, we will delve into the technical intricacies of high-risk AI systems, the mandatory safety and compliance requirements, and the potential penalties for non-compliance.


Created by Midjourney

Definition of High-Risk AI Systems


High-risk AI systems are those that, either because of their intended purpose or the manner in which they are used, pose a significant risk of causing harm. Specifically, AI systems that serve as safety components of products, or that are themselves products covered by the Union harmonization legislation listed in Annex II of the Act, are classified as high-risk. Additionally, AI systems involved in the management and operation of critical infrastructure (e.g., water, gas, electricity) are also considered high-risk, as their failure or malfunction could lead to significant disruptions in social and economic activities.

It is important to note that the Regulation explicitly states that components intended solely for cybersecurity purposes do not qualify as safety components. This distinction is crucial for organizations to understand when assessing the risk profile of their AI systems. A clear example of a high-risk AI system is an AI-based diagnostic tool used in healthcare: such a tool falls under the scope of Regulation (EU) 2017/745 (the Medical Devices Regulation) and must undergo a third-party conformity assessment of its health and safety risks before being placed on the market.


Technical Robustness and Safety Requirements


The EU AI Act mandates that high-risk AI systems exhibit technical robustness and safety throughout their entire lifecycle. Technical robustness requires that the AI system be resilient to both system limitations (e.g., errors, faults, inconsistencies) and malicious actions that could compromise its security and result in harmful or otherwise undesirable behavior. Providers must implement measures to minimize unintended and unexpected harm, ensure the system remains robust when unintended problems occur, and make it resilient to attempts by malicious third parties to alter its use or performance.


As previously mentioned, an AI-based diagnostic tool in healthcare is a prime example of a high-risk AI system. This tool could be an algorithm that analyzes medical images, such as X-rays or MRI scans, to detect abnormalities or diseases. Since incorrect diagnoses can lead to severe health consequences, it is crucial that these AI systems are rigorously tested and validated for accuracy and reliability. Additionally, the AI system must be developed with high-quality data that is representative of the population it will serve, and any biases that could lead to inaccurate diagnoses or discriminatory outcomes must be diligently identified and mitigated.
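
To make this concrete, here is a minimal sketch, in Python, of how a per-subgroup performance check might look before deployment. The metric choice, the 0/1 label encoding, and the subgroup identifiers are illustrative assumptions, not anything prescribed by the Act.

```python
from collections import defaultdict

def subgroup_performance(y_true, y_pred, groups):
    """Compute sensitivity and specificity per demographic subgroup.

    y_true, y_pred: iterables of 0/1 labels (1 = abnormality present)
    groups:         iterable of subgroup identifiers (e.g. age band, sex)
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1 and pred == 1:
            counts[group]["tp"] += 1
        elif truth == 0 and pred == 1:
            counts[group]["fp"] += 1
        elif truth == 0 and pred == 0:
            counts[group]["tn"] += 1
        else:
            counts[group]["fn"] += 1

    report = {}
    for group, c in counts.items():
        positives = c["tp"] + c["fn"]
        negatives = c["tn"] + c["fp"]
        report[group] = {
            "sensitivity": c["tp"] / positives if positives else None,
            "specificity": c["tn"] / negatives if negatives else None,
            "n": positives + negatives,
        }
    return report
```

A noticeable gap in sensitivity for any subgroup relative to the overall figure would then be investigated, mitigated, and recorded as part of the system’s documentation.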


In the machinery sector, a high-risk AI system could be an AI-powered robot used in a manufacturing plant, for example an autonomous robotic arm that assembles heavy machinery parts or an AI system that controls critical parameters of a manufacturing process. The malfunctioning of these systems could lead to significant production losses, costly damage, or even pose a risk to the safety of human workers. Therefore, it is essential to ensure that these AI systems are designed with robust safety features, such as fail-safes or emergency stop functions, and are thoroughly tested and validated under various operating conditions. Additionally, the AI system must be resilient against cyber-attacks or malicious attempts to alter its performance, which could lead to catastrophic outcomes.
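
As an illustration, the sketch below shows a simple software-level safety interlock around an AI-driven controller. The sensor fields, limits, and the injected functions (read_sensors, plan_next_command, apply_command, emergency_stop) are hypothetical placeholders; a real machine would implement such interlocks in certified safety hardware rather than application code.

```python
import time

# Illustrative safety limits; real values would come from the machine's risk assessment.
MAX_JOINT_TEMP_C = 80.0
MAX_TORQUE_NM = 150.0
SENSOR_TIMEOUT_S = 0.1

def safe_to_continue(reading, last_update):
    """Return False if a monitored parameter leaves its safe envelope
    or the sensor data is stale (possible fault or tampering)."""
    if time.monotonic() - last_update > SENSOR_TIMEOUT_S:
        return False
    if reading["joint_temp_c"] > MAX_JOINT_TEMP_C:
        return False
    if abs(reading["torque_nm"]) > MAX_TORQUE_NM:
        return False
    return True

def control_loop(read_sensors, plan_next_command, apply_command, emergency_stop):
    """Watchdog-style loop: the AI planner only acts while the independent
    safety check passes; otherwise the arm is brought to a safe state."""
    while True:
        reading, last_update = read_sensors()
        if not safe_to_continue(reading, last_update):
            emergency_stop()
            break
        apply_command(plan_next_command(reading))
        time.sleep(0.01)  # roughly 100 Hz supervision cycle
```

The design point is that the safety check sits outside the AI planner and always has the final say over whether the arm keeps moving.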

Autonomous vehicles are another classic example of high-risk AI systems. These vehicles rely on a multitude of AI systems for object detection, path planning, and decision-making. The failure or malfunctioning of any of these systems could lead to accidents, endangering the lives of passengers and other road users. Therefore, these AI systems should be developed to the highest safety standards and rigorously tested and validated under a wide range of real-world scenarios. Additionally, measures must be implemented to ensure their resilience against cyber-attacks or malicious attempts to alter their performance. For example, the AI system should be able to detect and respond to attempts to tamper with its sensors or manipulate its decision-making process.
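
One way to approach such tamper detection is a cross-sensor plausibility check, sketched below under the assumption that camera, lidar, and radar detections are available as separate lists. The disagreement threshold, the detection format, and the degrade_to_safe_mode callback are illustrative assumptions.

```python
def plausible(camera_count, lidar_count, radar_count, max_disagreement=3):
    """Cross-check object counts from independent sensors; a large
    disagreement may indicate a faulty or manipulated sensor."""
    counts = [camera_count, lidar_count, radar_count]
    return max(counts) - min(counts) <= max_disagreement

def fuse_detections(camera, lidar, radar, degrade_to_safe_mode):
    """Discard the current cycle and fall back to a conservative driving
    mode when the sensor streams disagree beyond the allowed margin."""
    if not plausible(len(camera), len(lidar), len(radar)):
        degrade_to_safe_mode(reason="sensor disagreement")
        return []
    # Naive fusion for illustration: union of detections keyed by object id.
    fused = {d["id"]: d for d in camera + lidar + radar}
    return list(fused.values())
```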


In all of these examples, it is essential to consider the entire lifecycle of the AI system, from its development and deployment to its operation and eventual decommissioning. Organizations must implement robust processes for monitoring the performance of these AI systems and for updating them as necessary to address emerging risks or changes in the operating environment. Moreover, it is crucial to maintain comprehensive documentation of the AI system’s development, testing, and risk assessment processes to demonstrate compliance with the EU AI Act and other relevant regulations.
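
A lightweight example of such post-deployment monitoring is a rolling performance check that compares production outcomes against the accuracy established during validation. The baseline, tolerance, and window size below are illustrative assumptions, not values prescribed anywhere.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling accuracy check against the level established during validation.

    The baseline, tolerance, and window size are illustrative; in practice they
    would be taken from the validation results in the technical documentation.
    """

    def __init__(self, baseline_accuracy=0.95, tolerance=0.03, window=500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        """Record one ground-truthed outcome; return True if an alert is due."""
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```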


Human Oversight


Human oversight in the context of high-risk AI systems refers to the measures that ensure human intervention remains possible and meaningful throughout the lifecycle of the AI system. Human experts should be involved in the development process to ensure that the system is designed and trained in a manner that aligns with ethical principles, legal requirements, and societal values. This includes the selection and processing of training data, the design of the model’s architecture, and the setting of its parameters. Even after development, human experts should actively monitor and evaluate the AI system’s performance to ensure that it behaves as intended in real-world scenarios, for example through validation datasets, simulation environments, or controlled trials.

In addition, human operators should always have the ability to intervene in the operation of the AI system. This may involve a “stop” button or a similar procedure that allows the system to be halted in a safe state if necessary, as well as monitoring tools that provide real-time feedback on the system’s performance and alert operators to anomalies or potential risks. Finally, there should be a feedback loop through which operators can help improve the AI system’s performance over time, for instance by adjusting parameters, re-training the model with new data, or tweaking decision-making thresholds.
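
In application code, one simple way to wire up this kind of oversight is a confidence-based escalation gate combined with an operator-controlled stop signal. The sketch below assumes a hypothetical model_predict interface returning a label and a confidence score, and a request_human_review handler; the threshold is illustrative and would be chosen during validation.

```python
import threading

STOP_REQUESTED = threading.Event()   # set by a "stop" control exposed to operators
CONFIDENCE_THRESHOLD = 0.85          # illustrative; chosen during validation

def decide(model_predict, request_human_review, case):
    """Route low-confidence cases to a human reviewer and honour the stop signal."""
    if STOP_REQUESTED.is_set():
        raise RuntimeError("System halted by operator")

    label, confidence = model_predict(case)
    if confidence < CONFIDENCE_THRESHOLD:
        # Meaningful oversight: a human makes the final call, and the outcome
        # can be logged as feedback for later re-training.
        return request_human_review(case, suggested=label, confidence=confidence)
    return label
```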


Security by Design


Security by Design is a principle that calls for security measures to be incorporated throughout the entire lifecycle of an AI system, rather than treated as an afterthought. During development, it is important to ensure that the AI system is resilient against both internal and external threats. This could involve secure coding practices, thorough testing of the system for vulnerabilities (e.g., susceptibility to adversarial attacks), and the incorporation of encryption and authentication protocols to protect data and system integrity. Moreover, security measures should ensure that the AI system can only be accessed and operated by authorized personnel, for example through access controls, two-factor authentication, and secure communication protocols. Finally, regular security audits and penetration testing should be conducted during operation to identify and address emerging threats or vulnerabilities, and anomalies in system behavior should be monitored and flagged for further investigation.
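
As a small illustration of the access-control and auditing side, the sketch below shows a role-based authorization check that records every attempt to an audit log. The roles, permissions, and logger name are illustrative assumptions; a production deployment would integrate with the organization’s identity provider and multi-factor authentication.

```python
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_system.audit")

# Illustrative role-to-permission mapping.
PERMISSIONS = {
    "operator": {"run_inference"},
    "ml_engineer": {"run_inference", "update_model"},
    "auditor": {"read_logs"},
}

def authorize(user, role, action):
    """Allow an action only for authorized roles, and record every attempt."""
    allowed = action in PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    if not allowed:
        raise PermissionError(f"{user} ({role}) may not perform {action}")
    return True
```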


Compliance and Documentation


Compliance with the EU AI Act involves more than just technical robustness and safety. Providers must maintain comprehensive documentation of the AI system’s development, testing, and analysis to demonstrate compliance with the Regulation. This includes documenting the identification, reduction, and mitigation of reasonably foreseeable risks, as well as any remaining non-mitigable risks after development.
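
One practical way to keep such documentation machine-readable is to maintain a structured risk register alongside the technical file, as sketched below. The field names and the example entry are illustrative and not a format prescribed by the Act.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class RiskRecord:
    """One entry in a risk register kept alongside the technical documentation."""
    risk_id: str
    description: str
    mitigation: str
    residual_risk: str                            # e.g. "acceptable", "needs monitoring"
    evidence: list = field(default_factory=list)  # links to test or validation reports

register = [
    RiskRecord(
        risk_id="R-001",
        description="Reduced sensitivity on under-represented patient groups",
        mitigation="Re-balanced training data; per-subgroup acceptance tests",
        residual_risk="acceptable",
        evidence=["validation_report_v3.pdf"],
    ),
]

with open("risk_register.json", "w") as f:
    json.dump([asdict(r) for r in register], f, indent=2)
```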


Penalties for Non-Compliance


Non-compliance with the requirements of the EU AI Act concerning high-risk AI systems can lead to severe penalties: administrative fines of up to EUR 20 million or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.


Conclusion


Organizations that develop or deploy AI systems within the EU must carefully assess whether their systems fall under the high-risk category and take the necessary steps to ensure compliance with the technical robustness and safety requirements of the Regulation. Failure to comply with these requirements may result in significant penalties, highlighting the importance of a proactive and thorough approach to compliance with the EU AI Act.


CertX, as an accredited certification body with experts in AI, cybersecurity, and functional safety, can be an invaluable partner. With years of experience in high-risk and safety-critical domains, we can provide expert guidance and support to organizations in assessing the risks associated with their AI systems, implementing the necessary measures to ensure compliance with the Act, and ultimately obtaining the required certifications.