The European Union Artificial Intelligence Act presents an opportunity to develop comprehensive regulations for General Purpose Artificial Intelligence systems. Applying Operational Design Domains, originally used in autonomous vehicle safety engineering, as a regulatory mechanism can contribute to managing potential risks and promoting responsible Artificial Intelligence development. Operational Design Domains define the specific operational conditions and functionalities of AI systems, ensuring conformance with existing regulations, ethical principles, and human rights protection. Implementing an Operational Design Domains-based approach is essential to address the challenges raised by General Purpose AI systems and can help achieve accountability, transparency, and resilience across the AI product lifecycle. Enlisting the support of expert organizations like CertX is vital in defining, assessing, and improving AI systems within a robust regulatory framework, future-proofing AI technology investments.
In recent months, General Purpose Artificial Intelligence (GPAI) has gained significant attention as a transformative force within the technology landscape. As Large Language Models like ChatGPT, the new Bing, Bard, and newer entrants like European LAION’s Open Assistant emerge, it is essential to understand their implications within the EU Artificial Intelligence Act.
GPAI is a category acknowledging AI tools with multiple applications, such as generative AI models. These systems can perform several functions, including image and speech recognition, audio and video generation, pattern detection, question answering, and translation. They depart from conventional Machine Learning systems in the breadth of contexts in which they can be applied, as well as in their scale of use.
As these tools become more ubiquitous, it is worth investigating where they stand in the European Union Artificial Intelligence Act.
The current debate
In November 2022, the Council of the European Union called for new provisions to account for situations where AI systems can be used for different purposes, in other words GPAI, and where this technology is subsequently integrated into another high-risk system (Committee 2022). As posed by Helberger and Diakopoulos (2023), these systems differ from conventional systems in application context and scale of use. Given this dynamic nature, it is no wonder that the EU proposed a rather different approach, departing from a categorization based solely on purpose in favour of a more holistic one. However, as pointed out in the open letter “Five considerations to guide the regulation of GPAI in the EU’s AI Act”, such a risk assessment should be carried out across the entire product lifecycle, rather than on usage alone as outlined in today’s amended version of the AI Act (AI Now Institute 2023).
In light of these considerations, it is clear that the approach suggested by Helberger and Diakopoulos (2023), namely one modelled on Article 34 of the Digital Services Act (DSA), may be sub-optimal. Under the DSA, large online platforms and large search engines must regularly monitor their algorithmic systems for any actual and foreseeable negative effects on fundamental rights and societal processes (European Parliament and Council of the European Union 2022).
Stating the provisions of the DSA makes clear where it falls short. Instead of mere monitoring, a GPAI system necessitates a thorough domain assessment, one that covers the entire value chain of the system itself. To give the reader a perspective of what constitutes the complete picture of a GPAI, we can refer to the figure below, which illustrates the W-shaped development cycle for learning assurance proposed by the European Union Aviation Safety Agency (EASA and Daedalean 2020).
Operational Design Domains as a General Purpose Artificial Intelligence assessment tool
This type of assessment can benefit from best practices in the automotive safety engineering field. More specifically, in autonomous driving such inspections are carried out through Operational Design Domains (ODDs) (NHTSA 2017).
ODD refers to the specific conditions under which a system or technology, like an Autonomous Vehicle (AV), is designed to function safely and efficiently. An ODD includes characteristics such as:
1. Geographic location: roads, highways, or regions where the system is intended to operate.
2. Environmental conditions: weather and light conditions such as daytime, nighttime, fog, rain, or snow.
3. Traffic conditions: types of other road users (vehicles, pedestrians, cyclists), traffic density, and road infrastructure.
4. Operational constraints: legal restrictions, speed limits, or other rules that the system must adhere to.
Defining an ODD helps ensure that a safety-critical system operates within its intended boundaries and is robust enough to manage potential risks associated with its use.
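To make the idea concrete, an ODD for an autonomous vehicle can be expressed as structured data with a simple containment check, so that the system can detect whether it is operating inside its declared boundaries. The sketch below is purely illustrative: the field names, conditions, and values are assumptions for demonstration, not drawn from any standard ODD schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalDesignDomain:
    """Illustrative ODD: the conditions under which an AV may operate."""
    regions: frozenset        # geographic locations, e.g. road types
    weather: frozenset        # permitted environmental conditions
    road_users: frozenset     # traffic participants the system handles
    max_speed_kmh: int        # operational constraint, e.g. legal limit

    def contains(self, region, weather, user, speed_kmh) -> bool:
        """True if the current situation lies inside the ODD."""
        return (region in self.regions
                and weather in self.weather
                and user in self.road_users
                and speed_kmh <= self.max_speed_kmh)

# A hypothetical highway-only ODD
highway_odd = OperationalDesignDomain(
    regions=frozenset({"highway"}),
    weather=frozenset({"clear", "rain"}),
    road_users=frozenset({"vehicle", "motorcycle"}),
    max_speed_kmh=120,
)

# Inside the ODD: highway, clear weather, another vehicle, 100 km/h
print(highway_odd.contains("highway", "clear", "vehicle", 100))  # True
# Outside the ODD: fog is not a permitted environmental condition
print(highway_odd.contains("highway", "fog", "vehicle", 100))    # False
```

The value of encoding the ODD explicitly is that an out-of-domain situation becomes a detectable event the system can respond to (for example, by handing control back to a driver), rather than an unmonitored failure mode.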
Using ODDs as a regulatory mechanism offers a more comprehensive approach to managing the risks associated with General Purpose AI systems. An ODD represents the specific set of operational conditions in which a technology performs as intended; the concept originated in autonomous vehicle regulation, where it defines a vehicle’s limitations and operating parameters.
Applying the concept of ODDs to General Purpose AI systems would require defining parameters and boundaries for each system’s functionality. This encompasses data inputs, algorithmic processes, and the scope of the AI system’s outputs. ODDs allow regulators to maintain a more granular understanding of an AI system and assess its risk potential within certain contexts.
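One way such boundaries could operate in practice is as a gate in front of the model: each request is checked against the declared ODD (permitted tasks, input languages, input scope) before processing, and out-of-domain requests are refused or escalated. The sketch below is a hypothetical illustration; the boundary names and values are assumptions, not an established API or regulatory schema.

```python
# Hypothetical ODD for a general-purpose language model deployment
ODD = {
    "tasks": {"translation", "summarisation", "question_answering"},
    "input_languages": {"en", "de", "fr", "it"},
    "max_input_chars": 10_000,
}

def within_odd(task: str, language: str, text: str, odd: dict = ODD) -> tuple:
    """Check a request against the declared operational boundaries.

    Returns (allowed, reason); requests outside the ODD are refused.
    """
    if task not in odd["tasks"]:
        return False, f"task '{task}' outside declared ODD"
    if language not in odd["input_languages"]:
        return False, f"language '{language}' outside declared ODD"
    if len(text) > odd["max_input_chars"]:
        return False, "input exceeds declared size boundary"
    return True, "within ODD"

print(within_odd("translation", "de", "Guten Tag"))  # (True, 'within ODD')
print(within_odd("medical_diagnosis", "en", "..."))  # refused: task outside ODD
```

Because the boundaries are declared as data, a regulator or auditor can inspect them directly, and each refusal carries a reason that can be logged, which supports the granular, context-specific risk assessment described above.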
Implementing an ODD-based approach can enhance the risk assessment process for General Purpose AI systems. It can help ensure that AI implementations are conformant with existing regulations, ethical principles, and human rights frameworks. Additionally, it can enable a fair distribution of responsibilities and liabilities among different stakeholders in the AI value chain.
However, defining ODDs for General Purpose AI models can be challenging due to their multifaceted functionality and flexibility. Thorough and dynamic assessments would be needed to reflect the nature of these systems and understand their impact across different application contexts. Nevertheless, ODDs can provide a structured way to manage the complexities of General Purpose AI by ensuring accountability, transparency, and resilience throughout the AI product lifecycle.
In conclusion, applying the concept of Operational Design Domains to General Purpose AI systems regulation within the European Union AI Act offers a comprehensive approach to managing potential risks and fostering responsible AI development. By defining and specifying the operational conditions, boundaries, and limitations of such systems, ODDs move beyond a purpose-based categorization and deliver a more granular understanding of the AI system’s context and impact.
The implementation of ODDs can lead to better conformance with existing regulations, ethical principles, and human rights protection, as well as enable a fair distribution of responsibilities and liabilities among AI stakeholders. While defining ODDs for General Purpose AI systems can be challenging, their integration into the regulatory framework can improve accountability, transparency, and resilience across the entire AI product lifecycle.
In this regard, CertX can help you and your partners in defining Operational Design Domains (ODDs) for your General Purpose AI systems, ensuring a safer and more responsible AI product lifecycle. With CertX’s Swiss quality and extensive expertise in functional safety, cybersecurity, and artificial intelligence, you can trust our tailored solutions for your business.
Don’t leave the safety and reliability of your AI systems to chance. Act now and enlist the aid of CertX’s experienced team. Together, we can develop, assess, and improve your AI systems within a robust regulatory framework, guaranteeing compliance with emerging regulations and ethical standards. Contact CertX today to learn more about our services or visit our website to explore how we can support your AI development journey. Secure your AI future with CertX!
Permanent Representatives Committee. 2022. “Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts – General Approach.” Council of the European Union. <https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf>.
EASA, and Daedalean. 2020. “Concepts of Design Assurance for Neural Networks.” Tech. rep. <https://www.easa.europa.eu/sites/default/files/dfu/EASA-DDLN-Concepts-of-Design-Assurance-for-Neural-Networks-CoDANN.pdf>.
Helberger, Natali, and Nicholas Diakopoulos. 2023. “ChatGPT and the AI Act.” Internet Policy Review 12 (1). Internet Policy Review, Alexander von Humboldt Institute for Internet and Society. <https://doi.org/10.14763/2023.1.1682>.
AI Now Institute. 2023. “General Purpose AI Poses Serious Risks, Should Not Be Excluded from the EU’s AI Act: Policy Brief.” AI Now Institute. <https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act>.
NHTSA. 2017. “Automated Driving Systems: A Vision for Safety.” DOT HS 812 442. Washington, D.C.: US Dept. of Transportation.
European Parliament and Council of the European Union. 2022. “Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and Amending Directive 2000/31/EC (Digital Services Act).” <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:02022R2065-20221019>.