Robustness in Artificial Intelligence systems, defined as the ability to maintain performance under various perturbations and adversarial inputs, is a critical aspect of Artificial Intelligence reliability and resilience. This article explores the concept of robustness, its implications within the context of the European Artificial Intelligence Act, and the connection between explainability and robustness. Furthermore, the article provides recommendations for fostering robust practices and highlights the role of CertX in promoting and certifying robust Artificial Intelligence systems.
Introduction
With the rise of Artificial Intelligence (AI) systems and their pervasive impact on different industries, achieving robustness in AI has become crucial. As AI models become more complex, understanding their properties and limitations is of utmost importance to ensure reliability and improve resilience against potential vulnerabilities. We are far removed from the early days of AI, and ensuring robustness now requires a holistic approach, addressing not only the algorithm and model but also the whole system it operates within. The European Artificial Intelligence Act (EU AI Act) is an important regulatory step that sheds light on the need for AI robustness and establishes key principles to guide industries in promoting, developing, and implementing robust AI systems.
In this blog post, we will discuss the concept of robustness in AI systems and its implications in the context of the European Artificial Intelligence Act. We will also explore the relationship between explainability and robustness and provide some suggestions for fostering robust AI practices.
Defining Robustness in Artificial Intelligence Systems
Robustness refers to the ability of an AI system to maintain its performance under various perturbations and adversarial inputs.
According to the 2020 EASA and Daedalean report, this quality encompasses two main aspects that are foundational in AI applications (both are illustrated in the sketch below):
– Algorithm robustness: measures how robust the learning algorithm is to changes in the underlying training dataset;
– Model robustness: quantifies a trained model’s robustness to input perturbations.
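To make these two notions concrete, the following minimal sketch estimates both on a synthetic classification task with scikit-learn; the choice of classifier, the number of bootstrap resamples, and the noise levels are purely illustrative assumptions, not prescribed values.

```python
# Minimal sketch: empirical proxies for algorithm robustness and model robustness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

# Algorithm robustness: retrain on bootstrap resamples of the training set
# and observe how much test accuracy varies across the resulting models.
accuracies = []
for _ in range(10):
    idx = rng.integers(0, len(X_train), len(X_train))
    model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    accuracies.append(model.score(X_test, y_test))
print(f"algorithm robustness proxy (accuracy spread): {np.ptp(accuracies):.3f}")

# Model robustness: take one trained model and measure how accuracy degrades
# as Gaussian noise of increasing magnitude is added to the test inputs.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for sigma in (0.0, 0.1, 0.5, 1.0):
    noisy = X_test + rng.normal(0.0, sigma, X_test.shape)
    print(f"sigma={sigma:.1f}  accuracy={model.score(noisy, y_test):.3f}")
```

A large accuracy spread across resamples points to an unstable learning algorithm, while a steep accuracy drop under noise points to a brittle trained model.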
According to Atkinson, Riani, and Cerioli (2010), the term “robust” was first introduced for procedures that are less affected by outliers, and ever since, the meaning and scope of robustness have evolved to encompass various aspects.
Nowadays this outlier-centred notion is no longer sufficient on its own, and robustness cannot be reduced solely to handling data outliers. Take, for example, the case of adversarial attacks, where carefully crafted perturbations are applied to a model’s inputs to exploit its vulnerabilities and induce misclassifications or erratic behaviors. These adversarial examples can be considered latent feature space outliers rather than data outliers, since they exploit vulnerabilities in the model’s learned latent representations rather than being anomalous data points.
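As a concrete, hedged illustration, the sketch below uses the fast gradient sign method (FGSM) in PyTorch to craft such a perturbation; the classifier `model`, the input tensor `x`, and its integer `label` are assumed to already exist, and the perturbation budget `epsilon` is an illustrative value.

```python
# Minimal FGSM sketch (hypothetical model and inputs): the input is nudged,
# within an L-infinity budget epsilon, in the direction that maximises the
# classification loss, which is often enough to flip the model's prediction.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of x (values assumed in [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)   # label: tensor of class indices
    loss.backward()
    # Step along the sign of the input gradient, then clip back to the valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

The perturbed input is typically indistinguishable from the original to a human observer, which is precisely why such examples behave as latent feature space outliers rather than as obvious anomalies in the data.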
This subtle example illustrates the complexity involved in achieving robustness in AI systems today. Designing AI models that are resilient to both internal and external perturbations, and understanding the whole system’s robustness and its dependencies on various factors, is critical to ensuring the trustworthy and reliable operation of AI applications.
Robustness in the EU Artificial Intelligence Act
The European AI Act emphasizes the importance of technical robustness and safety for high-risk AI systems. These systems are required to meet an appropriate level of accuracy, robustness, and cybersecurity in accordance with the generally acknowledged state of the art (Committee 2022).
However, the EU AI Act focuses more on system robustness than on model robustness, pointing out that AI systems should be resilient against errors, faults, inconsistencies, and malicious actions that may compromise their functioning, safety, and fundamental rights (Committee 2022).
Despite this emphasis on the system level, it is essential to recognize the correlation between model robustness and system robustness, as poor robustness in one can negatively affect the other. Therefore, a comprehensive approach to robustness in AI should take both aspects into account, addressing vulnerabilities and ensuring resilience at all levels.
Navigating the state of the art of robust AI can nevertheless be daunting, as a variety of techniques and approaches, from adversarial training to formal verification, have been proposed to improve the robustness of AI systems. This wide range of options can confuse practitioners and stakeholders, making it difficult for them to decide which methods are most appropriate for their particular use cases.
Common sense suggests that adopting a multi-faceted approach is likely to be more effective than relying on a single technique. By combining different robustness-enhancing strategies, companies can maximize the benefits of having a more robust AI system while minimizing potential vulnerabilities and risks.
Impact of Robust Artificial Intelligence on Industries
Robust AI systems can provide various advantages, such as improved performance, greater resilience against adversarial attacks, and a reduced likelihood of system failures. Moreover, explainability and robustness are closely connected, as understanding a system’s mechanisms can help in guaranteeing its reliability (Centre. 2020).
Some “low-hanging fruits” that companies can adopt to foster robust AI practices include:
1. Implementing adversarial training: this involves training neural networks against an adversarial model of attack, making them more robust to adversarial examples;
2. Leveraging formal verification methods to derive worst-case robustness bounds (a minimal sketch follows this list);
3. Ensuring that performance metrics used for assessing model robustness capture relevant properties, such as adversarial robustness or generalization.
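As a hedged illustration of point 2, the following sketch applies interval bound propagation (IBP), one of the simplest formal verification techniques, to a toy two-layer ReLU network with made-up weights; it is a didactic example under those assumptions, not a production verifier, and dedicated verification tools go considerably further.

```python
# Minimal interval bound propagation (IBP) sketch on a toy ReLU network with
# illustrative weights: it computes guaranteed output bounds for every input
# within an L-infinity ball of radius eps around a given point.
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate elementwise [lower, upper] bounds through the affine map W x + b."""
    centre, radius = (upper + lower) / 2.0, (upper - lower) / 2.0
    new_centre = W @ centre + b
    new_radius = np.abs(W) @ radius
    return new_centre - new_radius, new_centre + new_radius

# Toy two-layer network: 2 inputs -> 2 hidden ReLU units -> 1 output score.
W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])
W2, b2 = np.array([[0.7, -1.2]]), np.array([0.0])

x = np.array([0.5, 0.5])
eps = 0.1                                  # admissible input perturbation
lo, hi = x - eps, x + eps                  # input box around x

lo, hi = ibp_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
lo, hi = ibp_affine(lo, hi, W2, b2)
print(f"output guaranteed to stay in [{lo[0]:.3f}, {hi[0]:.3f}] for any perturbation up to {eps}")
```

If the guaranteed output interval never crosses the decision boundary, the prediction is certified to be stable for every perturbation within the stated budget; item 1, in turn, can be realised by folding perturbations such as the FGSM step sketched earlier into the training loop.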
By following these steps, companies can improve the stability, reliability, and security of their AI systems, leading to numerous benefits in the long run.
The Role of CertX in Promoting Robust Artificial Intelligence
CertX provides assessment and certification services for AI systems. By offering relevant training programs and expert guidance, we aim to unlock the potential of Robust Artificial Intelligence. Our services encompass the following areas:
1. Professional training and development programs focused on robust AI concepts, techniques, and best practices;
2. Expert advice and support for the implementation of tailored robustness improvement strategies;
3. In-depth assessments to evaluate the robustness and compliance of AI systems with the European Artificial Intelligence Act;
4. Collaboration with industry stakeholders, regulators, and academia to advance the state of the art in robust and explainable AI.
As Artificial Intelligence continues to revolutionize industries, it’s critical to ensure that AI systems are robust and reliable. Through our expertise and services, CertX is committed to helping organizations navigate the complexities of implementing robust AI and complying with the European Artificial Intelligence Act requirements. Together, we can unlock the full potential of Robust Artificial Intelligence and pave the way for a more resilient, secure, and innovative future.
References
Atkinson, Anthony C., Marco Riani, and Andrea Cerioli. 2010. “The Forward Search: Theory and Data Analysis.” Journal of the Korean Statistical Society 39 (2): 117–34.
European Commission, Joint Research Centre. 2020. Robustness and Explainability of Artificial Intelligence: From Technical to Policy Solutions. Publications Office.
Permanent Representatives Committee. 2022. “Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts – General Approach.”
EASA, and Daedalean. 2020. “Concepts of Design Assurance for Neural Networks.”