The European Union’s AI Act, first proposed by the European Commission in April 2021 and formally adopted in 2024, is characterized by a “risk-based approach.” This approach, favored across EU regulation, requires safeguards commensurate with the level of risk posed by different applications of artificial intelligence (AI). Much as the General Data Protection Regulation (GDPR) scales data-processing safeguards to risk, the AI Act imposes obligations on AI providers and deployers according to the risk category of their AI applications. The objective is to mitigate AI risks while fostering innovation, harnessing the transformative potential of the technology.
The Underpinning Philosophy of Risk-Based Regulation
The EU’s risk-based regulation model acknowledges the dual nature of technology—it can be both beneficial and risky, depending on its applications. By focusing on risk regulation, the AI Act aims to govern the usage rather than the technology itself. This approach is especially relevant for AI, an emerging technology with vast and varied applications, from trivial to existential.
Understanding Risk in the AI Act
The AI Act’s understanding of risk hinges on two primary definitions:
- Risk Definition: Article 3(2) defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm.”
- Product Presenting a Risk: As defined in Article 3(19) of Regulation (EU) 2019/1020 on market surveillance, this term refers to a product with the potential to adversely affect health, safety, or other public interests to a degree that goes beyond what is considered reasonable and acceptable in relation to its intended purpose.
The AI Act aims to promote human-centric and trustworthy AI, ensuring high protection levels for health, safety, and fundamental rights within the EU.
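The Article 3(2) definition can be read as a product of two factors: how likely a harm is and how bad it would be. The sketch below illustrates that reading in Python; the 0-to-1 probability scale, the 1-to-5 severity scale, and the scoring function are hypothetical conveniences for illustration, not anything the Act prescribes.

```python
from dataclasses import dataclass

@dataclass
class Harm:
    """Illustrative harm model: probability in [0, 1], severity on a hypothetical 1-5 scale."""
    probability: float
    severity: int

def risk_score(harm: Harm) -> float:
    """Article 3(2): risk combines the probability of an occurrence
    of harm and the severity of that harm - here, as a simple product."""
    return harm.probability * harm.severity

# Under this toy scoring, a frequent minor harm can outscore a rare severe one,
# which is one reason real classification also weighs context, not just a number.
rare_but_severe = Harm(probability=0.01, severity=5)
frequent_but_minor = Harm(probability=0.5, severity=1)
print(risk_score(rare_but_severe))    # 0.05
print(risk_score(frequent_but_minor)) # 0.5
```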
Categories of AI Systems Based on Risk Levels
The AI Act classifies AI systems into several risk categories:
- Prohibited AI Systems: These systems present an unacceptable level of risk and are banned outright. Examples include:
- Systems using subliminal techniques or social scoring.
- Predictive policing based solely on profiling.
- Untargeted facial recognition database expansion.
- Emotion recognition systems in workplaces and schools.
- Real-time remote biometric identification systems, with certain exceptions for law enforcement in specific scenarios.
- High-Risk AI Systems: Central to the regulatory framework, high-risk AI systems are identified by their intended purpose. Criteria for high-risk classification include:
- Systems intended as safety components of products covered by Union harmonization legislation.
- AI systems listed under Annex III, including those used in critical infrastructure, education, employment, law enforcement, and more.
- AI Systems with Transparency Risks: Article 50 mandates transparency obligations for providers and deployers of AI systems interacting with natural persons or generating synthetic content. This includes emotion recognition systems, biometric categorization, and systems generating deepfakes.
- General-Purpose AI Models: Addressed in Chapter V, these models serve a wide range of purposes and can be integrated into many different AI systems. General-purpose AI models are deemed to carry systemic risk when they have high-impact capabilities, presumed under Article 51 when the cumulative compute used for training exceeds 10^25 floating-point operations, or when designated by the Commission on the basis of other technical benchmarks.
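The tiered structure above can be pictured as a lookup from intended use to obligation level. The sketch below is purely illustrative: the tier names paraphrase the Act's categories, but the example use-case mapping is a hypothetical shorthand, since real classification requires legal analysis of the system's intended purpose against Article 5, Annex III, and Article 50.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high risk (Article 6 / Annex III)"
    TRANSPARENCY = "transparency obligations (Article 50)"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical keyword-to-tier mapping for illustration only; the Act
# classifies by intended purpose and context, not by keyword.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.PROHIBITED,
    "emotion recognition in the workplace": RiskTier.PROHIBITED,
    "CV screening for recruitment": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "chatbot interacting with users": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case; default to MINIMAL when unlisted."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("social scoring").value)             # unacceptable risk (Article 5)
print(classify("CV screening for recruitment").value)  # high risk (Article 6 / Annex III)
```

The asymmetry of the tiers is the design point: obligations concentrate on the narrow prohibited and high-risk bands, leaving most AI systems in the minimal tier.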
Practical Implications and Compliance
Implementing the AI Act’s provisions will be complex, given the detailed classification of AI systems and the specific obligations tied to each risk category. Tools like Connected Risk from Empowered Systems can aid in modeling and managing AI risks effectively. Connected Risk offers a comprehensive platform for understanding and mitigating the risks associated with AI, ensuring compliance with the AI Act and promoting safe and innovative AI deployment.
Conclusion
The EU AI Act exemplifies the sophistication of the risk-based approach to regulation. By categorizing AI systems based on their risk levels and stipulating corresponding obligations, the Act aims to balance the promotion of AI innovation with the need to safeguard public interests. As AI technology continues to evolve, tools like Connected Risk will be invaluable in navigating the complexities of compliance and ensuring that AI systems are deployed safely and responsibly.
This guide has offered a deep dive into the EU AI Act's risk-based framework and practical examples of how different AI systems are regulated, so that organizations can navigate the regulatory landscape effectively while fostering AI innovation.