The European Parliament has passed one of the world’s first comprehensive regulations governing artificial intelligence (AI) technology and its applications. The Artificial Intelligence Act (AI Act), officially titled the “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” is a groundbreaking legislative framework that will shape how AI is used, governed, and regulated in the European Union (EU).
Initially proposed in 2021, the AI Act drew global attention due to its ambitious scope and the profound implications for companies developing or deploying AI-driven technologies. Now, three years later, it has been formally approved and is set to enter into force at the close of the current legislative session in May 2024, with most of its provisions becoming fully applicable 24 months after its official publication.
For businesses operating within or partnering with entities in the EU, the AI Act will have major implications for third-party risk management (TPRM), particularly for companies outside of Europe that want to engage in AI-related activities within its jurisdiction. This article provides a detailed breakdown of the AI Act and its potential impact on TPRM strategies.
What Is the EU AI Act?
The AI Act establishes a comprehensive governance and compliance framework for AI applications, ensuring that AI-driven technologies align with EU standards on safety, transparency, and accountability. The legislation introduces strict rules to regulate how AI is developed and used, with an overarching goal of safeguarding fundamental rights, mitigating risks, and fostering ethical AI innovation.
Core Objectives of the AI Act
The AI Act is designed to:
- Identify and mitigate risks posed by AI applications.
- Define high-risk AI applications and establish stringent requirements for their development and use.
- Enforce compliance obligations on AI providers and users.
- Establish a conformity assessment process before high-risk AI systems are deployed.
- Create a governance structure at the European and national levels for AI oversight.
The Act also includes clear restrictions on specific AI use cases deemed too risky or ethically problematic.
Banned and Highly Regulated AI Applications
The AI Act takes a risk-based approach, classifying AI systems into four tiers:
1. Unacceptable Risk (Banned AI Systems)
AI applications considered a clear threat to safety, fundamental rights, or democracy are strictly prohibited. These include:
- Biometric categorization based on sensitive characteristics (e.g., race, gender, religion).
- Untargeted facial recognition data scraping from online sources or CCTV footage.
- Emotion recognition in workplaces and schools.
- Social scoring systems that classify individuals based on behavior or personal characteristics.
- Predictive policing based solely on profiling or behavioral assessments.
- AI systems that manipulate human behavior or exploit vulnerabilities, such as AI-driven children’s toys encouraging harmful behavior.
2. High-Risk AI Systems
These AI applications must meet strict regulatory requirements before they can be deployed. Examples include AI used in:
- Employment and hiring processes.
- Law enforcement and border control.
- Access to education, credit, or public services.
- Critical infrastructure such as healthcare and transportation.
Organizations deploying high-risk AI must ensure:
- Robust risk assessments and mitigation strategies.
- High-quality datasets to prevent biased decision-making.
- Traceability through logging and documentation.
- Clear and transparent information for AI system users.
- Strong human oversight to prevent automation biases.
- Security, accuracy, and reliability measures.
Additionally, general-purpose AI models, such as the generative AI systems behind ChatGPT, will need to meet specific transparency standards, including:
- Disclosing that content was AI-generated.
- Preventing the generation of illegal content.
- Publishing summaries of copyrighted data used in training.
- Reporting serious AI-related incidents.
3. Limited Risk AI Systems
This category covers AI applications with moderate potential impact, such as AI-driven chatbots or recommendation systems. Organizations using these technologies must ensure transparency by informing users when they are interacting with AI.
4. Minimal or No Risk AI Systems
AI applications with negligible risk, such as spam filters or AI-powered video game mechanics, are largely unrestricted. These represent the majority of AI applications in current use.
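For TPRM teams that want to tag third-party AI systems against these tiers, a lightweight internal classification helper can be a useful starting point. The Python sketch below is purely illustrative: the tier labels, example use cases, and the default-to-high fallback are assumptions for demonstration, not an official classification method from the Act, and any real determination requires legal review.

```python
from enum import Enum


class AIRiskTier(Enum):
    """Illustrative labels for the AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # largely unrestricted (e.g., spam filters)


# Hypothetical mapping of example use cases to tiers, drawn from the
# categories described above; a real assessment would require legal review.
EXAMPLE_USE_CASES = {
    "social_scoring": AIRiskTier.UNACCEPTABLE,
    "emotion_recognition_in_workplace": AIRiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": AIRiskTier.HIGH,
    "credit_scoring": AIRiskTier.HIGH,
    "customer_support_chatbot": AIRiskTier.LIMITED,
    "email_spam_filter": AIRiskTier.MINIMAL,
}


def classify_use_case(use_case: str) -> AIRiskTier:
    """Return the illustrative tier for a known use case, defaulting to HIGH
    so that unknown systems receive extra scrutiny rather than none."""
    return EXAMPLE_USE_CASES.get(use_case, AIRiskTier.HIGH)


if __name__ == "__main__":
    for case in ("credit_scoring", "email_spam_filter", "unknown_vendor_model"):
        print(f"{case}: {classify_use_case(case).value}")
```

Defaulting unknown systems to the high-risk bucket deliberately errs on the side of more scrutiny, which mirrors how many risk teams treat unclassified vendor technology.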
Implications of the AI Act for Third-Party Risk Management (TPRM)
New Compliance Burdens for Third-Party Vendors
For businesses that rely on third-party AI vendors or suppliers in the EU, compliance with the AI Act will become an essential part of vendor due diligence and risk assessments. Much as the GDPR reshaped data privacy practices, companies that do business in Europe or use AI technologies sourced from EU-based providers will need to align with the Act’s transparency and accountability standards.
This means organizations will need to:
- Evaluate third-party vendors’ AI risk classifications.
- Assess AI models used in their supply chain for compliance risks.
- Incorporate AI-related questions into vendor risk assessments.
- Ensure vendors comply with AI transparency and data governance obligations.
- Prepare for steep non-compliance penalties, which can reach €35 million or 7 percent of global annual revenue, whichever is higher (a rough upper-bound calculation is sketched after this list).
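To put those penalty figures in perspective, the short calculation below takes the higher of €35 million and 7 percent of worldwide annual turnover. The turnover figure is hypothetical, and the actual fine depends on the type of violation; lower penalty tiers apply to other breaches.

```python
def max_ai_act_penalty(global_annual_turnover_eur: float) -> float:
    """Upper bound implied by the headline figures: the higher of
    EUR 35 million or 7% of worldwide annual turnover. Actual fines
    depend on the violation type, and lower tiers also exist."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)


# Hypothetical company with EUR 2 billion in global annual turnover.
print(f"Maximum exposure: EUR {max_ai_act_penalty(2_000_000_000):,.0f}")
```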
Given the broad definition of “high-risk AI,” organizations should increase scrutiny of vendors by asking specific questions about their AI models, data sources, and compliance measures. Industry-standard vendor risk assessments, such as the Standardized Information Gathering (SIG) and SIG Lite questionnaires, are already evolving to include AI-related content, and companies should ensure these assessments cover AI Act compliance.
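As one illustration of how AI-related questions might be folded into an existing vendor assessment, the sketch below defines a minimal set of due diligence fields and flags the gaps a reviewer might follow up on. The field names, review logic, and vendor are invented for this example and are not drawn from the SIG questionnaires or the text of the Act.

```python
from dataclasses import dataclass, field


@dataclass
class VendorAIAssessment:
    """Hypothetical AI-related fields appended to a vendor risk assessment."""
    vendor_name: str
    uses_ai: bool
    ai_use_cases: list[str] = field(default_factory=list)
    self_reported_risk_tier: str = "unknown"       # unacceptable / high / limited / minimal
    discloses_ai_generated_content: bool = False   # transparency obligation
    documents_training_data_sources: bool = False  # data governance / copyright summaries
    supports_human_oversight: bool = False
    logs_and_audit_trail: bool = False

    def open_findings(self) -> list[str]:
        """Return the gaps a TPRM reviewer would likely follow up on."""
        findings = []
        if self.self_reported_risk_tier in ("high", "unknown"):
            if not self.documents_training_data_sources:
                findings.append("No documentation of training data sources.")
            if not self.supports_human_oversight:
                findings.append("No evidence of human oversight controls.")
            if not self.logs_and_audit_trail:
                findings.append("No logging or traceability for AI decisions.")
        if self.uses_ai and not self.discloses_ai_generated_content:
            findings.append("AI-generated content is not disclosed to users.")
        return findings


# Example usage with a fictional vendor.
vendor = VendorAIAssessment(
    vendor_name="Acme Screening Co.",
    uses_ai=True,
    ai_use_cases=["cv_screening_for_hiring"],
    self_reported_risk_tier="high",
    supports_human_oversight=True,
)
for finding in vendor.open_findings():
    print(finding)
```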
Key AI-Related Risk Factors in TPRM
Beyond regulatory concerns, organizations must also account for several core risks when using AI-powered third-party solutions:
- Data Quality and Bias: AI models are only as reliable as the data they are trained on. Biased or low-quality data can exacerbate discrimination and lead to flawed decision-making.
- Transparency Challenges: Many AI models operate as black boxes, making it difficult to trace how decisions are made. Organizations must demand explainability and documentation.
- Cybersecurity and Privacy Risks: AI-driven tools can be prime targets for cyber threats and data breaches. Ensuring security controls and compliance with data protection laws is critical.
- Human Oversight Shortfalls: Overreliance on AI without adequate human monitoring can lead to errors or unintended consequences.
- AI Talent and Skills Gaps: Few professionals have deep expertise in AI governance, bias mitigation, and compliance, creating a major talent shortage.
Next Steps: Preparing for AI Compliance and Risk Management
The passage of the EU AI Act signals a shift in how AI technologies are regulated worldwide. As businesses increasingly integrate AI into their operations, governments will continue tightening oversight on AI governance, transparency, and risk management.
For third-party risk management teams, the key takeaways are:
- Review AI use within your vendor ecosystem and assess regulatory exposure.
- Update vendor risk assessments to include AI Act compliance checks.
- Monitor AI-related regulatory developments in other jurisdictions.
- Evaluate internal AI governance policies and implement responsible AI practices.
- Adopt industry standards, such as the NIST AI Risk Management Framework (AI RMF), to strengthen AI oversight.
The AI Act’s full implementation may still be two years away, but businesses should start preparing now. Adopting a proactive approach to AI governance, both internally and across third-party relationships, will help ensure a smoother transition into the new era of AI regulation.