Navigating New Frontiers: Understanding and Adapting to the EU’s AI Act for Enhanced Third-Party Risk Management

In a landmark move, the European Parliament has approved one of the world's first comprehensive regulations governing Artificial Intelligence (AI). Formally titled the “Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” the proposal first drew public attention in 2021, as much for its sweeping scope as for its notably lengthy title. Three years on, the legislation has been ratified by the European Parliament, charting a new course for how AI technologies are deployed and managed within the European Union. With enforcement slated to begin at the end of the May 2024 legislative session, the AI Act is poised to bring considerable change, particularly to how third-party risk management (TPRM) programs are run, and especially for international entities seeking to do business in Europe.

Understanding the EU AI Act

At its core, the EU AI Act is tailored to establish a structured framework for the oversight and compliance of AI within the EU. Its overarching aim is to introduce certain safeguards regarding AI usage in Europe, delineating the responsibilities of companies that either intend to develop AI tools or seek to implement AI technologies in their current operations.

The AI Act’s regulations are meticulously crafted to address several critical aspects:

  • Identification and mitigation of risks specifically associated with AI applications.
  • Enumeration of high-risk applications and setting forth stringent requirements for AI systems designated as high risk.
  • Definition of explicit obligations for both users and providers of high-risk AI applications.
  • Introduction of a conformity assessment prerequisite for AI systems before their deployment or market introduction.
  • Establishment of enforcement mechanisms post-market placement of AI systems.
  • Formation of a governance framework at both the European and national levels.

Particularly noteworthy is the legislation’s approach towards certain AI use cases, which are either outright banned or subjected to rigorous regulation within the EU. These include biometric categorization based on sensitive characteristics, indiscriminate facial recognition data scraping, emotion recognition in work and educational settings, social scoring, predictive policing based on profiling, and AI systems designed to manipulate human behavior or exploit vulnerabilities, among others.

Categorizing AI Risks

The EU’s legislative framework adopts a risk-based methodology, delineating four categories of AI risks:

  1. Unacceptable Risk: AI systems considered a definitive threat to EU citizens’ safety, livelihoods, and rights are banned.
  2. High Risk: AI systems in crucial societal roles must comply with stringent requirements, including risk assessment and mitigation, data quality assurance, traceability, and detailed documentation.
  3. Limited Risk: AI applications with minor impact must meet transparency obligations, allowing users informed interaction.
  4. Minimal Risk: This category covers AI systems with negligible risk implications, constituting the majority of AI applications within the EU, such as video games and spam filters.
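For TPRM teams cataloging vendor AI systems, the four-tier scheme above can be modeled directly in an assessment tool. The sketch below is illustrative only: the tier names come from the Act, but the example use-case mapping and the default-to-high fallback are hypothetical design choices, not legal guidance.

```python
from enum import Enum

class AIRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # stringent requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping of example use cases to tiers, for illustration
# only -- actual classification requires legal analysis of the Act.
EXAMPLE_USE_CASES = {
    "social scoring": AIRiskTier.UNACCEPTABLE,
    "predictive policing based on profiling": AIRiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": AIRiskTier.UNACCEPTABLE,
    "customer service chatbot": AIRiskTier.LIMITED,
    "spam filter": AIRiskTier.MINIMAL,
    "video game": AIRiskTier.MINIMAL,
}

def classify_use_case(description: str) -> AIRiskTier:
    """Look up a known example use case; default to HIGH so that
    unfamiliar systems trigger a full review rather than slipping
    through the assessment unexamined."""
    return EXAMPLE_USE_CASES.get(description.lower().strip(), AIRiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a conservative choice for a screening tool: it forces a human review instead of silently treating an unclassified vendor system as low risk.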

Implications for Third-Party Risk Management

For businesses, particularly those with ties to third-party vendors or suppliers within the EU or those seeking to engage in European markets, the enactment of the AI Act heralds significant compliance imperatives. Similar to the GDPR’s data privacy mandates, the AI Act necessitates adherence to specified transparency and regulatory standards, impacting vendor and supplier interactions profoundly.

The potential penalties for non-compliance—up to 7% of global revenue or €35 million ($38 million), whichever is higher—underscore the critical nature of adapting to these new regulations. This adjustment involves not only incorporating AI-related inquiries into vendor assessments but also aligning one’s operational and AI development practices with both the AI Act and other pertinent European technology laws.
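The “whichever is higher” rule above is simple arithmetic, and working it through shows why large firms face the percentage-based figure while smaller firms hit the fixed floor. The firm revenues below are hypothetical examples, not figures from the Act.

```python
def max_ai_act_fine(global_revenue_eur: float) -> float:
    """Maximum fine under the AI Act's top penalty band:
    the greater of 7% of global revenue or EUR 35 million."""
    return max(0.07 * global_revenue_eur, 35_000_000.0)

# Hypothetical firm with EUR 1 billion in global revenue:
# 7% of 1e9 = EUR 70 million, which exceeds the EUR 35 million floor.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0

# Hypothetical smaller firm with EUR 200 million in revenue:
# 7% = EUR 14 million, so the EUR 35 million floor applies instead.
print(max_ai_act_fine(200_000_000))   # 35000000.0
```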

Navigating the Future of AI Governance

As AI continues to be an integral part of operational strategies across the globe, the EU AI Act represents a significant step towards ensuring that this integration proceeds in a manner that is safe, transparent, and respectful of individual rights and societal norms. For third-party risk managers and businesses at large, the period leading up to the AI Act’s full applicability in 2026 will be crucial for adjusting strategies, enhancing compliance measures, and fostering a more conscientious approach to AI utilization and management.

In essence, the EU AI Act not only sets a precedent for AI regulation but also offers a blueprint for managing the complex interplay between technological innovation and ethical governance. As companies and regulatory bodies navigate this new terrain, the focus will inevitably shift towards creating AI applications that not only drive progress but also safeguard the fundamental rights and well-being of individuals and communities alike.
