Final Approval of Ground-breaking EU AI Act
On May 21, 2024, the Council of the European Union gave final approval to the EU Artificial Intelligence Act (AI Act), a milestone in the regulation of artificial intelligence. The regulation, which runs to more than 420 pages, seeks to establish a “global standard for AI regulation” built on trust, transparency, and accountability. The EU’s pioneering act is sector-agnostic and extra-territorial in reach: it regulates the development and use of general-purpose AI models as well as high-risk and lower-risk AI systems, and it prohibits certain AI practices outright. The enactment of this comprehensive law underscores the EU’s commitment to fostering the adoption of safe and trustworthy AI, although implementing its provisions may present substantial challenges for stakeholders.
Extra-territorial Scope Encompassing Various Participants in the AI Value Chain
The AI Act defines “AI systems” broadly, mirroring the definition adopted by the Organisation for Economic Co-operation and Development (OECD) in its principles for trustworthy AI, and it separately regulates “general-purpose AI models,” a category that covers the increasingly prevalent large language and foundation models. Notably, the Act applies to any entity that places AI systems on the EU market or deploys them within the bloc, regardless of where that entity is located. This marks a significant expansion of regulatory oversight, reaching a wide range of actors across the AI value and supply chain, including product manufacturers, importers, and distributors. Specific exemptions apply, however, such as for AI systems used solely for military purposes or developed exclusively for scientific research.
Varied Obligations Based on AI System Risk Levels
Employing a risk-based approach, the AI Act delineates obligations for AI systems according to identified risk levels:
- Unacceptable risk: The Act prohibits AI practices deemed harmful or contrary to EU values, such as social scoring and manipulative techniques that threaten human dignity, freedom, and other fundamental rights.
- High-risk AI systems: These systems must meet robust data governance, risk management, and safety requirements, including mandatory registration in a public EU database. The high-risk category covers a wide range of applications, subject to certain narrow exemptions.
- Limited risk AI systems: Systems that interact with humans, perform emotion recognition, or generate synthetic content, among others, must comply with transparency obligations that give users pertinent information.
- Minimal risk: For AI systems posing minimal risk, no mandatory requirements are imposed, but adherence to voluntary codes of conduct is encouraged.
In addition, all providers of general-purpose AI models are subject to new obligations, with stricter requirements for models identified as posing systemic risk. These include performing model evaluations, mitigating potential risks, reporting serious incidents, and ensuring adequate cybersecurity.
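For organizations mapping these tiers into internal compliance tooling, the scheme lends itself to a simple lookup structure. The Python sketch below is purely illustrative: the tier names and obligation strings are simplified paraphrases of the categories described above, not legal text, and the `RiskTier` and `OBLIGATIONS` identifiers are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited AI practices
    HIGH = "high"                  # strict obligations, public registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # voluntary codes of conduct only

# Simplified paraphrase of the headline obligations per tier; not legal text.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "data governance",
        "risk management and safety protocols",
        "registration in the public EU database",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["voluntary codes of conduct encouraged"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(obligations_for(tier))}")
```

A real compliance assessment would of course turn on the Act’s detailed definitions and exemptions, not a four-way lookup.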
New Regulatory Bodies and Enforcement Mechanisms
To implement and oversee the AI Act, the European Commission has established a new AI Office responsible for evaluating general-purpose AI models and supporting national authorities in market surveillance. In addition, a European Artificial Intelligence Board, composed of representatives from EU member states, will provide advisory support.
The AI Act will take effect in phases: prohibitions on certain AI practices apply six months after the Act enters into force (expected around the end of 2024 or in early 2025), with most remaining provisions applying progressively through mid-2026, as sketched below. This landmark regulation may redefine the landscape of AI development and use within the EU and beyond, setting a precedent for global AI governance. As the EU navigates the complexities of this groundbreaking legislation, stakeholders stand at the threshold of a new era in AI regulation, balancing innovation with ethical and legal considerations.
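Because these milestones are defined as offsets from the date of entry into force (itself 20 days after publication in the EU Official Journal), the compliance timeline can be computed once that date is known. The sketch below assumes a hypothetical entry-into-force date of August 1, 2024; the offsets reflect the six-month and roughly two-year transition periods described above, plus the 12-month milestone the Act sets for general-purpose AI model obligations.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Advance a date by whole months (safe here because day-of-month is 1)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Hypothetical entry-into-force date: the actual date is 20 days after
# publication in the EU Official Journal.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Transition periods, in months after entry into force.
MILESTONES = {
    "Prohibitions on unacceptable-risk AI practices": 6,
    "Obligations for general-purpose AI models": 12,
    "Most remaining provisions": 24,
}

for label, months in MILESTONES.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```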