On December 8, 2023, after months of negotiations, the European Commission, the European Parliament and the Council of the European Union reached political agreement on the terms of the European Union Artificial Intelligence Act (the “EU AI Act” or the “Act”). We previously covered the EU legislative process behind the Act here.
Importantly, the final text of the Act is not yet available and may still be amended as details are finalized. However, public statements from the Parliament, Council and Commission, among others, have confirmed that provisional agreement has now been reached on the most contentious aspects of this landmark legislation. This development, which came amid mounting speculation that disagreements between the Council and Parliament could derail the legislation altogether, means that the EU AI Act is back on track to come into force in early 2024.
If this timeline is met, the first provisions of the Act to come into force (the prohibition on “unacceptable risk” AI systems) will take effect in late 2024, followed by the requirements related to “high risk” systems in early 2025, and the remaining provisions in 2026. However, given the significant and wide-ranging impact that these changes will have, there are several steps that businesses may want to consider taking now in order to prepare.
What has been agreed?
The General Approach
The Act will maintain the risk-based tiers outlined in previous drafts, while (a) narrowing the universe of AI models and uses that will be subject to substantial regulatory burdens, (b) moving much of the burden of compliance away from model developers to users and vendors/providers, and (c) relying on transparency and disclosure to address certain key issues, including IP/copyright concerns.
Scope of the EU AI Act
The legislation will have broad extraterritorial reach, applying to the sale and use of AI systems that are available in, or affect individuals located in, the EU, including by non-EU entities that sell AI products into the EU. However, it remains unclear whether and how the Act will apply to the sale and use of AI products by EU entities outside of the EU.
As for what is covered by the term “AI”, the Act is expected to align with the OECD’s definition of an “AI system”, namely:
“a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
However, there will reportedly be an exemption from the Act, inter alia, for AI systems used solely for research, development and prototyping activities. No other final definitions have been confirmed or released at this time.
Risk-based approach
In line with earlier drafts, the final version of the EU AI Act is expected to divide AI systems into four categories:
- Prohibited (“Unacceptable Risk”) AI Systems. These are considered to pose a clear threat to the fundamental rights of people and will be outright prohibited in the EU. This ban is set to come into effect six months after the Act is passed. According to the latest Commission guidance, these systems are likely to include:
  - Emotion recognition systems used in the workplace/educational institutions, unless for medical or safety reasons;
  - Untargeted scraping of facial images from the internet or CCTV;
  - Individual predictive policing;
  - Systems that allow “social scoring” by governments or companies;
  - Biometric categorisation to infer sensitive data, such as sexual orientation, political opinions or religious beliefs; and
  - AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
- “High Risk” Systems. AI systems or applications that negatively affect the safety or fundamental rights of individuals will be permitted in the EU, but will be subject to additional requirements and obligations. It is currently unclear how much of the original definitions relating to the scope of high-risk AI have been preserved in the final agreement, but high-risk systems are expected to include:
  - Profiling of natural persons;
  - Biometric identification, categorisation and emotion recognition systems (outside the prohibited categories);
  - Creditworthiness evaluation of natural persons;
  - Risk assessment and pricing in relation to life and health insurance;
  - Medical devices;
  - Certain critical infrastructures, for instance in the fields of water, gas and electricity;
  - Systems to determine access to educational institutions or for recruiting people;
  - Certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes.
Similarly, limited details have been released on the scope and content of the additional requirements for high-risk systems. It is expected that certain users of high-risk AI systems will have to conduct a fundamental rights impact assessment prior to use of the system, or upon a substantial modification. High-risk AI system providers will also be required to complete a conformity assessment prior to putting the system into service. Other requirements are expected to include:
  - Registration in an EU database (for certain users);
  - Risk-mitigation systems;
  - Quality control measures (for both data and performance);
  - Recordkeeping (including ensuring that training data is traceable and auditable, and appropriately documented);
  - User instructions;
  - Human oversight and system monitoring;
  - Ensuring that high-risk systems are trained and tested with sufficiently representative datasets to minimise the risk of unfair bias;
  - Bias detection and correction measures, including ensuring that false positive/negative results do not disproportionately affect protected groups; and
  - Cybersecurity protections, including ensuring that the AI system is technically robust and fit-for-purpose.
- Systems subject to transparency requirements. Certain uses of AI, including generative AI, are likely to be subject to additional transparency requirements. These include, for example, an obligation to inform individuals when they are interacting with an AI system (e.g., a chatbot), a requirement that AI-generated content be labelled as such, and an obligation to inform users when biometric categorisation or emotion recognition systems are being used.
- Minimal risk systems. These are systems that present a minimal risk to individuals’ rights or safety, and this final category is likely to include most AI systems currently in existence. Systems that fall into it will not be subject to new restrictions or obligations under the EU AI Act, though obligations under other, existing laws may still apply (such as discrimination and data protection laws).
General purpose AI systems
A key development from the latest round of negotiations has been in relation to “general purpose” AI systems. The final version of the Act appears to adopt a narrow approach to the regulation of most general purpose AI systems – focusing on their compliance with copyright law and ensuring that they are not used to create illegal content.
The Act adopts a more stringent approach only in relation to a limited universe of “systemic” AI systems, a category which (as with the U.S. executive order) is delineated by reference to the magnitude of computing power used to train the model. Based on our current understanding of this test, it seems likely that very few, if any, current general purpose AI models will meet these thresholds. Nonetheless, providers of such models will be required to assess and mitigate risks, report serious incidents, conduct state-of-the-art tests and model evaluations, ensure cybersecurity and provide information on the energy consumption of their models. The AI Office will publish Codes of Conduct containing further details.
Penalties and enforcement
Member states will be empowered to lay down penalties, including administrative fines, for violations of the EU AI Act, based on the scale of the infringement and the size of the offending company:
- For the most severe violations in relation to “Prohibited Systems”: the higher of 7% of the undertaking’s global annual turnover in the last financial year or €35 million.
- For system & model provider violations: the higher of 3% of the undertaking’s global annual turnover in the last financial year or €15 million.
- For the supply of inaccurate information: the higher of 1.5% of the undertaking’s global annual turnover in the last financial year or €7.5 million.
However, the Council and Parliament have provisionally agreed that more proportionate penalty caps will be applied for breaches of the EU AI Act by small and medium-sized enterprises, and start-ups.
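To make the “higher of” mechanics concrete, here is a minimal worked sketch in Python using the figures reported above. The turnover figure is hypothetical, and the percentages and fixed amounts remain provisional until the final text is published:

```python
def penalty_cap(global_turnover_eur: float, pct: float, fixed_eur: float) -> float:
    """Reported cap: the higher of pct% of global annual turnover in the
    last financial year or a fixed amount (figures provisional)."""
    return max(global_turnover_eur * pct / 100, fixed_eur)

# Hypothetical undertaking with EUR 2 billion in global annual turnover:
turnover = 2_000_000_000
print(penalty_cap(turnover, 7.0, 35_000_000))   # prohibited-system violations: 140,000,000.0
print(penalty_cap(turnover, 3.0, 15_000_000))   # provider violations: 60,000,000.0
print(penalty_cap(turnover, 1.5, 7_500_000))    # inaccurate information: 30,000,000.0
```

For a smaller undertaking, the fixed amount would govern instead: at, say, €100 million in turnover, 7% is only €7 million, so the €35 million figure would apply.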
The enforcement of the Act at an EU level will be overseen by the EU Commission’s new “AI Office”. This new regulatory body will be responsible for overseeing the most advanced AI models, as well as contributing to fostering standards and testing practices, and will be supported on the technical side by a panel of independent scientific experts, an AI Board, and an AI Advisory Forum.
In addition, at a Member State level, each jurisdiction will designate one or more national competent authorities to supervise the application and implementation of the Act, as well as carry out market surveillance activities. To date, Spain is the only member state to have announced the creation of a dedicated AI regulator, with others simply opting to use existing regulatory bodies, including Data Protection Authorities.
How to prepare
As discussed above, the EU AI Act is expected to be passed into law in early 2024, with the first tranche of measures (the ban on “prohibited systems”) coming into force just six months later. As we await the final text of the legislation, there are a number of steps that businesses might still want to consider taking now:
- Monitor for developments. Given the high-level nature of the current press reports, we will not know the true scope and content of the obligations, and how onerous they are to comply with in practice, until we see the details of the final text. Businesses should monitor coverage of the Act’s progress through the final stages of the legislative process, both for further developments to its substantive content and for updates on its predicted completion date.
- Create an inventory of AI Systems subject to the EU AI Act. Businesses should consider reviewing their current AI systems to confirm whether they will fall within the jurisdictional and material scope of the EU AI Act, and therefore be subject to the law. Businesses can then maintain a list of AI systems covered by the Act, which can be analysed for compliance (a minimal illustrative sketch of such an inventory follows this list).
- Conduct a preliminary review of current AI systems. Businesses may want to consider conducting an initial review of their relevant AI systems to determine whether, on the basis of the currently available information, they would qualify as “prohibited”, “high” or “transparency” risk. This is particularly important for potentially prohibited systems, given the relatively short amount of time before they will be banned. Similarly, parent entities of group companies may also want to consider extending this review to any subsidiaries, especially those with a large European footprint. For example, this could be conducted through some form of AI survey, similar to those often used to manage cyber risk. Private equity firms may consider conducting a similar survey of their portfolio companies.
- Ensure technology procurement and investment teams are aware of the incoming requirements. The EU AI Act could have a significant impact on businesses’ technology procurement opportunities. Similarly, the Act will have an impact on the value of certain portfolio investment opportunities, especially if an entity’s products fall into the “prohibited” or “high” risk categories. To this end, businesses may want to ensure that this increased regulatory burden is kept in mind when scoping new procurement or investment opportunities. To achieve this, businesses may want to offer relevant employees a short training on the Act and AI risk more broadly.
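As referenced above, one simple way to operationalise the inventory step is a structured record per system. The sketch below is purely illustrative and in Python; the field names and risk tiers are our own assumptions based on the categories described in this post, not terms from the Act:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    # Provisional tiers as described above; final definitions are pending.
    PROHIBITED = "prohibited"
    HIGH = "high"
    TRANSPARENCY = "transparency"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI-system inventory (illustrative only)."""
    name: str
    vendor: str
    use_case: str
    offered_in_eu: bool                   # sold or made available in the EU?
    affects_eu_individuals: bool          # outputs affect people located in the EU?
    risk_tier: Optional[RiskTier] = None  # assigned after a preliminary review
    notes: list = field(default_factory=list)

    def likely_in_scope(self) -> bool:
        # Rough jurisdictional screen mirroring the extraterritorial reach
        # described above; proper legal analysis is still required per system.
        return self.offered_in_eu or self.affects_eu_individuals

# Example: a customer-service chatbot offered to EU users would pass the
# screen and could then be assessed against the transparency-tier duties.
chatbot = AISystemRecord("SupportBot", "Acme AI", "customer service chat",
                         offered_in_eu=True, affects_eu_individuals=True)
print(chatbot.likely_in_scope())  # True
```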
This post comes to us from Debevoise & Plimpton LLP. It is based on the firm’s article, “The EU AI Act: Political Agreement Secured, We Await the Final Text,” dated December 17, 2023, and available here. Melissa Muse and Samuel Thomson contributed to the article.