The Council of the European Union and the European Parliament reached a provisional agreement on a new comprehensive regulation governing AI, known as the “AI Act,” late on Friday night (December 8, 2023). While the final agreed text has not yet been published, we have summarized what are understood to be some of the key aspects of the agreement.
To recap, the AI Act classifies AI systems by risk level, imposing stringent monitoring and disclosure requirements on high-risk applications, while systems unlikely to cause serious risks are exempt from certain of those requirements. For more information, see our earlier briefing on the AI Act here.
Following the latest negotiations, the EU lawmaking bodies have reached a provisional agreement on the rules that will be included in the AI Act. In particular, they have reported that:
- Certain unacceptable-risk AI systems will be banned from the EU marketplace;
- Biometric identification systems will be permitted in public areas for law enforcement purposes (a contentious topic in the negotiations) but will be subject to certain safeguards;
- High-risk AI systems will be subject to revised fundamental rights impact assessments and transparency obligations;
- General purpose AI systems (systems that can be used and adapted to a wide range of applications) will be subject to specific transparency and other obligations;
- An EU “AI Office” and EU “AI Board” will be created and the establishment of regulatory sandboxes will be encouraged;
- Citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights; and
- The final text is expected to be adopted in early 2024, with a 24-month implementation period for most obligations (with reduced implementation periods applying to unacceptable-risk and high-risk systems).
A more detailed discussion of the key points coming out of the latest negotiations follows below.
Definitions and scope
Lawmakers have refined the definition of AI systems, adopting an approach in line with the OECD’s criteria. This is intended to improve the precision with which AI systems can be identified, and to distinguish them from simpler software systems (a key criticism of the initial text proposed by the European Commission). Hopefully, the agreed-upon definition will be precise enough to avoid capturing systems that the AI Act is not intended to regulate, while being flexible enough to capture new developments as technology evolves.
The AI Act is reported to also contain exemptions for systems used for research activities and AI components provided under open-source licenses, intended to stimulate AI innovation and support small businesses.
Under the AI Act, a wide range of high-risk AI systems would reportedly be authorized, but subject to a set of requirements and obligations to gain access to the EU market (such as a requirement to undertake impact assessments and meet transparency obligations). The Council has stated that these requirements have been clarified and adjusted in such a way that they are more technically feasible and less burdensome for stakeholders to comply with, for example as regards the quality of data, or in relation to the technical documentation that should be drawn up by SMEs to demonstrate that their high-risk AI systems comply with the requirements.
Some uses of AI are reported to be banned by the AI Act, including (amongst others): (i) cognitive behavioral manipulation; (ii) the untargeted scraping of facial images from the internet or CCTV footage; (iii) emotion recognition in the workplace and educational institutions, with some exceptions (such as detecting if a driver is falling asleep); (iv) social scoring; (v) biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion and/or political orientation); and (vi) some cases of predictive policing systems (based on profiling, location and/or past criminal behavior).
General purpose AI systems and foundation models
The concepts previously introduced by the European Parliament have reportedly been retained in the agreed version of the AI Act, namely:
- General Purpose AI System. An AI system that can be used in or adapted to a wide range of applications for which it was not intentionally and specifically designed; and
- Foundation Model. An AI model trained on broad data at scale, designed for its generality of output and that can be adapted to a wide range of distinctive tasks, such as generating video, text, or images, conversing in natural language, computing, or generating computer code.
According to the European Council, foundation models will need to comply with specific transparency obligations before they are placed on the EU market, and a stricter regime will be introduced for “high-impact” foundation models. High-impact foundation models are those that are trained with a large amount of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain. Factors that may be taken into account when designating high-impact models include the number of floating point operations (FLOP), number of users and other attributes.
The AI Act will provide for the establishment of an AI Office at the EU level tasked with overseeing the most advanced AI models, contributing to standards and testing practices, and enforcing common rules in all member states. A scientific panel of independent experts will advise the AI Office about general-purpose AI models, by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models.
An AI Board (like the European Data Protection Board or EDPB) will also be created, composed of member states’ representatives. It will act as a coordination platform and an advisory body to the European Commission, working on the design of codes of practice for foundation models.
Law enforcement exceptions
The provisional agreement is reported to introduce tailored modifications for the use of AI in law enforcement, balancing the need for operational confidentiality with safeguarding fundamental rights. It will include an emergency procedure allowing law enforcement agencies to deploy high-risk AI tools that have not passed the conformity assessment procedure in urgent scenarios, alongside mechanisms to protect against misuse.
The European Council has also stated that the AI Act will permit the use of real-time remote biometric identification systems in publicly accessible spaces by law enforcement agencies in exceptional cases, including in relation to the location of victims of certain crimes; prevention of genuine, present, or foreseeable threats, such as terrorist attacks; and searches for people suspected of the most serious crimes. Additional safeguards will be in place such as the need for judicial authorization.
Measures in support of innovation
The AI Act is reported to encourage the establishment of regulatory sandboxes for the development, testing and validation of innovative AI systems before deployment. These AI regulatory sandboxes will allow testing of AI systems in real-world conditions, under specific conditions and safeguards. To alleviate the administrative burden for smaller companies, the provisional agreement includes a list of actions to be undertaken to support such operators and provides for some limited and clearly specified derogations.
Penalties
The administrative fines for violations of the AI Act are to be set as a percentage of the company’s global annual turnover or a fixed amount, whichever is higher. This will reportedly be €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information, with lower penalties for SMEs and startups.
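The “whichever is higher” mechanism can be illustrated with a short sketch. The tier figures are those reported above; the example turnover is purely hypothetical:

```python
# Illustrative sketch of the reported penalty tiers: the fine is the
# greater of a flat amount (EUR) and a percentage of global annual turnover.
# Tier figures are as reported; the example turnover below is hypothetical.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # EUR 35m or 7%
    "other_obligations": (15_000_000, 0.03),       # EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.015),   # EUR 7.5m or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a violation tier: the higher of the
    flat amount and the turnover-based percentage."""
    flat, pct = TIERS[violation]
    return max(flat, pct * global_turnover_eur)

# A company with EUR 2bn global turnover: 7% (EUR 140m) exceeds the
# EUR 35m flat amount, so the turnover-based figure applies.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```

For smaller companies the flat amount dominates: at €100m turnover, 3% is only €3m, so the €15m figure would apply for the middle tier (subject to the reduced caps reported for SMEs and startups).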
It is understood that, unlike Article 82 of the GDPR, the AI Act will not include an individual right to compensation. The European Commission did, however, publish its proposed draft AI liability directive (“AI Liability Directive”) on September 28, 2022, which is intended to introduce new rules specific to damages caused by AI systems and to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU. It is unlikely, though, that the AI Liability Directive will be agreed before the European Parliament elections in 2024.
Next steps
Work will now begin on finalizing the agreed text, which is expected to be formally adopted in early 2024. After the AI Act enters into force it is understood that a number of transition periods will apply:
- 6 months for the provisions banning certain uses of AI to apply;
- 12 months for the transparency and governance requirements applicable to high-risk AI systems and powerful AI models to apply; and
- 24 months for all other requirements of the AI Act.
However, the European Commission will be launching the AI Pact, seeking the voluntary commitment of industry to start implementing the requirements ahead of the legal deadline.