
The EU’s “Third Way” to AI Regulation

With artificial intelligence (AI) becoming increasingly embedded in our everyday lives, there is a corresponding need for regulation that fosters responsible AI development and adoption. The question is how governments should approach such regulation.

Earlier this year, the European Commission published its proposed legal framework for AI. It is the first of its kind and is in keeping with what some have dubbed the EU’s “third way” approach that lies between a free-market U.S. model and an authoritarian and surveillance-oriented Chinese model of digital development and regulation.

What Technologies Are Covered?
The proposed law would apply to “AI systems,” a term the EU has defined broadly to cover software that (a) is developed with machine-learning, logic/knowledge-based or statistical approaches, and (b) can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with. The EU intends this definition to be technology-neutral and as wide-reaching as possible so that the proposed law remains relevant as technology changes.

Taking a Risk-Based Approach
The proposal takes a risk-based approach: AI systems are categorized into one of four levels of risk, each carrying a corresponding set of restrictions and requirements imposed on the use of such AI:

Unacceptable Risk. The proposed law blacklists technologies that create “unacceptable risk”; these would be banned in the EU. The “unacceptable risk” category includes AI systems used for the following practices:

    • subliminal techniques to materially distort a person’s behavior;
    • exploiting vulnerabilities of a specific group due to their age or physical or mental disability in order to materially distort their behavior;
    • social scoring by governments; and
    • live remote biometric identification systems (e.g., face recognition systems) in publicly accessible spaces used for law enforcement purposes, subject to narrow exceptions including (amongst others) targeted searches for specific potential victims of crime, including missing children.

High-Risk Systems. The proposed law would apply new controls to “high-risk” AI systems which may have an adverse impact on safety or fundamental rights. The “high-risk” category includes AI systems used in the following areas:

    • biometric identification and categorization of natural persons;
    • management and operation of critical infrastructure;
    • education and vocational training;
    • employment, worker management and access to self-employment;
    • access to and enjoyment of essential private services and public services and benefits (e.g., eligibility for unemployment assistance, credit checks, emergency services);
    • law enforcement;
    • migration, asylum and border control management; and
    • administration of justice and democratic processes.

The proposed law would introduce mandatory requirements for all high-risk AI systems, which could prove burdensome. These requirements would include the implementation of a risk management system, provisions on data and data governance, transparency requirements, human oversight, and a requirement that high-risk AI systems be designed and developed to achieve accuracy, robustness and cybersecurity.

In the event of a suspected breach, national authorities would have access to the information needed to investigate whether the AI system complied with the law, such as the training, validation and testing datasets used by the provider and, in some cases, the source code of the AI system itself. Where such information is insufficient to ascertain whether a breach has occurred, authorities may organize testing of the system through technical means. Information provided to national authorities in such instances would be subject to specific confidentiality obligations contained within the proposed law, which refer in particular to the need to protect intellectual property rights, confidential business information and trade secrets.

Core obligations would apply to providers of high-risk AI systems. These would include implementing quality management systems, ensuring high-risk AI systems comply with the proposed new law, and undergoing a mandatory conformity assessment procedure for such systems. Obligations would also be placed on importers and distributors, focused predominantly on ensuring that the provider and the high-risk system comply with their respective requirements. Importers, distributors, users and other third parties would become subject to the core provider obligations if they placed a high-risk AI system on the market under their own name or trademark, made substantial modifications to a high-risk AI system, or modified its intended purpose.

Limited Risk. For AI systems that fall within the “limited risk” category, specific transparency requirements would be imposed. This category covers AI systems that present a clear risk of manipulation, such as chatbots and deep fakes, where users need to know that they are interacting with a machine or viewing manipulated content.

Minimal Risk. All other AI systems could be developed and used subject to existing sector-specific legislation (such as product safety and medical device regulations) and privacy laws such as the GDPR and the ePrivacy Directive, without additional legal obligations. While the proposed law does not mandate additional requirements for minimal-risk systems, it encourages the drawing up of codes of conduct with the intention of fostering voluntary adherence. Providers of minimal-risk systems may choose to voluntarily apply the requirements for high-risk systems or to follow the guidelines for trustworthy AI produced by the EU Commission’s High-Level Expert Group on AI.
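To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how an organization might begin triaging its AI systems against the four categories described above. The tier names and example use cases are drawn from the text; the `classify_risk` helper and its lookup-table approach are hypothetical simplifications, not anything prescribed by the proposal.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers in the EU's proposed AI framework."""
    UNACCEPTABLE = "banned outright (e.g., social scoring by governments)"
    HIGH = "mandatory controls (e.g., credit checks, hiring tools)"
    LIMITED = "transparency duties (e.g., chatbots, deep fakes)"
    MINIMAL = "existing laws plus voluntary codes of conduct"

# Hypothetical mapping of example use cases (taken from the categories
# summarized above) to risk tiers. A real assessment would follow the
# law's annexes and legal advice, not a lookup table like this.
USE_CASE_TIERS = {
    "social scoring by government": RiskTier.UNACCEPTABLE,
    "live remote biometric ID for law enforcement": RiskTier.UNACCEPTABLE,
    "resume screening for hiring": RiskTier.HIGH,
    "credit eligibility scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify_risk(use_case: str) -> RiskTier:
    """Return the tier for a known use case, defaulting to MINIMAL."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

for case in USE_CASE_TIERS:
    tier = classify_risk(case)
    print(f"{case}: {tier.name} -> {tier.value}")
```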

Which Businesses Will Be Impacted?
The proposed law would apply broadly to public and private organizations both inside and outside the EU, so long as the AI system is placed on the EU market or its use affects people located in the EU. It would cover both providers (e.g., a developer of a resumé-screening tool) and users of high-risk AI systems (e.g., a bank buying that resumé-screening tool). It would not, however, apply to private, non-professional uses, such as an individual’s personal use of an automated tool to identify appropriate job opportunities and produce a first-draft resumé.

Penalties for Non-Compliance
The proposed law directs EU member states to lay down the rules on penalties applicable to infringements, but establishes that the maximum penalty for the use of banned AI systems and for breaches of the data and data governance requirements should not exceed €30M or 6 percent of total worldwide turnover (whichever is higher), and that penalties for breach of any other provision should not exceed €20M or 4 percent of total worldwide turnover (whichever is higher).

Supplying incorrect, incomplete or misleading information to notified bodies in response to requests for information would be subject to fines of up to €10M or 2 percent of total worldwide turnover (whichever is higher). When deciding on the amount of a fine in each individual case, national authorities must give due regard to the nature and gravity of the infringement. A blanket failure to comply with a request for information may therefore be treated more harshly than an incomplete response.
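For illustration, the “whichever is higher” formula is simply the maximum of a fixed amount and a percentage of total worldwide turnover. The short Python sketch below (with a hypothetical `penalty_cap` helper) walks through the arithmetic for the three fine tiers described above; it is an illustration of the math only, not legal guidance.

```python
def penalty_cap(fixed_eur: int, pct: float, turnover_eur: int) -> float:
    """Maximum fine: the higher of a fixed amount or a share of
    total worldwide turnover."""
    return max(fixed_eur, pct * turnover_eur)

# Fine tiers from the proposal: (fixed cap in EUR, share of turnover)
TIERS = {
    "banned AI / data governance breaches": (30_000_000, 0.06),
    "breach of any other provision": (20_000_000, 0.04),
    "incorrect or misleading information": (10_000_000, 0.02),
}

# Hypothetical example: a company with EUR 1 billion worldwide turnover.
turnover = 1_000_000_000
for breach, (fixed, pct) in TIERS.items():
    print(f"{breach}: up to EUR {penalty_cap(fixed, pct, turnover):,.0f}")
# At this turnover the percentage dominates every tier
# (e.g., 6% of EUR 1B = EUR 60M, which exceeds the EUR 30M fixed cap).
```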

A “Third Way”
As noted above, the proposed law reflects the EU’s “third way,” positioned between a more liberal U.S. model and a more authoritarian, surveillance-oriented Chinese model of digital development and regulation. It is notable that the proposal bans social scoring, a tool used mainly in China, which may signal that the EU wants to prevent AI from being used for authoritarian surveillance.

Some in the U.S. (including former CEO of Google Eric Schmidt) have criticized the EU for going its own way with AI regulation, arguing the EU should instead work closely with the U.S. to counterbalance China’s growth in this area. However, as was the case with the GDPR, if the proposal becomes law, it will likely serve as a blueprint for similar laws in the U.S. and countries around the world.

Things may well still shift, as the proposed law must now be debated and approved, a process expected to take more than 12 months. That said, the EU’s proposal is just one step in what will become a global effort to manage the risks associated with AI. Businesses and public organizations that produce, distribute or use AI systems should take steps now to identify the risks AI systems pose in their operations. This is especially true given the stiff penalties the EU has proposed for non-compliance. To comply with the proposed law and other regulation globally, many organizations will need to implement a comprehensive AI risk-management program with a clear governance structure. Understanding and documenting the AI systems in use, and the decisions made in developing those systems, will be key to complying with the new regulatory landscape.

