Commentary

The EU path towards regulation on artificial intelligence

European Executive Vice-President Margrethe Vestager speaks at a media conference on the EU approach to Artificial Intelligence following a weekly meeting of EU Commission in Brussels, Belgium, April 21, 2021. Olivier Hoslet/Pool via REUTERS

Advances in AI are making their way into nearly every product and service we interact with. Our cars are outfitted with tools that trigger automatic braking, platforms such as Netflix proactively recommend what to watch, Alexa and Google can predict our search needs, and Spotify can recommend songs and curate playlists better than you or I can.

Although the advantages of AI in our daily lives are undeniable, people are concerned about its dangers. Inadequate physical security, economic losses, and ethical issues are just a few examples of the damage AI could cause. In response, the European Union is working on a legal framework to regulate artificial intelligence: the European Commission recently proposed its first legal framework on AI, the result of long and complicated work by the European authorities. The EU legislators' 2017 Resolution came first, followed in 2020 by the Commission's "White Paper on Artificial Intelligence" and its accompanying "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things, and Robotics." The European Parliament then issued a resolution containing recommendations to the European Commission.

In its Resolution of October 20, 2020 on a civil liability regime for artificial intelligence, the European Parliament acknowledged that the current legal system lacks specific rules governing liability for AI systems. According to the legislative body, the abilities and autonomy of these technologies make it challenging to trace harmful outcomes back to specific human decisions. As a result, a person who suffers damage caused by an AI system generally cannot be compensated without proof of the operator's liability. For this reason, the Resolution formulated a proposal, in Annex B, with recommendations to the European Commission; the proposal runs 17 pages and comprises five chapters and 15 articles.

Following the European Parliament's recommendations, on April 21, 2021, the European Commission presented its proposal for an AI legal framework: a 108-page document with nine annexes. The framework follows a risk-based approach, differentiating uses of AI according to whether they create an unacceptable risk, a high risk, or a low risk. A risk is unacceptable, and the corresponding use prohibited, if it poses a clear threat to people's safety and fundamental rights. As examples of unacceptable risk, the Commission identified uses of AI that manipulate human behavior and systems that enable social-credit scoring; the framework would prohibit, for instance, an AI system similar to China's social credit scoring.

The European Commission defined a high-risk system as one intended to be used as a safety component of a product, subject to a third-party conformity assessment. The concept of high risk is further specified in Annex III of the Commission's proposal, which lists eight areas. Among the high-risk AI systems in these areas are those related to critical infrastructure (such as road traffic and water supply), educational training (e.g., the use of AI systems to score tests and exams), safety components of products (e.g., robot-assisted surgery), and employee selection (e.g., resume-sorting software). AI systems that fall into the high-risk category are subject to strict requirements with which they must comply before being placed on the market: adequate risk assessment, traceability of results, clear information to users about the AI system, a guarantee of a high level of security, and adequate human oversight.

AI systems that pose a low risk must comply with transparency obligations: users need to be aware that they are interacting with a machine. For example, in the case of a "deepfake," where a person's image or video is manipulated to resemble someone else, those deploying the system must disclose that the content has been manipulated. The European Commission's draft does not regulate AI systems that pose little or no risk to European citizens, such as AI used in video games.
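To make the tiered structure concrete, the following is a minimal, purely illustrative Python sketch of how an organization might map the proposal's risk tiers to the obligations summarized above. The tier names, obligation strings, and function are paraphrases invented for illustration; they are not terms or categories defined in the draft regulation, and this is not legal guidance.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the Commission's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring, behavioral manipulation
    HIGH = "high"                  # e.g., Annex III areas such as exam scoring
    LOW = "low"                    # e.g., deepfakes, chatbots (transparency duties)
    MINIMAL = "minimal"            # e.g., AI in video games (not regulated)


# Hypothetical mapping from tier to obligations, paraphrasing the draft's
# requirements as described in this article; a simplification for illustration.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "adequate risk assessment",
        "traceability of results",
        "clear information to users",
        "high level of security",
        "adequate human oversight",
    ],
    RiskTier.LOW: ["transparency: disclose that users interact with a machine"],
    RiskTier.MINIMAL: ["no obligations under the draft framework"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    # Example: list the pre-market duties a high-risk system would face.
    for duty in obligations_for(RiskTier.HIGH):
        print("high-risk duty:", duty)
```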

In its framework, the European Commission adopts an innovation-friendly approach. Notably, the Commission supports innovation through so-called AI regulatory sandboxes for non-high-risk AI systems, which provide an environment that facilitates the development and testing of innovative AI systems.

The Commission's proposal represents an important step towards the regulation of artificial intelligence. As a next step, the European Parliament and the member states will have to adopt it. Once adopted, the new legal framework will be directly applicable throughout the European Union. The framework will have a strong economic impact on many individuals, companies, and organizations, and its effects could extend beyond the European Union's borders, affecting foreign tech companies that operate within the EU. From this point of view, the need for a legal framework on artificial intelligence appears crucial. AI systems have shown severe limitations in several cases, such as an Amazon recruiting system that discriminated against women, or a recent accident involving a Tesla car driving in Autopilot mode that caused the death of two men. These examples invite serious reflection on the need to adopt legal frameworks in jurisdictions other than the European Union.


Amazon and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations and conclusions in this piece are solely those of the authors and not influenced by any donation.
