Provisional agreement between the Presidency of the Council and the European Parliament on the proposal laying down harmonized rules on artificial intelligence (AI)
Disclaimer: This article is based on official communications from the Parliament, the Council and the Commission, as well as the Commission’s updated Q&A through December 12, 2023, along with information on the recent European Union (EU) Artificial Intelligence Act. The content may change as subsequent agreements are reached.
The growing penetration of Artificial Intelligence (AI) across sectors has prompted regulation addressing ethics, privacy and security. In this context, on Friday, December 8, 2023, after months of intensive negotiations, the European institutions reached a provisional agreement on the most contentious aspects of the future Artificial Intelligence (AI) Regulation in Europe.
This regulatory framework has been under development since 2018, when the Commission presented the Communication on Artificial Intelligence for Europe to the European Parliament and other institutions (COM(2018) 237 final). In 2021, the Commission presented the first draft, followed by the Council’s version in 2022; finally, in 2023, the Parliament approved its text, incorporating environmental aspects and focusing on citizens’ rights. As the Regulation continues its legislative passage, updates to the Commission’s Q&A on AI and official press conferences by the Commission, Council and Parliament have revealed key aspects of the agreement, although numerous details remain undisclosed. Below is a summary of the information published to date.
I. DEFINITION AND SCOPE
The initial disagreement between the European institutions centered on the definition of the term “artificial intelligence system” and, consequently, on which companies the regulation would cover. While the Commission advocated a broad definition that would encompass most technology companies, the Council sought to restrict it to parameters that operators could easily circumvent. The Parliament, seeking to harmonize regulation at the international level, proposed a definition closely aligned with the OECD’s (itself subsequently updated), occupying an intermediate position between the pre-existing definitions. This definition was ultimately selected, so that an artificial intelligence system is defined as: “A machine-based system that, with explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may affect physical or virtual environments. Different artificial intelligence systems exhibit variations in their levels of autonomy and adaptability after implementation.”
The scope of application of the regulation maintains the exclusion of uses related to national security, the military, research, and strictly personal purposes. However, the Parliament’s proposal to exclude open-source systems does not appear to have advanced, and according to Carmen Artigas, certain obligations, not yet detailed, will be imposed on them. The territorial reach of the Regulation remains uncertain. Although the three institutions agreed that any system whose output is used in the Union would be subject to the Regulation, regardless of where the system is located, the Parliament added further criteria for applicability. In particular, it proposed to bar any company established in European territory from supplying prohibited systems outside the Union.
II. SPECIFIC PROHIBITIONS AND GUIDING PRINCIPLES
European legislation is characterized by the inclusion of specific prohibitions aimed at preserving fundamental rights. The regulatory framework incorporates a list of uses deemed to pose an unacceptable risk and therefore prohibited in the European Union (EU). During the trilogue negotiations, this list was expanded to include categories proposed exclusively by the Parliament.
The list of prohibited uses includes real-time remote biometric identification in public spaces for law enforcement purposes, biometric categorization to infer sensitive personal characteristics, untargeted scraping of facial images to build facial recognition databases, emotion recognition in workplace and educational settings, predictive policing systems, social scoring based on social behavior, and systems that manipulate human behavior to circumvent free will, as well as those that exploit vulnerabilities due to age, disability, or social or economic situation. The prohibition of artificial intelligence systems that manipulate human behavior and of real-time facial recognition for mass surveillance reflects a strong ethical orientation. It is important to note, however, that this list should not be read as definitive, since the final text will establish numerous exceptions and requirements that have not yet been detailed. These provisions serve as guiding principles at the intersection of technology and fundamental rights.
III. HIGH-RISK SYSTEMS
The categorization of artificial intelligence systems as “high-risk” in crucial sectors, such as healthcare, transportation and the administration of justice, entails more stringent regulatory requirements. These obligations cover areas such as risk management, data quality, human oversight and cybersecurity. The classification also includes systems used in recruitment processes, emotion recognition, biometric categorization, border control and democratic processes.
Although the final inventory of high-risk systems has not yet been disclosed, systems that perform narrow tasks improving the results of human activities, that do not affect human decisions, or that perform merely preparatory functions will not be considered high-risk. This clarification is significant and beneficial for the European innovation landscape. In addition, the Parliament’s proposal has been ratified, requiring deployers to carry out a fundamental rights impact assessment, alongside the personal data impact assessment where relevant. This assessment will be mandatory for high-risk systems and for those used in public services or by public entities. Likewise, citizens are granted the right to file complaints about artificial intelligence systems and the right to receive explanations of decisions based on high-risk systems, in line with Article 22(3) of the General Data Protection Regulation.
IV. GENERAL-PURPOSE SYSTEMS: REGULATION OF GPT-STYLE MODELS
In 2021, the Commission presented the first version of its rules to regulate Artificial Intelligence (AI), adopting a risk-based approach depending on how the technology is used. The subsequent emergence of general-purpose systems such as GPT, Meta’s LLaMA or Claude caught the Commission off guard, making it necessary to include specific measures in the Council and Parliament versions. These models, without having a specific high-risk purpose, can be used for a wide variety of tasks, such as generating text, images or code.
After extensive negotiations, marked by pressure from large technology companies and by the option of regulating these systems solely through voluntary codes, an agreement was reached to regulate these models. The intensity of that regulation will also depend on their classification as high-impact models, a determination based on factors such as capability or data volume, as described by Carmen Artigas. The final decision imposes two tiers of regulation: first, it establishes minimum requirements of transparency and respect for third-party intellectual property for all general-purpose systems; second, it imposes additional obligations on high-impact models, such as incident monitoring and adversarial testing. These additional obligations will not appear in the legal text itself but will be developed through “codes of conduct drawn up by industry, the scientific community, civil society and other interested parties in collaboration with the Commission”.
V. BIOMETRIC IDENTIFICATION
One of the most sensitive points in the negotiations has been the scope allowed for biometric identification cameras used by law enforcement agencies in public spaces to safeguard national security. These cameras may be used, subject to judicial authorization, to prevent a “genuine and foreseeable” or “genuine and present” terrorist threat, the latter meaning one occurring in real time. Their use will also be permitted to locate or identify individuals involved in crimes such as terrorism, human trafficking, sexual exploitation and environmental crimes, and to locate the victims of such crimes. Throughout the negotiations, governments advocated extending the list of crimes, while the European Parliament sought to limit it as much as possible and to secure solid guarantees for fundamental rights.
VI. IMPLEMENTATION OF THE REGULATION AND SANCTIONS
At the national level, supervisory authorities will be established to oversee the implementation of the Regulation. In the Spanish context, the Statute of the Spanish AI Supervisory Agency (AESIA) has already been approved by Royal Decree 729/2023 of August 22. At the European level, supervision will fall to the European AI Office, a body within the European Commission responsible for coordinating and implementing the rules applicable to general-purpose systems, with binding decision-making powers. In addition, a Board of Experts will be created to advise the European AI Office, issue recommendations and support standardization activities. Finally, a Consultative Forum will bring together representatives of civil society, and a panel of experts will assist the Member States.

As for penalties, they are set as a percentage of the company’s global annual turnover or as a fixed amount, whichever is higher. The most severe penalties, for non-compliance with the provisions on prohibited systems, are set at 35 million euros or 7% of turnover. For non-compliance with other legal requirements, the maximum penalty is 15 million euros or 3% of turnover. As a novelty, for startups and SMEs the penalty will be the lower of the two figures.
VII. OUTLOOK AND IMPLEMENTATION PERIOD
The adoption of the Artificial Intelligence Act by the EU is a landmark legal achievement in regulating the intersection between technology and fundamental rights. Through its normative positioning, the EU could significantly influence the global development of legal frameworks aimed at governing artificial intelligence in an ethical and fair manner. At Carmen Artigas’s press conference, it was announced that, after entry into force, compliance with the established obligations will be phased in gradually: 6 months for prohibited systems, 12 months for general-purpose systems, 24 months for most obligations and 36 months for obligations linked to products subject to harmonized EU legislation. The essential technical details will now be finalized, and each Member State’s representative in COREPER, as well as the Parliament, will have to validate the agreement before its publication in the OJEU (the text is expected to be available by summer 2024). The transitional period will be supported by a Commission-driven AI Pact, encouraging voluntary adherence to the Regulation ahead of its application.
OFFICIAL SOURCES CONSULTED
- EU Council Communication: https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/
- Parliament Communication: https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
- Commission Communication: https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473
- EU Council press conference: https://video.consilium.europa.eu/event/en/27283
- Commission Q&A, updated as of December 12, 2023: https://ec.europa.eu/commission/presscorner/api/files/document/print/en/qanda_21_1683/QANDA_21_1683_EN.pdf
REGULATIONS
- COM (2018) 237 final: https://eur-lex.europa.eu/legal-content/ES/TXT/?uri=COM:2018:237:FIN
- Royal Decree 729/2023, of August 22: https://www.boe.es/legislacion/eli/eli.php?path=es/rd/2023/08/22/729
- General Data Protection Regulation: https://www.boe.es/doue/2016/119/L00001-00088.pdf
LGC