Late on Friday (December 8th), the European Commission, Parliament and Council concluded their “trilogue” negotiations for the EU Artificial Intelligence Act. The summary below is based on the information available to date. It will be some time before the definitive text is finalized and released, since it will have to go through various committee stages and its legal language will need to be finalized in multiple languages.
Prohibited AI Applications
The following applications of AI will be prohibited:
- biometric categorisation systems that use sensitive characteristics of individuals (e.g. political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- AI systems that manipulate human behaviour to circumvent people’s free will; and
- AI use to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
Contentious Uses of AI for Law Enforcement
There were significant differences of opinion between the European Council and Parliament (and within political groups in Parliament) on certain uses of AI for law enforcement purposes, an area that raises sensitive political and social issues. The European Parliament pushed for a complete ban on remote biometric identification systems that use footage gathered in publicly accessible spaces. The final agreement reflects a compromise, where such use will be permitted, but limited to strictly defined lists of crimes and will require prior judicial authorization.
The use of biometric information will need to comply with strict conditions and its use will be limited in time and location, for the purposes of:
- targeted searches for victims of serious crimes such as abduction, trafficking, sexual exploitation,
- preventing specific and present terrorist threats, or
- locating or identifying persons suspected of committing one of a defined list of serious crimes including terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, or environmental crime.
High-Risk AI Systems
Previous versions of the draft AI Act designated certain uses of AI as “high-risk” systems. Such systems can only be placed on the EU market or put into service if they comply with certain requirements, including implementing a risk management system, appropriate data governance, transparency obligations and human oversight. The latest changes introduced by the EU Parliament, included in the final political agreement, require mandatory fundamental rights impact assessments to be undertaken by public sector bodies and some financial services companies. It appears that the categories of “high-risk” AI systems have been expanded from previous iterations to include AI systems used to influence the outcomes of elections and voter behaviour.
General-Purpose AI / Foundation Models
One of the most difficult negotiation issues has been the treatment of general-purpose AI systems, or foundation models, which are capable of a wide range of general tasks and can be the base on which other applications are built – therefore posing the risk that downstream applications share the same problems or issues. During the negotiation process, France, Germany and Italy had objected to restrictions on foundation models, partly driven by a desire to protect their nascent AI industries.
The final position includes transparency requirements on general-purpose AI systems, as proposed by the EU Parliament. Developers of general-purpose AI systems will be required to draw up technical documentation, and provide information about the content used for training the models.
There are additional obligations on high-impact general-purpose AI models (based on their computing power), which pose a systemic risk. Developers of such models must evaluate them to assess and mitigate the systemic risks, carry out testing and submit reports to the EU Commission on serious incidents, ensure their cybersecurity, and report on their energy efficiency.
It remains to be seen whether the “Brussels Effect” (where the EU effectively sets de facto global standards) will have the same force as was the case with previous EU legislation, including the GDPR. The EU has, to some degree, retained its first mover advantage in that the AI Act remains the world’s first comprehensive legislation regulating AI. Nevertheless, it has inevitably been overtaken by the pace of technology and its commercialization, and also by lawmaking in other jurisdictions, such as President Biden’s Executive Order on artificial intelligence (see our briefing, “Sweeping EO ‘Establishes New Standards for AI Safety and Security’”). There will be a delay before the legislation takes full effect; however, businesses making use of AI systems that may be impacted by the AI Act should note that the original implementation period of two years appears to have been shortened to one year for some high-risk systems. We will continue to provide updates when the final text emerges.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.