The European Parliament (the Parliament) and the Council of the European Union (the Council) reached a compromise on the Artificial Intelligence Act (the AI Act) on 8 December 2023. Although the final text is not yet available and is only expected towards the end of January or the beginning of February 2024, press releases from the Council, the Parliament and the European Commission provide valuable insights into what the final AI Act will contain.
The ‘risk-based’ approach (categorising AI systems as prohibited, high-risk, limited-risk or minimal-risk) is maintained, with the addition of specific rules on general-purpose AI models and increased regulation of those general-purpose AI models that pose systemic risks. Uses falling outside the scope of the AI Act include AI systems developed solely for military or defence purposes, AI systems used solely for research and innovation, and non-professional use.
Below we set out further details, extracted from each institution’s press release, on prohibited, high-risk and general-purpose AI systems, as well as on the governance model, the fining regime and the timing of entry into force of the AI Act.
1. Prohibited AI systems
Due to the unacceptable risk to fundamental rights, the following AI applications are prohibited:
- biometric categorisation systems using sensitive characteristics (such as political, religious, philosophical beliefs, sexual orientation, race);
- untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
- emotion recognition in the workplace and educational institutions;
- social scoring based on social behaviour or personal characteristics;
- systems that manipulate human behaviour to circumvent users’ free will; and
- systems used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
Narrow exceptions, subject to safeguards, will allow law enforcement authorities to use biometric identification systems, with additional safeguards for their use in publicly accessible spaces.
2. High-risk AI systems
AI systems will be classified as high-risk due to the significant potential harm that they may pose to health, safety, fundamental rights, the environment, democracy and the rule of law.
The Parliament expressly identifies AI systems influencing the outcome of elections or voters’ behaviour as high-risk AI systems.
The European Commission provides the following as examples of high-risk AI systems (which are to be set out in Annexes II and III of the AI Act):
- biometric identification, categorisation and emotion-recognition systems provided they do not fall within the banned applications;
- systems determining access to educational institutions or used for the recruitment of people;
- certain critical infrastructures;
- medical devices; and
- certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes.
A mandatory fundamental rights impact assessment will be required for high-risk AI systems.
Finally, citizens will be granted a right to file a complaint in relation to high-risk AI systems (it is not yet clear to whom) and to request and receive explanations of decisions that are based on high-risk AI systems and affect their rights.
3. General-purpose AI
The compromise text now includes rules on general-purpose AI (GPAI) and high-impact GPAI with systemic risks. All GPAI are subject to specific transparency obligations, including:
- the provision of documentation;
- compliance with EU copyright legislation; and
- the provision of a detailed summary of the content used to train the system.
Criteria are introduced to determine whether a GPAI model is a high-impact GPAI with systemic risks. If so, the GPAI will be subject to additional requirements relating to:
- model evaluation;
- systemic risk assessment and mitigation;
- adversarial testing;
- reporting of serious incidents to the European Commission;
- cybersecurity; and
- reporting on energy efficiency.
4. Governance
The AI Office, to be set up within the European Commission and advised by a panel of independent experts, will supervise the GPAI models. Alongside it, the AI Board, comprising representatives of the Member States, will function as an advisory body to the European Commission. An advisory forum of stakeholders will provide technical expertise to the AI Board.
5. Fines
Depending on the breach, fines are set at a fixed amount or a percentage of turnover, whichever is higher:
- EUR 35 million or 7% of an entity’s annual worldwide turnover in the preceding year for breaching prohibited AI practices;
- EUR 7.5 million or 1.5% of an entity’s annual worldwide turnover in the preceding year for supplying incorrect information; and
- EUR 15 million or 3% of an entity’s annual worldwide turnover in the preceding year for any other breach of the AI Act.
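The press releases frame these caps as the higher of the fixed amount and the turnover percentage. As a rough illustration of how the cap works (the function name and structure are our own, and the final text may adjust these figures, for instance for SMEs and start-ups):

```python
def max_fine_cap(turnover_eur: float, breach: str) -> float:
    """Return the maximum fine cap in EUR, assuming the higher of the
    fixed amount and the turnover percentage applies (GDPR-style)."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),
        "incorrect_information": (7_500_000, 0.015),
        "other_breach": (15_000_000, 0.03),
    }
    fixed_amount, pct = caps[breach]
    return max(fixed_amount, pct * turnover_eur)

# For an entity with EUR 1 billion annual worldwide turnover,
# the 7% cap (EUR 70 million) exceeds the EUR 35 million floor:
max_fine_cap(1_000_000_000, "prohibited_practice")  # 70_000_000.0
```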
6. Entry into force
The AI Act is expected to enter into force on the 20th day following its publication in the Official Journal of the European Union. Most of its requirements will become applicable two years after its entry into force, apart from the provisions on prohibited practices and the GPAI rules, which will apply as follows:
- Prohibited AI systems: 6 months after the AI Act’s entry into force.
- GPAI rules: 12 months after the AI Act’s entry into force.
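The phased timeline above can be sketched as a simple date calculation. This is illustrative only: the actual dates depend on the publication date, which is not yet known, and the month-addition helper below is a naive approximation rather than a statement of how the Act computes deadlines.

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    # Naive month addition (does not clamp day-of-month for short months).
    years_delta, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years_delta, month=month_index + 1)

def timeline(publication: date) -> dict[str, date]:
    """Phased application dates, assuming entry into force 20 days
    after publication in the Official Journal."""
    entry = publication + timedelta(days=20)
    return {
        "entry_into_force": entry,
        "prohibitions_apply": add_months(entry, 6),
        "gpai_rules_apply": add_months(entry, 12),
        "most_provisions_apply": add_months(entry, 24),
    }

# Hypothetical publication date of 31 January 2024:
timeline(date(2024, 1, 31))
```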
7. What’s next?
The text of the AI Act must be finalised and reviewed by each institution before formal approval by the European Parliament and the Council; publication in the Official Journal of the European Union will then follow.
We expect the AI Act to be published by Q1 2024 and to fully apply as of Q1 2026.