The AI Act – EU’s First Artificial Intelligence Regulation

January 2024 – In a ground-breaking move, the European Parliament and the Council reached a provisional agreement on the AI Act on 8 December 2023. With this agreement, the European Union has agreed on the world's first comprehensive regulation of artificial intelligence, known as the AI Act. This landmark regulation is now in its final stage of implementation, as it still needs to be formally adopted by the Parliament and the Council. The AI Act aims to safeguard the well-being and fundamental rights of EU citizens, setting a precedent for global AI governance.

In April 2021, the European Commission proposed the first European Union ("EU") regulatory framework for artificial intelligence ("AI"). The main objective is to classify different AI systems according to the risk they pose to users. The various risk levels will mean more or less regulation. Once formally adopted, this will be the world's first set of rules for AI. Below, we provide a brief overview of the adoption process, content, and potential issues related to the Artificial Intelligence Act (the "AI Act").

The AI Act and its objectives 

The AI Act is a legal framework regulating the sale and use of artificial intelligence in the EU. Its official purpose is to ensure the proper functioning of the EU's internal market by setting consistent standards for AI systems across EU Member States. In practice, it is the first comprehensive regulation to address the risks of artificial intelligence through a set of obligations and requirements designed to protect the health, safety, and fundamental rights of EU citizens and beyond, and it is expected to have an outsized impact on global AI governance.

The European Parliament's ("Parliament") priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. To prevent harmful outcomes, AI systems should be managed by humans, not by automation. The Parliament also aims to establish a technology-neutral, standard definition of AI that could be applied to future AI systems.

Implementation process 

Originally proposed by the European Commission ("Commission") in April 2021,[1] the general approach was adopted by the European Council ("Council") in December 2022,[2] and on 14 June 2023, Members of the European Parliament ("MEPs") adopted the Parliament's negotiating position on the AI Act.[3] Negotiations between the institutions on the final form of the legislation then followed.

The three institutions negotiated the final details in a "trilogue" process – a three-way negotiation – in order for the proposal to become law. The provisional agreement was officially reached between the Council and the Parliament on 8 December 2023. The only remaining step is for the Parliament's Internal Market and Civil Liberties committees to vote on the agreement at a forthcoming meeting. The legislation is therefore likely to be adopted in early 2024, before the European Parliament elections in June of that year. Its adoption will be followed by a transitional period of at least 18 months before the regulation is fully enforced.

However, a number of technical and detailed provisions, such as definitions and enforcement mechanisms, still need to be finalised in the legal text.

The current state of the AI Act 

The AI Act covers AI systems that are "placed on the market, put into service or used in the EU".[4] This means that it applies not only to developers and users in the EU, but also to global providers who sell or otherwise make their system or its output available to users in the EU.[5]

The provisions of the AI Act are intended to: (i) address the risks posed by AI; (ii) define risk categories; (iii) establish clear requirements and obligations for AI systems and their providers; (iv) provide for the assessment and enforcement of AI systems; and (v) establish a governance structure at the European and national levels.

Therefore, the main objective of the AI Act is to establish obligations for providers and users according to the level of risk posed by artificial intelligence. These levels are divided into four different (risk) categories:

1. Unacceptable risk
- AI systems with an unacceptable level of risk to people's safety

Systems posing this level of risk are:

  • Cognitive behavioural manipulation of people or specific vulnerable groups;
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics;
  • Real-time and remote biometric identification systems, such as facial recognition.

Legal regulation of such systems: Prohibited.[6]

In terms of fines, engaging in prohibited AI practices could result in fines of up to EUR 40 million or up to 7% of the company's global annual turnover, whichever is higher.[7] However, the fines would be "proportionate" and would take into account the market position of small providers, suggesting that there could be more flexible rules for start-ups.

2. High risk
- systems that negatively affect safety or fundamental rights

Systems posing this level of risk are:

  • AI systems that are used in products falling under the EU's product safety legislation. This includes toys, aviation, cars, medical devices and lifts; and
  • AI systems falling into eight specific areas that will have to be registered in an EU database.

Legal regulation of such systems: Permitted with pre- and post-market assessments.[8]

Providers of high-risk AI will have to register their AI in an EU database managed by the Commission before placing it on the market. Non-EU suppliers will need an authorised representative in the EU to demonstrate compliance and post-market monitoring.

3. Limited risk

Systems posing this level of risk: AI systems that generate or manipulate image, audio or video content, and chatbots.

Legal regulation of such systems: Permitted with minimal transparency requirements.

Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed choices. After interacting with such an application, the user can then decide whether to continue using it. Users should also be made aware when they are interacting with AI.[9]

These AI systems would have to comply with transparency requirements such as: (i) disclosing that the content is generated by AI; (ii) designing the model to prevent it from generating illegal content; and (iii) publishing summaries of copyrighted data used for training.

4. Minimal (low) risk

Systems posing this level of risk: Generative AI (e.g. ChatGPT)

Legal regulation of such systems: Permitted with no obligations, but likely voluntary codes of conduct. 

Challenges for legislators 

Dozens of leading European executives have withdrawn their support for the EU's proposed law on artificial intelligence, warning that it could damage the EU's competitiveness and lead to an outflow of investment. They argue that the draft rules go too far, particularly in regulating generative AI models, the technology behind popular platforms such as ChatGPT.

MEP René Repasi, however, has said that there is no need to worry, because the European market, with 450 million consumers, is too attractive for artificial intelligence providers to bypass.

Future of the Act and its enforcement 

MEPs want to strengthen citizens' right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights. MEPs have also reformed the role of the EU's AI Office, which would monitor the implementation of the AI rulebook.

After months of intensive trilogue negotiations, the discussions led to a political agreement creating the historic, first-of-its-kind AI Act. However, a number of important issues remain to be resolved. These include determining the scope of the lists identifying prohibited and high-risk AI systems, and the associated governance standards. In addition, deliberations will focus on the regulation of foundation models and on establishing the enforcement infrastructure required to monitor the implementation of the AI Act. The challenge of developing clear and comprehensive definitions is another aspect that will be addressed.

Please note: The content of this article is intended to provide general information on the subject matter. Specialist advice should be sought about your specific circumstances.

[1] “Regulation of the European Parliament and the Council – Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts”, Brussels, 21.4.2021., 2021/0106 (COD), link:
[2] “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts”, Brussels, 25.11.2022., 14954/22, link:
[3] “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts”, Brussels, 14.6.2023., A9-0188/2023, link:
[4] “Regulation of the European Parliament and the Council – Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts”, Brussels, 21.4.2021., 2021/0106 (COD),
[5] With three exceptions (AI developed for military purposes, scientific research and free and open source AI systems and components, a term not yet clearly defined).
[6] Some exceptions may be allowed: For example, “post” remote biometric identification systems where identification occurs after a significant delay will be allowed for the prosecution of serious crimes but only after court approval.
[7] The figures could be subject to change during the trilogue.
[8] This assessment will include rigorous testing, documentation of data quality, and an accountability framework including human oversight.
[9] For example, when using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.
Dušan Đurić, Junior Associate
+381 69 3282 806