The EU AI Act is near!

20 February 2024

On 13 February 2024, the Committee on the Internal Market and Consumer Protection of the European Parliament adopted the AI Act. The text will now be submitted to a plenary vote, currently scheduled for early April 2024.

Image generated with Midjourney

The initial draft was proposed back in 2021 by the European Commission. However, the advent of LLMs, most famously ChatGPT, significantly delayed and reshaped the initial project. Nonetheless, the AI Act is the first regulation aimed at governing AI as a distinct topic. It will undoubtedly shape the future development of AI systems as well as their professional uses.

The goal of this post is to give readers a brief, big-picture view of this new regulation. Contrary to what one may think, the upcoming AI Act is not only impactful for AI developers. Like the GDPR before it, it is also relevant for professionals outside the EU.

What is an AI, according to the AI Act?

According to the AI Act: “An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The first striking feature of this definition is its very broad wording. This comes as no surprise, as the definition is meant to be “future-proof”. It is also technologically neutral and focuses on the system’s autonomy and its ability to infer outputs from the inputs it receives.

Who should pay attention to AI Act compliance?

The AI Act creates obligations for providers (i.e. developers), importers, distributors and deployers. This scope is quite extensive, as it includes AI end users as deployers, with an exception for non-professional personal use.

Our Swiss readers should also pay attention, as their activities will fall within the scope of the AI Act whenever the output produced by the AI system is used within the EU (AI Act, article 2, § 1, letter c).

Finally, anyone using a general-purpose AI (“GPAI”, e.g. ChatGPT or Google Gemini) through an API in their own solution could be at least practically impacted: should that GPAI face a prohibition or restriction, their solution could suddenly become unusable.

What are the types of AI distinguished by the AI Act?

The AI Act classifies AI systems as follows:

  1. prohibited AI systems,
  2. high-risk AI systems,
  3. limited-risk AI systems,
  4. minimal-risk AI systems (defined only by opposition to the above-mentioned AI systems and GPAIs), and
  5. GPAI models (with or without systemic risk).

What are the types of prohibited AI systems?

Prohibited artificial intelligence practices are listed in article 5 of the AI Act. They can be roughly summarized as follows:

  1. Use of subliminal or purposefully manipulative techniques;
  2. Exploitation of the vulnerabilities of a person, causing significant harm;
  3. Biometric categorization of individuals based on sensitive information;
  4. Use of data to provide social scoring;
  5. Use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement;
  6. Predictive policing based purely on profiling;
  7. Creation of facial recognition databases through untargeted scraping;
  8. Inference of emotions in a work or educational environment.

Such AI systems are prohibited, but please note that the above prohibitions are heavily summarized and subject to a series of exceptions. Therefore, before concluding that a projected use is prohibited, you should read the text of article 5 in detail.

What are high-risk AI systems?

AI systems that are used as products, or that serve as safety components of products, covered by the EU harmonisation legislation listed in Annex II of the AI Act are deemed high-risk AI systems.

Additionally, AI systems used in the fields listed in Annex III of the AI Act are also deemed high-risk. These include biometric identification systems and AI systems acting as safety components for critical infrastructure. Certain types of AI are also deemed high-risk when used in the context of i) work and educational environments, ii) creditworthiness assessment, iii) eligibility for public assistance benefits and services, iv) law enforcement, v) migration and asylum, and vi) the administration of justice and democratic processes.

However, AI systems shall not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.

What are the consequences of being considered a high-risk AI system?

First of all, providers of high-risk AI systems must comply with a series of obligations. Among other things, they must ensure that their products comply with principles of trustworthiness, transparency and accountability. Specific obligations include risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity requirements.

AI system providers must also document and maintain a quality management system, undergo conformity assessments, and, if necessary, take corrective actions. Additionally, they must keep logs, provide comprehensive instructions for use, ensure systems are designed for effective human oversight, and meet specified accuracy, robustness, and cybersecurity standards throughout the system’s lifecycle.

Please note that the AI Act also includes obligations for importers, distributors and deployers (i.e. users) of high-risk AI systems. For the sake of brevity, we won’t go into detail here. However, one key takeaway is that deployers should follow the provider’s instructions in order to avoid liability.

What are GPAIs?

A GPAI can be defined as an AI model capable of competently performing a wide array of tasks and of being integrated into a variety of downstream systems or applications, for instance through an API.

GPAIs can also be deemed GPAIs “with systemic risk” (cf. article 52a). Systemic risk at Union level describes a significant threat posed by general-purpose AI models: one that affects the internal market broadly and can negatively impact public health, safety, security, fundamental rights, or society as a whole, potentially propagating quickly through the value chain.

GPAIs whose cumulative amount of compute used for training, measured in floating-point operations (FLOPs), exceeds 10^25 are presumed to pose systemic risk. This 10^25 FLOPs training threshold was specifically chosen with GPT-4 and Google Gemini in mind, both of which exceed it.
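To put that threshold in perspective, here is a minimal back-of-the-envelope sketch in Python. It relies on the common ~6 × parameters × training-tokens approximation of dense transformer training compute borrowed from the scaling-law literature, which is an assumption on our part, not a calculation method prescribed by the AI Act; the model figures used are purely illustrative placeholders, not any real model’s numbers.

```python
# Rough back-of-the-envelope check against the AI Act's 10^25 FLOPs
# presumption. The ~6 * parameters * training_tokens rule of thumb is
# an assumption from the scaling-law literature, NOT a method defined
# by the AI Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # the Act's presumption threshold

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_parameters * n_training_tokens

# Illustrative placeholder figures (hypothetical model, not a real one):
flops = estimate_training_flops(n_parameters=1e12, n_training_tokens=10e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
      else "Below the 1e25 FLOPs presumption")
```

With these placeholder figures the estimate lands at 6 × 10^25 FLOPs, comfortably above the presumption, which illustrates why the largest frontier models are the ones caught by this rule.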

What are the obligations for GPAIs and GPAIs with systemic risk?

GPAI providers have a transparency obligation. They must ensure that outputs are marked as AI-generated, especially when they could lead to deep fakes, so that users are clearly informed. For AI-generated texts on matters of public interest, disclosure is exempted where there is human review and editorial responsibility. Providers must also maintain detailed technical documentation of the model, including its training and testing processes.
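As a rough illustration of how that disclosure logic fits together, here is a toy Python sketch. The function name, the boolean flags and the simplified rule are our own illustration of the gist described above, not a faithful encoding of the legal text.

```python
# Toy encoding of the transparency rule sketched above. The flags and
# this simplified logic are illustrative assumptions, not the Act's text.

def disclosure_required(ai_generated: bool,
                        public_interest_text: bool,
                        human_review: bool,
                        editorial_responsibility: bool) -> bool:
    """Return True if the output should carry an 'AI-generated' label."""
    if not ai_generated:
        return False
    # Exemption: public-interest texts that undergo human review under
    # clear editorial responsibility need not be labelled.
    if public_interest_text and human_review and editorial_responsibility:
        return False
    return True

# A machine-drafted news article reviewed by a human editor:
print(disclosure_required(True, True, True, True))     # False: exempt
# Unreviewed AI-generated content:
print(disclosure_required(True, False, False, False))  # True: label it
```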

Additionally, providers of GPAI models with systemic risk must also i) perform model evaluations, ii) assess and mitigate systemic risks at Union level, iii) track, document and report serious incidents to the AI Office, and iv) ensure an adequate level of cybersecurity protection.

Key takeaways:

The AI Act is a long and complex regulation. After its publication in the EU Official Journal, it will enter into application progressively (cf. article 85). Some of its provisions, notably those pertaining to prohibited AI systems, could apply as early as late 2024 or early 2025. Most of its obligations will apply 24 months after the Regulation enters into force.

If you are an AI provider, it is of crucial importance that all your future developments are made with the AI Act in mind. If you incorporate an AI into your product, it is also paramount that you select it carefully, in light of the upcoming regulation. Finally, if you use generative AIs, especially in a professional context, you should keep in mind that best practices are already emerging, especially regarding transparency about your use.

Should you be in need of guidance on this topic, our team of specialists in technology law is here to help you.



Alexandre OSTI

Avocat, associé | Attorney, partner

a.osti@voxlegal.ch

+41 21 637 60 30