
The EU-AI Regulation - Part 1: Overview and structure


The EU plans to take a leadership position in regulating artificial intelligence and is gradually implementing the goals it set itself in 2018. The AI White Paper of February 2020 has now been followed by the EU Commission's draft AI Regulation, which was published on April 21, 2021.

Brussels is thus hoping for the next big thing after the GDPR, which even prompted the implementation of a data protection law in California and is now considered the gold standard worldwide.

The gist:

The Commission's proposal puts the citizen at the center of the regulation: the protection of general interests, health, safety and fundamental rights is explicitly emphasized. Like the GDPR, the AI Regulation is to follow a risk-based approach. In short: the higher the potential dangers in an area of application, the higher the regulatory requirements for the AI system. The draft regulation essentially distinguishes between three groups: prohibited AI systems, high-risk systems, and other systems, with the latter also subject to special rules if certain characteristics are present.
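
Purely as an illustration - the structure below is ours, not the draft's - this tiered logic can be sketched as an ordered classification in code:

    from enum import IntEnum

    class RiskTier(IntEnum):
        # Risk tiers of the draft AI Regulation, ordered by regulatory burden.
        # Names and ordering are our own illustrative shorthand.
        MINIMAL = 0      # no new obligations beyond existing law
        LOW = 1          # transparency obligations (e.g. chatbots)
        HIGH = 2         # permitted only under strict conditions
        PROHIBITED = 3   # banned outright

    # The higher the potential danger, the stricter the requirements:
    assert RiskTier.PROHIBITED > RiskTier.HIGH > RiskTier.LOW > RiskTier.MINIMAL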

Prohibited AI practices (blacklist)

From the perspective of the legislator, AI systems with certain characteristics are so dangerous that they should be completely prohibited. These include:

  • AI systems that use subliminal techniques, imperceptible to the user, to substantially influence a person's behavior in a way that may cause physical or psychological harm to that person or another person.
  • AI systems that exploit a person's vulnerabilities due to age or disability.
  • AI systems that can be used to rank the trustworthiness of individuals in the context of social affiliation or social behavior (social scoring).
  • Real-time remote biometric identification systems in publicly accessible spaces, unless the strict requirements set out in the regulation are met. Remote identification systems eligible for authorization are used, for example, in the search for victims of crime or missing persons and in the prosecution of serious crimes.

High-risk systems

The main focus of the regulation is on high-risk systems - well over half of the provisions in the draft deal with this category, which the legislator only wants to permit under strict conditions. Annex III of the regulation provides examples of the specific applications affected. The mixture of technology- and sector-oriented considerations is interesting. High-risk systems include:

  • systems for biometric facial recognition
  • systems used in particularly hazardous environments (traffic management, water supply, energy supply, etc.)
  • systems for access control, for example in university examinations
  • systems for the selection of candidates for jobs
  • systems for predictive policing, for the verification of witness statements, and for other police or judicial uses
  • systems that are essential for access to essential public and private services - here, among other things, Schufa (the German credit checking agency) is probably meant

High-risk systems trigger significant obligations for providers and users. These include (see the sketch after this list):

  • Data quality requirements
  • Information requirements vis-à-vis users
  • Maintaining human oversight of the system
  • Conformity assessment and certification
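
A minimal sketch of how a provider might track these duties internally - the checklist structure and field names are hypothetical, not taken from the draft:

    from dataclasses import dataclass

    @dataclass
    class HighRiskChecklist:
        # Hypothetical provider checklist mirroring the obligations above.
        data_quality_ensured: bool        # data quality requirements
        users_informed: bool              # information duties vis-a-vis users
        human_oversight_in_place: bool    # a human can monitor and intervene
        conformity_assessed: bool         # conformity assessment and certification

        def ready_for_eu_market(self) -> bool:
            # Under the draft, all obligations must be met before the
            # system may be placed on the EU market.
            return all(vars(self).values())

    print(HighRiskChecklist(True, True, True, False).ready_for_eu_market())  # False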

Low risk

Low-risk AI systems are systems designed to interact with humans, such as chatbots - including systems capable of recognizing emotions. For these, a transparency obligation applies: users must be informed that they are interacting with an AI system.
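
For illustration, a minimal sketch of such a disclosure at the start of a chat session - the wording and function name are hypothetical, not prescribed by the draft:

    def start_chat_session() -> None:
        # Transparency obligation: disclose the use of AI before the
        # conversation begins, so the user knows they are not talking
        # to a human.
        print("Note: you are interacting with an AI-based chat assistant.")

    start_chat_session()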

Minimal risk

All other AI systems that cannot be assigned to one of the other groups - and thus the vast majority of systems in use today - pose only minimal risk. These can be developed and used in compliance with generally applicable rules, such as those of data protection law, without any new additional legal obligations.

Scope of Application

In terms of scope of application, the draft regulation also follows the GDPR's model. Accordingly, the regulation targets almost anyone who uses AI in relation to people in the EU. It is intended to apply to both public and private actors inside and outside the EU, provided the AI system is placed on the market in the EU or people in the EU are affected by its use. The affected parties are both the developers (providers) and the users (companies, public authorities) that purchase such software and use it for their own purposes. Purely private use, on the other hand, is not regulated.

Sanctions

The draft provides for decentralized enforcement of the regulation by the member states. Each member state is to designate at least one national authority to oversee the application and implementation of the rules and to carry out market surveillance. Violators face fines of up to EUR 30 million or 6% of total worldwide annual turnover, whichever is higher.
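
To make the fine cap concrete, here is a minimal sketch of the "whichever is higher" rule; the turnover figure in the example is hypothetical:

    def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
        # Upper limit of the fine under the draft: the greater of
        # EUR 30 million and 6% of total worldwide annual turnover.
        return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

    # Hypothetical company with EUR 2 billion annual turnover:
    print(max_fine_eur(2_000_000_000))  # 120000000.0, i.e. a cap of EUR 120 million

For smaller companies, the EUR 30 million floor dominates; for large groups, the 6% turnover limb determines the cap.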

This article is the first part of a series on the European Union's proposed AI Regulation. The second part will deal with the term "AI system".
Authors: Stephan Zimprich, Partner, Fieldfisher; Hagen Küchler, Legal Trainee