
The EU AI Regulation - Part 2: The AI System



The EU plans to take a leading position in regulating artificial intelligence and is gradually implementing the goals it set itself in 2018. The AI White Paper of February 2020 was therefore followed by the Commission's draft AI Regulation, published on 21 April 2021. Brussels is hoping for the next big thing after the GDPR, which even prompted California to adopt a data protection law and is regarded as the global standard.

The "AI system" in the draft regulation

The central concept of the draft regulation is the "AI system". Under the draft, an AI system is software that has been developed using one or more of the techniques and concepts listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with.

However, the techniques listed in Annex I include not only machine learning but also traditional, fully human-controlled programming techniques, such as statistical approaches, search and optimisation methods, and logic-based systems. Moreover, the list in Annex I is not exhaustive: in Art. 4 of the draft, the legislator reserves the right to supplement the catalogue in order to respond to new technical developments.

If a piece of software is an "AI system" within the meaning of this definition, it falls within the scope of the planned AI Regulation - and, depending on its risk level, triggers more or less extensive compliance and documentation obligations.


Since almost all software today makes at least partial use of one of the techniques listed in Annex I, critics frequently argue that the threshold for regulatory intervention is far too low - even if few or no regulatory requirements are imposed on AI systems that pose only low risks. Given the rather vague definitions, however, the questions of whether an AI system exists and which risk category it falls into can often be clarified only with considerable effort. Such threshold analyses, already familiar from data protection law, entail additional compliance costs for users and providers. A discussion about concretising the definitions is therefore likely to be unavoidable in the course of the legislative process.

Such a discussion is also necessary with regard to the protective purpose of the regulation. The particular dangers of "artificial intelligence" lie primarily in the fact that the path to a result is not devised by a human being; instead, the software itself generates and applies logics on the basis of certain parameters. These logics are in part not comprehensible to human observers, and they can be influenced by an (unrecognised) bias in the training data in such a way that the results reproduce or even reinforce this bias. These dangers do not exist to the same extent where the logic is fully specified by a human programmer. Against this background, the extension of the term "AI system" to more traditional programming techniques is not compelling.

This article is the second part of a series on the planned AI Regulation of the European Union. The first part is available here.
Authors: Stephan Zimprich, Partner, Fieldfisher; Hagen Küchler, Legal Trainee
