The EU unveils its AI Regulation proposal: Key takeaways of this human-centric draft legislation

Artificial Intelligence ("AI") can be used for better or for worse (e.g. understanding cancers to reduce mortality rates, or creating deepfake pornographic videos to harass someone).

The European Commission has now come forward with an ambitious proposal for a comprehensive legal framework governing the use of AI. The proposal takes a human-centric approach and focuses on the uses made of the technology (and the risks potentially deriving from them). It aims to position the EU as the leader in trustworthy AI and forms part of a series of legislative initiatives intended to make the EU fit for the Digital Age (see also our blog posts and webinars on the Digital Services Act proposal).
 
  • Background context

The White Paper on Artificial Intelligence released in February 2020 was the first in-depth analysis of the policy and regulatory options for regulating AI at EU level. From February to June 2020, over 1,250 stakeholders provided their views on these options. The requirements applicable to high-risk AI applications were an important focus of the White Paper, and the Commission proposed key features to consider in this respect, such as transparency requirements and human oversight. These key features remain in the new horizontal regulatory proposal.

In addition to the White Paper, the European Parliament took various topic-specific initiatives of its own. In October 2020, it adopted a number of resolutions related to AI, including on ethics and the liability regime. In 2021, further sector-specific resolutions on AI in criminal matters and in education, culture and the audio-visual sector followed. These resolutions are not binding, but they aim to draw the Commission's attention to sensitive topics.

  • Objectives pursued

The overarching objective of this proposal is to create the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of AI in the EU.

Although regulating AI is quite a challenging task, regulation is also essential to boost and promote the technology; Elon Musk's tweets calling for the regulation of advanced AI are a good illustration. By offering more legal certainty in this area, the proposal lets AI providers know from the start what they can or cannot create and how to manage the risks associated with their technologies.

By putting forward a regulation rather than a directive, the European Commission also wants to (i) avoid fragmentation of the internal market on essential elements of the AI framework, such as its supervision by public authorities, and (ii) enhance legal certainty for providers and users of AI systems covered by the proposal.

  • Material and territorial scope

In terms of material scope, the proposed AI Regulation would apply to the placing on the market, putting into service and use of "AI systems". The good news is that the proposal does contain a definition of what "AI systems" are. The bad news is that it is not a straightforward one.

AI systems covered by the AI Regulation are software that (i) is developed with one or more of the techniques listed in the first Annex to the proposal – notably machine-learning approaches, logic- and knowledge-based approaches and statistical approaches – and (ii) can, for a given set of human-defined objectives, generate outputs influencing the environments it interacts with.

The material scope of the AI Regulation is intended to be as extensive as possible. The same goes for the territorial scope. The proposal covers (i) providers (whether established in or outside the EU) who place AI systems on the EU market or put them into service in the EU, (ii) users of AI systems established within the EU and (iii) providers and users of AI systems established outside the EU, where the AI system's output is used in the EU.

  • Measures proposed

Introduction of prohibited AI practices

The draft regulation introduces a series of banned AI practices, which create "unacceptable risks". This "blacklist" includes AI practices that deploy techniques beyond a person's consciousness to materially distort that person's behaviour in a manner that can cause physical or psychological harm.

Other examples are AI practices that exploit the vulnerabilities of specific groups of persons due to their age or physical or mental disability, or that enable social scoring by public authorities. Finally, the use of remote biometric identification systems in publicly accessible spaces for law enforcement purposes is also blacklisted, unless it falls within one of the security-related exceptions provided in the proposal.

Strict regime for high-risk AI systems

The proposal introduces a new oversight regime for "high-risk" AI systems. Whether an AI system is high-risk will require a case-by-case assessment by its provider, based on the other Annexes of the proposal and a series of criteria. Among the high risks the EU wants to prevent are the risk of harm to health and safety (e.g. self-driving cars) and the risk of adverse impact on fundamental rights (e.g. some AI-based recruitment software, which could lead to discrimination if used in certain ways).

The regime associated with high-risk AI systems will be quite burdensome for their providers. It includes a certification system, which in practice would lead to affixing the CE marking to certify compliance with the future Regulation. Providers will also have to meet certain transparency requirements towards users, notably informing them of the characteristics, capabilities and limitations of performance of the high-risk AI system. The technologies will have to be registered in an EU database, to be established and managed by the Commission. Finally, reporting obligations apply: incidents caused by the failure of a high-risk AI system which result in, or could potentially result in, serious injury or damage to property must be reported within a 15-day deadline.

Light-touch obligations for low-risk AI systems

Certain AI systems that pose only limited risks of manipulation (e.g. systems creating lawful deepfakes) are also in scope. Their providers will need to comply with transparency requirements. Notably, they will have to inform users that they are interacting with an AI system, unless this is "obvious from the circumstances and the context of use".

On top of this new hierarchy of obligations, which increase with the level of risk, providers will have to cope with additional legal rules in order to design and further develop their technologies lawfully. Note that some initiatives currently under revision that address liability issues related to new technologies, such as the General Product Safety Directive, will have to build on and complement this proposal.

Supervisory authorities & enforcement

Interestingly, the Commission has opted for an approach to enforcement that differs from those adopted in the GDPR and the more recent DSA proposal. Each Member State will have to appoint a national authority responsible for supervising AI. However, for cross-border enforcement, the proposed AI Regulation does not contain a one-stop-shop mechanism.

This could potentially lead to inconsistencies in the application of the AI Regulation across Member States. In an attempt to tackle this, the proposal creates a "European Artificial Intelligence Board" (EAIB), consisting of representatives from every Member State, tasked with assisting the national supervisory authorities and the Commission with a view to ensuring a consistent application of the AI Regulation.

In terms of sanctions, certain breaches of the proposal may lead to an administrative fine of up to €30 million or 6% of total worldwide annual turnover, whichever is higher. Member States will lay down their own rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of the future regulation. The European Data Protection Supervisor ("EDPS") will see its powers expanded by acting as an enforcement authority, but its role will be limited to the supervision of EU institutions, agencies and bodies.

Next steps

With the proposal for an AI Regulation, the Commission has put an ambitious piece of legislation on the table. The proposal will now follow the usual EU legislative process, which means that the European Parliament and Council will need to discuss amendments and agree on a final version.

It is unclear what the timeline will be, but given this unprecedented attempt at regulating AI, it is clear that the process will take quite a lot of time. We also expect to see a lot of lobbying by AI providers to soften some of the requirements and obligations.

In any event, AI providers are advised to follow the legislative process closely. Pending the adoption of the final version of the AI Regulation, it can also be useful to start taking some of its fundamental principles into account when designing new AI systems.

If you are interested in learning more about AI, we invite you to register for our colleagues' ongoing webinar series on GDPR/AI. To see the webinars already posted online, click here.
