The EU's Artificial Intelligence Act: setting a global standard? | Fieldfisher

The European Commission recently published its proposal for a Regulation laying down harmonised rules on Artificial Intelligence (AI).

The proposal has been publicised as the first-ever regulatory framework on AI, intended to address the risks of AI in Europe. For almost every business there will be opportunities to be explored by making use of nascent AI technologies. It will be crucial for businesses that currently use AI, or plan to do so in the future, to ensure that they comply with the obligations introduced under the Regulation.

What are the main changes under the proposed Regulation?

The proposed Regulation would adopt a "risk-based" approach, dividing different uses of AI into four categories: Unacceptable, High Risk, Limited Risk, and Minimal Risk.

Unacceptable Risk: This category includes those AI systems which the European Commission considers to be the most objectionable, and which it proposes to ban outright. In particular, the draft Regulation prohibits the use of: "subliminal techniques" that seek to distort a person's behaviour in a way that might cause harm; systems that exploit a person's vulnerabilities due to age or physical or mental disability; "real-time" remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to narrowly defined exceptions); and systems that allow "social scoring" by governments. The Commission gives the example of a toy using voice assistance that would encourage dangerous behaviour among minors.

High Risk: The draft Regulation highlights those AI technologies that are used as safety components of products, as well as AI used in, for example, critical infrastructure, law enforcement and employment contexts (e.g. AI used to sort CVs for recruitment procedures).

Providers, importers, distributors and professional users of high-risk AI systems will be subject to various obligations before they can place these sorts of systems on the market, including the need to ensure:
  • Adequate risk assessments.
  • High quality datasets to e.g. minimise discriminatory outcomes.
  • Appropriate human oversight measures to minimise risk.
  • High levels of robustness, security and accuracy.
  • Registration of the AI on a new EU-wide database.
  • Reporting to national competent authorities of serious incidents or malfunctions.

Limited Risk: This covers "AI systems intended to interact with natural persons" (such as chatbots), as well as AI-generated content that "appreciably resembles existing persons, objects, places" (i.e. "deep fakes"). Specific transparency obligations will be imposed so that users are made aware that they are interacting with a machine, or that the content has been artificially generated or manipulated.

Minimal Risk: The proposed Regulation sets out measures in support of innovation, which attempt to promote the free use of AI systems representing "minimal or no risk" to citizens' rights or safety. The sorts of systems the Commission seems to have in mind are AI-enabled video games or spam filters. The proposed measures include controlled environments (known as "sandboxes") in which AI systems can be developed, tested and validated before being placed on the market or put into service. Within these sandboxes, the processing of personal information will be permitted for developing AI systems in the public interest. A specific obligation is placed on EU Member States to ensure that small-scale providers and start-ups have access to these sandboxes, are supported with appropriate guidance, and that fees are proportionate.

How could the Regulation affect businesses?

The costs of breaching the Regulation are likely to be significant. The Commission has proposed fines of up to (whichever is higher):
  • €30,000,000 or up to 6% of total worldwide annual turnover for committing serious breaches of the Regulation (including operating the banned practices).
  • €20,000,000 or up to 4% of total worldwide annual turnover for breaching the obligations applying to high-risk AI.
  • €10,000,000 or up to 2% of total worldwide annual turnover for "the supply of incorrect, incomplete or misleading information" to notified bodies or national authorities in response to a request. 
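These caps follow the GDPR model: the applicable maximum is the higher of the fixed amount or the percentage of worldwide annual turnover. As a rough illustration of how the cap works in practice (the function name and the turnover figures below are hypothetical, not taken from the proposal):

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap
    or the given percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with €2bn worldwide annual turnover committing a serious
# breach (€30m cap or 6% of turnover): 6% of €2bn = €120m > €30m.
print(f"€{max_fine(30_000_000, 0.06, 2_000_000_000):,.0f}")  # €120,000,000

# A smaller company with €100m turnover: 6% = €6m, so the €30m cap applies.
print(f"€{max_fine(30_000_000, 0.06, 100_000_000):,.0f}")  # €30,000,000
```

As the second example shows, the fixed cap is what bites for smaller companies, which is one reason the burden of the regime may fall disproportionately on SMEs.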

Businesses will want to consider whether their planned use of AI technologies is likely to fall within one of the higher-risk categories, and review whether their processes are likely to comply with the proposed obligations. Even for those using AI systems in the lower-risk categories, the proposed Regulation sets out a framework for the creation of voluntary codes of conduct. Businesses in all sectors will want to ensure that the eventual codes of conduct are tailored appropriately to their industry.

When will the provisions of the Regulation apply?   

The European Parliament and the EU Member States will need to adopt the Commission's proposals as part of the ordinary legislative procedure. The contents of the Regulation are subject to change during this process, which could take another couple of years. The Commission currently estimates that the Regulation could enter into force in the second half of 2022, and become applicable to AI operators in 2024. 

Risks and opportunities under the Regulation

The Commission's proposals reflect its aim to respond to the speed of technological change, and to address concerns – including possible ethical issues – around the use of AI while such technologies remain relatively nascent. The EU has referred to a sense of urgency to address emerging trends and ensure a high level of protection for European citizens, particularly in the face of technology that can be unpredictable, complex and partially autonomous.

The risk of the Commission's approach is that start-ups and SMEs using AI will bear a disproportionate burden from increased regulation. Where the largest tech firms may be able to shoulder the cost of compliance with new rules, smaller businesses whose products are caught as High Risk might simply be deterred from innovating, in the face of new rules with broad application, and large fines for non-compliance.

Global impact on businesses 

Article 2 of the proposed Regulation suggests that it will apply not only to those providing AI systems in the EU, but also providers in a third country where the users or output of the system are located within the EU. Therefore, the reach of the Regulation will likely extend to all relevant businesses with any kind of EU operation. International businesses will need to consider how the proposals affect their strategy towards AI, if they want to access the EU market.
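The territorial test in Article 2 can be sketched as a simple decision rule. This is a deliberate simplification (the function and parameter names are our own, and the Article itself contains further detail and carve-outs), but it captures why a non-EU provider can still be caught:

```python
def in_scope(provider_in_eu_market: bool, user_in_eu: bool, output_used_in_eu: bool) -> bool:
    """Simplified sketch of the Article 2 territorial test: the proposed
    Regulation reaches providers placing AI systems on the EU market,
    users of AI systems located in the EU, and third-country providers
    or users where the system's output is used in the EU."""
    return provider_in_eu_market or user_in_eu or output_used_in_eu

# A US provider with no EU establishment, whose system's output is
# nonetheless used by an EU customer, would still be in scope.
print(in_scope(provider_in_eu_market=False, user_in_eu=False, output_used_in_eu=True))  # True
```

The third limb is the one with extraterritorial bite: having no EU presence does not, on its own, take a business outside the Regulation.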

There also appears to be a desire at Commission level for the principles set out in these new proposals to be adopted and reproduced elsewhere. Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, has stated: 'the EU is spearheading the development of new global norms to make sure AI can be trusted.'

There have been suggestions that Canada and Japan, for instance, might follow suit as a result, although the draft rules for the regulation of AI published in the US last year might suggest that the US may still take the lead here. As for the UK, the Committee on Standards in Public Life published a report on Artificial Intelligence and Public Standards last year which recommended that 'though no new AI regulator is needed', the UK Government should 'establish consistent and authoritative ethical principles' for the regulation of AI. It is hard to imagine that an approach similar to the EU's would not at least be considered, even post-Brexit.

Co-authored by James Russell (trainee solicitor)