
Looking Ahead At AI Regulation In The EU And UK



First published on Law360, 4 January 2023.

At the start of December, co-legislators in the European Union agreed on the text of the new EU Artificial Intelligence Act, which has been the subject of much debate over the last few months.

In parallel, and despite consistent signals from the UK Government that there will be no new UK AI regulation or regulator in the near term, a private member's AI bill was introduced in the House of Lords on Nov. 22, making provision for, among other things, the creation of a so-called AI authority in the UK.

In the EU, the regulatory approach is complex and multifaceted. A lot of material has been published about the EU AI Act and so we shall not dwell on it here, but in essence it focuses on:

  • Establishing a central AI office, and an AI board to coordinate with member states;
  • Prohibiting certain so-called unacceptable use cases, and requiring impact assessments for high-risk use cases, as well as quality and risk management programmes;
  • Transparency requirements for foundation models, such as generative AI, and guardrails for general purpose AI systems;
  • Measures to support innovation, particularly for small- and medium-sized enterprises; and
  • Very high penalties for noncompliance, up to 7% of annual turnover for the worst offenders, with a tiered system for lesser offences.

The draft AI bill, on the other hand, is short, at only about five pages of content, but its implications could be far-reaching. Its central tenet is to establish an AI authority whose function will be to seek alignment between the various regulators on their approach to regulating AI in the UK, and to establish the principles on which that approach must be based.

Greater Flexibility

The EU AI Act will, as a regulation, be directly effective in each EU member state and, upon coming into force in about two years' time, will implement a complex system of rules.

On the other hand, the UK bill provides more of a framework on which the government can hang further regulation, and it also moves to reinforce coordination between existing regulators, e.g., the Information Commissioner's Office, the Competition and Markets Authority, the Financial Conduct Authority, Ofcom, etc.

While the primary functions of the new UK AI authority would be to consult on and monitor regulatory frameworks and ensure coordination between the existing regulators, rather than to regulate AI itself, it would remain open to the secretary of state to broaden the functions of the AI authority.

This gives the government of the day scope to modify the extent to which AI rulemaking is centralised, as well as the principles on which such regulation is based, quickly and without the need for active approval by Parliament.

Appointment of an AI Officer

The proposed bill also sets out some early steps in addressing the governance of AI within organisations. If it becomes law, organisations using AI tools will need to appoint an AI officer to ensure the safe, ethical, unbiased, and nondiscriminatory use of AI.

The AI officer's role would include ensuring that data used by the business in any AI technology is unbiased. Public companies would have to publish information in their strategic reports on their development, deployment, or use of AI, as well as the name and activities of the AI officer.

The EU AI Act is much more prescriptive on governance processes and, depending on the potential use cases, sets a fairly high compliance bar for organisations wishing to develop and deploy AI in a way that affects EU citizens, even if it takes place outside the EU.

Along with high-risk areas such as health, safety, fundamental rights, the environment, democracy, and the rule of law, the rules will also apply to the insurance and banking sectors. This includes a requirement to conduct mandatory AI impact assessments, and affected individuals will be given rights to challenge organisations whose use of AI affects them.

Protecting owners of content used in training AI

Intellectual property rights holders may be comforted to see the draft bill signalling the potential for new legislation to enable IP enforcement.

The draft bill requires those training AI to report on what – and whose – materials are being used in that training and to give assurances that those materials are used lawfully, and it also proposes an obligation to allow independent audits. Indeed, it expressly sets out compliance with intellectual property law as a core regulatory principle for businesses using AI.

This chimes with the EU AI Act, which will contain protections concerning the use of copyright-protected material in AI.

Regulatory sandboxes

Both the EU AI Act and the UK bill recognise the need for innovation, and the potential for the concentration of power and value within Big Tech if measures are not taken regarding AI development.

The hyperscalers – the largest tech companies, with massive resources and the capacity to process vast quantities of data – already have a substantial advantage over small- and medium-sized enterprises in the AI space due to their ready access to data and established AI programmes. The cost of complying with complex regulation provides a further barrier to entry for smaller organisations.

The proposed legislation in both the EU and the UK therefore addresses the establishment of regulatory sandboxes to assist organisations by providing a controlled environment for the development and testing of new AI systems.

This could be helpful in particular for smaller businesses that might not otherwise have the resources to develop in a safe space but under real-world conditions.

Next steps

The agreed text of the EU AI Act must now be formally adopted by the European Parliament and Council before it hits the EU statute books. It is expected to be fully effective by the first quarter of 2026.

The UK bill is at its second reading stage, which remains to be scheduled. While the drafting suggests that the AI authority is not intended to be a regulator per se, but a body that will coordinate the existing regulators, this draft does not fully align with the government's previous statements that there will be no new AI regulator.

Consequently, it is possible that the government will see to it that the bill does not progress much further, although there are voices of support for tighter regulatory control from members of both houses of Parliament, and indeed across party lines.

These developments will have a major impact on organisations developing AI, and now is the time for leaders within those organisations to start putting AI governance in place, including addressing issues such as accountability, risk management processes, and internal guardrails.

For those organisations already implementing AI governance regimes, these regulatory developments should be looked at carefully to help shape those policies and processes, and, for some, a rethink may be necessary.

It would be all too easy to overengineer a response to the challenges posed by artificial intelligence, and it is important to take a balanced approach to responsible AI innovation – enabling business while managing risk. AI is rapidly becoming a force multiplier when it comes to competing in the modern marketplace, and an organisation's ability to walk that line will be a key factor for success.
