Insight

New UK Artificial Intelligence (Regulation) Bill introduced

Locations

United Kingdom

A Private Members' Bill entitled the 'Artificial Intelligence (Regulation) Bill', which provides for 'the regulation of artificial intelligence; and for connected purposes', was introduced to the UK Parliament by Lord Holmes of Richmond and had its first reading in the House of Lords last week, on 22 November 2023.

The bill is only about 7 pages long, with 9 sections in its current form. Its main purpose is to establish a central 'AI Authority' to oversee the regulatory approach to AI and to set out key principles to which the AI Authority must have regard when regulating AI.

A quick word on Private Members' Bills (PMBs): these are bills introduced by MPs and Peers who are not government ministers. PMBs give backbench MPs and Peers an important opportunity to initiate legislative proposals and to respond to issues of public interest and concern, and the procedures differ from those that apply to government bills. The success rate for PMBs is not high and most do not make it into law, but this one might be different given the importance of the subject matter, and it should at least attract attention. The government's position on the bill is not yet clear, although Rishi Sunak was reported to have been keen on establishing some kind of 'AI Authority' in the UK, which features in the bill, so this could be the prompt the government needs to consider legislative action.

Some of the key provisions include:

AI Authority (Section 1)

  • The AI Authority will:
    • Ensure that relevant regulators take account of AI
    • Ensure alignment of approach across relevant regulators
    • Undertake a gap analysis of regulatory responsibilities
    • Coordinate a review of relevant legislation, including product safety, privacy and consumer protection, to assess its suitability to address the challenges and opportunities presented by AI
    • Monitor and evaluate the overall regulatory framework’s effectiveness and the implementation of the principles set out in the bill, including the extent to which they support innovation
    • Assess and monitor risks across the economy arising from AI
    • Conduct horizon-scanning, including by consulting the AI industry, to inform a coherent response to emerging AI technology trends
    • Support testbeds and sandbox initiatives (see section 3) to help AI innovators get new technologies to market
    • Accredit independent AI auditors
    • Provide education and awareness to give clarity to businesses and to empower individuals to express views as part of the iteration of the framework
    • Promote interoperability with international regulatory frameworks

Regulatory principles (Section 2)

  • The AI Authority must have regard to the principle that regulation of AI should deliver:
    • Safety, security and robustness
    • Appropriate transparency and explainability
    • Fairness
    • Accountability and governance
    • Contestability and redress
  • Businesses which develop, deploy or use AI should (and note the nod to protecting IP!):
    • Be transparent
    • Test it thoroughly and transparently
    • Comply with applicable laws, including in relation to data protection, privacy and intellectual property
  • AI and its applications should:
    • Comply with equalities legislation, be inclusive, be non-discriminatory
    • Meet the needs of those from lower socio-economic groups, older people and disabled people
    • Generate data that are findable, accessible, interoperable and reusable
  • Any burdens or restrictions imposed in relation to the use of AI must be proportionate to the benefits, taking into consideration:
    • The nature of the service or product being delivered
    • The nature of the risk to consumers and others
    • Whether the cost of implementation is proportionate to that level of risk
    • Whether the burden or restriction enhances UK international competitiveness

AI responsible officers (Section 4)

  • The Secretary of State must, by regulations and having consulted with the AI Authority and other relevant people, provide that any business which develops, deploys or uses AI must have a designated AI officer to ensure the safe, ethical, unbiased and non-discriminatory use of AI by the business and to ensure that data used by the business in any AI technology is unbiased.

Transparency, IP obligations and labelling (Section 5)

  • The Secretary of State must, by regulations and having consulted with the AI Authority and other relevant people, provide that:
    • Any person involved in training AI must:
      • Supply to the AI Authority a record of all third-party data and intellectual property (“IP”) used in that training
      • Assure the AI Authority that (a) they use all such data and IP by informed consent; and (b) they comply with all applicable IP and copyright obligations
    • Any person supplying a product or service involving AI must give customers clear and unambiguous health warnings, labelling and opportunities to give or withhold informed consent in advance
    • Any business which develops, deploys or uses AI must allow independent third parties accredited by the AI Authority to audit its processes and systems
  • Regulations under this section may provide for informed consent to be express (opt-in) or implied (opt-out) and may make different provision for different cases

Interpretation (Section 7)

  • "Artificial intelligence” and “AI” mean technology enabling the programming or training of a device or software to— (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective
  • AI includes generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained

Comment

Although the bill is short, somewhat high level and more of a framework for delivering detailed regulations (by statutory instrument (SI)), it is a good starting point. It establishes a regulatory body ('the AI Authority') that will sit above and coordinate existing regulators (e.g. the ICO and CMA), and it addresses the governance of AI use in businesses by introducing 'AI responsible officers'.

It is also reassuring to see provisions relating to the protection of intellectual property in the use of AI. Section 2 sets out a general principle that businesses which develop, deploy or use AI should comply with applicable IP law, and section 5 requires those involved in training AI to supply a record of all third-party IP used in that training. This is in line with the EU AI Act, which proposed last-minute protections for copyright in generative AI by requiring the disclosure of any copyright material used to train an AI model. It is good to see the bill put the spotlight on the increasing concerns about the use of unlicensed content in generative AI.

However, the bill will need to be refined if it is to operate effectively and be futureproof. There are clear gaps, including a lack of provisions on how to deal with breaches of the regulations or wrongdoing in the use of AI (e.g. failing to disclose third-party IP used to train an AI model). Section 8 does state that regulations may create offences and require the payment of fees, penalties or fines, but it goes no further than that.

Also, whilst the bill appears to give the Secretary of State, rather than Parliament, the power to decide on important aspects of the regime, some of those decisions would still be subject to parliamentary scrutiny. For example, any draft SI containing regulations under sections 1 and 2 must be approved by both Houses of Parliament, adding another layer into the mix, and other SIs can be annulled by either House.

Another question is how this bill would sit with the numerous other government initiatives, reports, codes of conduct and global agreements currently circulating (some of which were discussed at the recent AI Safety Summit at Bletchley Park). It will be interesting to watch it all play out, especially against the backdrop of the government's desire for a light-touch, 'pro-innovation' approach as set out in its White Paper.

Next steps

Like government bills, PMBs must pass through both Houses of Parliament if they are to become law. The bill's second reading in the House of Lords (a general debate on all aspects of the bill) is yet to be scheduled. Parliamentary time to debate PMBs is usually limited, and whilst the bill is only 7 pages, what it does contain needs careful consideration. But time is of the essence if the government is to keep up with the incredibly fast pace at which AI is developing and fulfil its ambition for the UK to become a global AI superpower.

We will of course be following the bill's progress closely and will update you as and when we have news.

