
Rules on the horizon for AI in health screening


As enthusiasm builds for deploying artificial intelligence in healthcare applications, regulators have begun to indicate how they plan to manage the technology to ensure it is used safely and ethically.

Artificial intelligence (AI) could offer significant benefits for healthcare. However, to date, regulators and healthcare providers have been cautious about implementing the technology until more is known about its opportunities and risks.

In 2021, following a rapid review of AI in breast cancer screening, the UK's National Screening Committee (NSC) concluded that the evidence at the time was insufficient in quality and quantity to recommend the use of AI for image classification in breast screening.

Research published by Harvard University in 2021 identified a lack of trust in AI among patients, while articles in respected medical journals, including the British Medical Journal (BMJ), have outlined ethical concerns about deploying AI in healthcare settings.

However, familiarity with and acceptance of AI among medical practitioners and the public have increased in the past three years, and AI capabilities in healthcare, particularly in health screening, are developing apace, offering tantalising glimpses of huge potential improvements.

Mindful of these historical concerns, lawmakers and regulators recognise that such promising developments, in what has until now been a largely unregulated area, need to be managed with appropriate controls.

The EU's proposed Artificial Intelligence Act and new UK secondary legislation arising out of the Medicines and Healthcare products Regulatory Agency's (MHRA) ongoing Software and AI as a Medical Device Change Programme are among the key legal changes medical AI developers will need to consider.

The Artificial Intelligence Act

In April 2021, the European Commission published its draft Artificial Intelligence Act (AI Act), aimed at providing a legal framework to "facilitate investment and innovation in AI" and "facilitate the development of a single market for lawful, safe and trustworthy AI".

Following various amendments introduced by the Council of the EU and the European Parliament, the Act will enter its final discussion stages before becoming binding law, possibly as early as this year.

Since the draft AI Act was published, various committees, including the European Parliament's Committee on Legal Affairs (JURI) and its Committee on Industry, Research and Energy (ITRE), have proposed changes.

In an indication of how controversial the AI Act is, and of how important this area of law is to future technological growth and human rights, MEPs tabled 3,312 amendments to the text, a huge number even for the perennially detail-oriented European Parliament.

By the time it becomes law, the final Act will have undergone many refinements and revisions, befitting its ambition to provide a comprehensive legal framework for AI in Europe.

Much like the GDPR, the regulation will have extra-territorial effect: if an AI system affects people located in the EU or is placed on the EU market, the AI Act will apply. So, although the UK is no longer a member of the EU, the legislation is likely to affect those seeking to deploy AI solutions in the UK, adding a further layer of legal consideration for the already highly regulated healthcare sector.

How will AI in health screening be regulated?

An AI system will be deemed 'high-risk' under the AI Act if it is a product covered by legislation listed in Annex II of the Act's draft text and is required by that legislation to undergo a third-party conformity assessment with a view to placing it on the market or putting it into service.

One of the pieces of legislation listed in Annex II is Regulation (EU) 2017/745 on medical devices (the "Medical Devices Regulation") which regulates all "general" medical devices. This includes "software … intended by the manufacturer to be used, alone or in combination, for human beings" for specified medical purposes including diagnosis of disease.

Consequently, an AI system used to assist the diagnosis of medical conditions will be deemed high-risk and subject to the strict rules imposed by the AI Act.

To achieve its aim of providing a high level of protection for "overriding reasons of public interest such as health, safety, consumer protection and … other fundamental rights" the AI Act would place a number of key requirements on AI cancer screening systems, such as:

  • Implementation of a continuous, iterative risk management system;
  • Capability for human oversight while the system is in use; and
  • Transparent operation with clear instructions for users.

The Act imposes these obligations on providers of high-risk AI systems, with other obligations extending to importers, distributors, and users.

Those deploying an AI system under the regulation will therefore need to weigh whether the benefit it delivers to the screening workflow justifies the onerous compliance requirements and the risk of heavy fines for non-compliance with the AI Act.

The UK will also have its own regulatory approach, so AI screening systems used by UK clinicians will likely need to comply with multiple regulatory regimes.

In contrast to the EU, the UK is pushing more responsibility onto sector-specific regulators such as the MHRA. The intention is a regime tailored to each sector that makes the UK the "most pro-innovation regulatory environment in the world".

However, if care is not taken, it could result in frustrating regulatory complexities for AI service providers.

A lack of harmonisation between international approaches could similarly culminate in an impenetrable web of regulation and guidance that ultimately stifles growth and development.

The UK has acknowledged this risk and has stressed that its regulatory framework will be light touch, although what this means in practice remains to be seen.

With thanks to Solicitor Sophia Steiger, co-author of this article.
