Right now, the regulation of artificial intelligence is high on the agenda for policymakers and regulators. There is a proposal for a new European AI Act, the UK's AI strategy has been published, and the Medicines and Healthcare products Regulatory Agency (MHRA) is considering how medical device regulation applies to artificial intelligence. With all of this going on, now is the time for providers in the digital health space to consider how to remain compliant and safeguard their position in this lucrative and growing market.
In this article, Fieldfisher lawyers Chris Eastham and Olivia Woolston-Morgan identify some of the risks that lawmakers are seeking to address as they relate to clinical governance, and how we're helping health technology businesses to respond in this fast-moving arena.
AI offers huge potential for improving the clinical effectiveness of diagnosis and treatment to deliver better patient outcomes, including by scaling services that would otherwise be limited by staffing constraints, or by analysing data to develop improved offerings.
But it would be all too easy for these new tools, created by software developers and data scientists with limited input from clinicians, and with little training available to front-line staff on their use and limitations, to be misused. For 'at home' AI solutions, often widely available without interaction with healthcare professionals and delivered via mobile devices, the risk of misdiagnosis becomes even more acute.
There are obvious opportunities for AI to reduce risk to patients, practitioners, and organisations by spotting patterns in data that a human wouldn't be able to perceive. However, AI can also introduce risk to the organisation by bringing with it additional compliance requirements, and nuanced legal issues that are not fully addressed in conventional software development, procurement, and management approaches.
Legislators are concerned about the potential for patient harm, driven by noise in clinical inputs, differences between training data and real-world data, and use in varying clinical contexts. Missed diagnoses, treatments wrongly indicated and delivered, and inappropriate medical interventions could all lead to catastrophic consequences for patients, and liability for care and service providers.
The introduction of medical AI expands an already complex ecosystem involving patients, professionals, and providers, to include software development teams and data scientists.
A current lack of clarity around who takes liability for problems caused by AI, and therefore what appropriate mitigating steps should be taken at each step in the supply chain, puts businesses and healthcare professionals at risk.
We're already seeing rules imposed to allocate responsibility, and failure to comply will lead to significant fines and penalties—even up to 6% of global turnover.
Patient and Public Involvement
Securing feedback from patients and the public to gain insight on quality of care could be made more effective by providing easier or more engaging means for individuals to provide feedback. AI could do this as effectively as clinicians in many contexts; however, caution is advised to ensure the AI is able to interact appropriately with everyone it may encounter, some of whom may be intrinsically or emotionally vulnerable.
The risk of bias (whether within datasets, algorithms and processes, or development teams) is a real one.
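By way of illustration only, one simple way dataset bias can be surfaced is a demographic parity check: comparing the rate of a positive outcome across groups defined by a protected attribute. The sketch below uses entirely hypothetical data and field names, and is not a substitute for a proper fairness audit.

```python
# Illustrative only: a minimal demographic-parity check on hypothetical
# triage outcomes, grouped by a protected attribute.

def positive_rate(records, group):
    """Share of records in `group` that received a positive outcome."""
    in_group = [r for r in records if r["group"] == group]
    return sum(r["outcome"] for r in in_group) / len(in_group)

# Hypothetical audit sample: outcome 1 = referred for further care.
records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]

gap = abs(positive_rate(records, "A") - positive_rate(records, "B"))
print(f"Demographic parity gap: {gap:.2f}")  # a large gap warrants review
```

A large gap does not prove unlawful bias, but it is the kind of objective signal a governance process can set thresholds around and escalate for clinical and legal review.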
A substantial challenge for AI in healthcare will be establishing trust, which can be difficult with the associated lack of transparency about the design and operation of AI systems. Explainability, in a format (or formats) suitable not only for the clinician but also the patient, will be fundamental to establishing trust.
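As a hypothetical sketch of what explainability can mean in practice: for a simple linear risk score, each feature's contribution (weight times value) can be reported alongside the result, giving both clinician and patient a plain account of what drove the output. The weights and features below are invented for illustration; real models, particularly deep ones, require more sophisticated explanation techniques.

```python
# Illustrative only: per-feature contributions for a hypothetical
# linear risk score, reported alongside the overall result.

weights = {"age": 0.02, "blood_pressure": 0.01, "smoker": 0.30}  # hypothetical

def explain(patient):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: weights[f] * patient[f] for f in weights}
    return sum(contributions.values()), contributions

score, parts = explain({"age": 60, "blood_pressure": 140, "smoker": 1})
for feature, value in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: contributes {value:.2f} of total {score:.2f}")
```

The point is not the arithmetic but the format: an explanation decomposed into named factors can be rendered differently for a clinician (full breakdown) and a patient (top factors in plain language).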
The ability of an AI system to review high volumes of documentation to check compliance with guidelines, for example in the context of medical notes audits, presents potential use cases in an auditing function.
On the other hand, there are some very challenging questions around how the actions of an AI system could be audited—this is something that regulators have recently been consulting on. Understanding what standards AI should be assessed against, and making AI sufficiently transparent to allow such an audit are fundamental questions.
Whilst the potential for harm may not be as obvious in staff management as it is in a more clinical setting, this is still considered by regulators to be a high-risk area. AI could provide benefits when it comes to identifying under-performance or picking up errors, or in recruitment and checking qualifications; however, tight controls will be required to avoid unwanted bias. The rights of individuals to object to automated decision-making will also need to be taken into account.
Information and Privacy
The use of AI in digital health certainly challenges traditional principles of privacy—how do systems that allow machine learning to derive what information is necessary from large data sets sit with the requirement of data minimisation, for example? AI can also generate new personal data without the permission of the data subject. The risk of personal data being shared and used without informed consent is just one of the risks for data privacy and security when using AI in healthcare. We must also consider data being repurposed without the patient's knowledge, data breaches that could expose sensitive or personal information, and the risk of harmful cyberattacks on AI solutions (at both patient and hospital level).
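One practical expression of the data-minimisation principle is an allow-list filter applied before any record reaches a model, so that direct identifiers never enter the processing pipeline at all. The field names below are hypothetical and purely illustrative of the pattern.

```python
# Illustrative only: an allow-list filter applied before data reaches a
# model, as one practical expression of data minimisation.

ALLOWED_FIELDS = {"age_band", "symptoms", "duration_days"}  # hypothetical

def minimise(record):
    """Drop every field not on the allow-list before downstream processing."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Jane Doe", "nhs_number": "000 000 0000",
    "age_band": "50-59", "symptoms": ["cough"], "duration_days": 4,
}
print(minimise(raw))  # identifiers never reach the model
```

An allow-list (rather than a block-list) defaults to excluding data, which sits more comfortably with the minimisation principle: a new field must be positively justified before it is processed.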
The UK Information Commissioner's Office (ICO) has stressed the importance of privacy by design, i.e., the mitigation of risks being engaged with at the design stage, on the basis that "retrofitting compliance as an end-of-project bolt-on rarely leads to comfortable compliance or practical products". Compliance with data protection principles needs to be at the forefront of all AI projects to ensure that any systems developed benefit the data subjects whose data AI approaches rely on.
The ICO has recently updated its AI toolkit, which aims to help its users understand some of the AI-specific risks to individual rights and freedoms, as well as providing practical steps to mitigate, reduce or manage them.
Additional Regulatory Considerations
Under the Medical Device Regulations 2002 (as amended), software will be classified as a medical device in the same class as any device that it drives or influences. Naturally, this will extend to artificial intelligence, and the Medicines and Healthcare products Regulatory Agency (MHRA) is currently analysing feedback to its 2021 consultation on potential changes to the way the Medical Devices Regulations 2002 apply to software and AI as a medical device (AIaMD). The likelihood is that this will take the form of further guidance and standards, rather than new legislation, and will address both pre-market requirements as well as post-market surveillance. We can anticipate this over the course of the next 12 months.
When assessing AI systems, and whether to build or procure them, it will be important to carry out the following tasks:
- assess which AI technology is most suitable for your purposes—different technological approaches will either exacerbate or mitigate inherent risks in AI;
- consider specific use cases and identify any regulatory requirements that may be triggered;
- align internal structures, roles and responsibilities maps, training requirements, policies, and incentives to an overall AI governance and risk management strategy;
- allocate responsibility and governance for AI projects not only internally, but with partnering organisations, and make sure that the team developing and managing the AI project has the requisite skills and resources.