
Opportunities and challenges for AI in healthcare

Taly Dvorkis
17/07/2023

Locations

United Kingdom

Last month, Fieldfisher's Digital Health team hosted an event considering the drivers and challenges for AI in healthcare. There was evident consensus amongst attendees that AI has the potential to be the most transformative technology of our generation, particularly in the area of healthcare.

We're hugely grateful to our guest speakers for their insights, and we wanted to share some of the key observations that were raised in the presentations and the panel session. In a rare break from tradition, we will not be discussing legal risk in this article.

Opportunities

We, like our speakers, are seeing huge enthusiasm for AI right now across all sectors. In the UK healthcare sector, the challenge of keeping up with increasing demand using limited resources is creating urgency to find ways to deliver care more efficiently and effectively. We expect AI to provide part of the solution.

With Rishi Sunak seeking to position the UK as a world leader in AI, and the recent publication of a national AI strategy intended to support innovation, the UK could become an ideal place for AI to thrive. This is not going unnoticed by businesses. Palantir, for example, has announced that the UK will be a focal point for its AI development in Europe, due in part to the UK offering the best access to AI talent in Europe and a pragmatic regulatory environment.

A further opportunity for AI in front line care is the wide range of potential use cases to explore, from analysing medical notes to identify trends and patterns that could support diagnoses, to assisting the development of diagnostic and treatment regimes by bringing together disparate sources of data, including complex literature. This is increasingly becoming possible with the development of large language models, which can connect complex inputs with complex outputs and introduce a semantic element to 'understand' intent.

Healthcare in the UK is being thought of more holistically than ever before, and as a result it is increasingly important to be able to integrate different data sets. These use cases will be particularly valuable for population health management and for helping clinicians obtain more real-time information.

There is a real opportunity for the private sector here: while the demand to deliver on the promise of AI exists within public sector healthcare, the public sector does not have the capability or capacity that currently exists within businesses.

Challenges

One major challenge to the implementation of AI in the healthcare sector is data quality. The inclusion of data that is not clinically meaningful, has not been subject to independent evaluation, or is of narrow application creates risk. Anyone considering training AI for health purposes will need to consider data governance, including ensuring high-quality, clinically meaningful data that is representative of the relevant populations, removing unwanted bias, and ensuring traceability of data.
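By way of illustration, the sketch below shows the kind of automated checks a data governance process might run over a training set: missing-value rates, representativeness against a reference population, and provenance of each record. It is a minimal sketch in Python using pandas, with hypothetical column names such as `age_band` and `source_id`, not a complete governance framework.

```python
# Minimal sketch of automated data-governance checks, assuming a hypothetical
# tabular training set with 'age_band', 'sex', 'source_id' and 'outcome' columns.
# A real clinical pipeline would add much more (clinical validation, consent, audit).
import pandas as pd

def governance_report(df: pd.DataFrame, reference_shares: dict) -> dict:
    """Run basic quality, representativeness and traceability checks."""
    report = {}

    # 1. Quality: flag columns with a high proportion of missing values.
    report["missingness"] = df.isna().mean().loc[lambda s: s > 0.05].to_dict()

    # 2. Representativeness: compare observed demographic shares with a
    #    reference distribution (e.g. population statistics) to surface bias.
    gaps = {}
    for column, expected in reference_shares.items():
        observed = df[column].value_counts(normalize=True)
        gaps[column] = {
            group: round(observed.get(group, 0.0) - share, 3)
            for group, share in expected.items()
        }
    report["representation_gap"] = gaps

    # 3. Traceability: every record should carry a provenance identifier.
    report["untraceable_rows"] = int(df["source_id"].isna().sum())
    return report

if __name__ == "__main__":
    data = pd.DataFrame({
        "age_band": ["18-39", "40-64", "65+", "65+", "40-64"],
        "sex": ["F", "M", "F", "F", None],
        "source_id": ["hosp-a", "hosp-a", None, "hosp-b", "hosp-b"],
        "outcome": [0, 1, 1, 0, 1],
    })
    expected = {"sex": {"F": 0.51, "M": 0.49}}
    print(governance_report(data, expected))
```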

A critical question is what guardrails will exist to ensure patient safety and confidentiality. AI is not a new phenomenon, and many traditional guardrails already exist, spanning areas such as transparency (especially regarding risk thresholds and tolerances within models); protection of sensitive information; auditing of AI-made decisions; versioning to track changes to the data, models and parameters used, and the conditions the system operates in; and transparency around which models and data are used for which decisions. However, there remains a substantial disconnect between theory and practice within the healthcare sector.
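As a simplified illustration of what versioning and decision auditing might look like in practice, the Python sketch below defines a hypothetical audit record capturing the model version, training-data version, inference parameters and operating conditions behind each AI-assisted output, and appends it to an append-only log. The field names and logging approach are illustrative assumptions rather than any established standard.

```python
# Minimal sketch of an audit record for AI-assisted decisions, assuming a
# hypothetical service that logs which model and data versions produced each
# output. Field names are illustrative, not a standard schema.
import json
import hashlib
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    model_name: str
    model_version: str          # exact model build used for this decision
    data_version: str           # version of the data the model was trained on
    parameters: dict            # inference-time parameters (thresholds, tolerances)
    operating_conditions: dict  # e.g. deployment site, software environment
    input_hash: str             # hash of the input, so the decision can be traced
    output: str                 # the model's recommendation (not the final clinical decision)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionRecord, path: str = "decision_audit.jsonl") -> None:
    """Append the record to an append-only JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    patient_note = "free-text clinical note ..."
    record = DecisionRecord(
        model_name="triage-assistant",
        model_version="2.3.1",
        data_version="training-set-2023-05",
        parameters={"risk_threshold": 0.7},
        operating_conditions={"site": "hospital-a", "runtime": "python-3.11"},
        input_hash=hashlib.sha256(patient_note.encode()).hexdigest(),
        output="refer for specialist review",
    )
    log_decision(record)
```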

AI ethics has seen a remarkably rapid convergence on a set of principles that most people would recognise as binding, or at least as good guidance, in this space. This convergence was certainly much quicker than in bioethics, where the field took far longer to get a grip on genetic manipulation, with a resulting lack of trust and difficulty in progressing technology in that area. When we look at how ethical principles are implemented, however, there is still an enormous gap: stating high-level principles does not necessarily lead to good ethical decision-making. One lesson from medical ethics is that, even with clear frameworks to rely on, ethical difficulties primarily arise when principles come into conflict. When the principles are really interrogated, they do not speak for themselves; there needs to be an explanation of what they mean in any given situation.

There is a tendency to believe that, given the right information, AI will make morally sound decisions for us. In reality, what it may be able to do is make good technical decisions, so we must be careful not to give it more moral authority than it deserves. Where moral judgement is required, human involvement will need to be supported by artificial intelligence, not supplanted by it. AI in this sphere should be considered a means of assisting clinicians rather than a way of placing medical processes on autopilot.

Complementary intelligence

One of the exciting things about healthcare in the UK is that we have an incredibly digital population, with over 20 million registered users on the NHS app. One benefit of the shift towards a focus on population health is that we are now starting to think more broadly, not just about the hospital. We are increasingly able to change the way we support patients by enabling them to monitor their own care. In a world of much greater democratisation of information, much greater levels of patient understanding, and where generative AI may actually help patients understand their care, we need to take patients on the journey with us.

To capitalise on the opportunities presented by this rapidly evolving field, organisations will need to adopt a 'Responsible AI' approach throughout the entire system lifecycle, using AI to support front-line staff. Done right, we hope this will benefit society by delivering the benefits of AI safely and effectively, and by alleviating the ever-growing pressure on human time and effort under which the healthcare sector currently operates.
 
