Interview with Simon Bristow, Global Data Privacy and AI Expert, Novartis

Ranjit Dhindsa, Head of Employment, Pensions, Immigration and Compliance at Fieldfisher, speaks to Simon Bristow, Global Data Privacy and AI expert at Novartis. 

Can you tell us a little bit about your background?

At Novartis, I design and own global governance and processes for data privacy, digital, and AI compliance. My recent focus has been on the creation of a new risk and compliance framework for AI.

I have been a data privacy professional for 15 years and have led a range of legal and compliance programmes, both in-house and as a consultant, primarily in the life sciences industry.

Can you tell us a little bit about this AI framework and what the process involves?

In our industry, AI has the potential to improve the speed and accuracy of diagnosis, treatment protocols, drug discovery, drug development, patient monitoring, and patient care. On the other hand, AI presents risks to individuals, groups, and societies.

Our proposed risk and compliance framework balances these risks and opportunities. Within the framework, AI systems are subject to a two-stage process.

First, potential impacts are assessed in a range of areas, including bias and discrimination.

Second, systems that pose a higher risk are subject to an in-depth assessment and enhanced risk management. The framework also addresses governance bodies, roles and responsibilities, and risk management processes.

I know that your focus is on building ethical AI – what are the values and behaviours that underpin this approach?

Our approach is centred on the principles set out in the Novartis commitment to ethical and responsible use of AI.

These cover a range of topics, including bias mitigation, transparency and explainability, and security and privacy.

Our approach is also driven by feedback from the patient community, as we recognise that the use of AI should benefit patients.

In your experience, where does risk within AI come from? 

Risks exist at every stage of the lifecycle of an AI system, though different risks may arise at different stages. We have created a ‘risk universe’ that maps risks to the different lifecycle stages.

The risk universe also indicates which stakeholders (eg data scientists) need to be involved in the risk assessment process and at what stage.

Risks may exist in many different aspects of an AI system, including bias in datasets, a lack of human review and oversight, the use of third parties, and a lack of transparency.
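
To make this concrete, a risk universe of this kind can be modelled as a simple data structure that maps each lifecycle stage to its typical risks and to the stakeholders who should join the assessment at that stage. The Python sketch below is a hypothetical illustration: the stage names, risks, and roles are assumptions chosen to echo the examples above, not the actual Novartis framework.

```python
# Hypothetical sketch of a 'risk universe' as a data structure: each AI
# lifecycle stage maps to the risks that typically arise there and the
# stakeholders to involve at that stage. Stage names, risks, and roles
# are illustrative assumptions, not the actual Novartis framework.

from dataclasses import dataclass

@dataclass
class LifecycleStage:
    name: str
    risks: list[str]          # risks that typically arise at this stage
    stakeholders: list[str]   # who joins the risk assessment here

RISK_UNIVERSE = [
    LifecycleStage("design",
                   ["unrepresentative training data", "unclear purpose"],
                   ["data scientists", "business owner", "privacy"]),
    LifecycleStage("development",
                   ["model bias", "third-party components"],
                   ["data scientists", "security"]),
    LifecycleStage("deployment",
                   ["lack of human oversight", "lack of transparency"],
                   ["compliance", "business owner"]),
    LifecycleStage("monitoring",
                   ["model drift", "emerging harms"],
                   ["data scientists", "compliance"]),
]

def who_to_involve(stage_name: str) -> list[str]:
    """Return the stakeholders to involve for a given lifecycle stage."""
    for stage in RISK_UNIVERSE:
        if stage.name == stage_name:
            return stage.stakeholders
    raise ValueError(f"unknown lifecycle stage: {stage_name!r}")

print(who_to_involve("design"))  # ['data scientists', 'business owner', 'privacy']
```

A structure like this makes it straightforward to answer the practical question the interview raises: which stakeholders need to be in the room at each stage.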

Do you carry out impact assessments prior to the deployment of AI technologies?

As mentioned, we have a two-stage risk assessment process. The first stage, impact assessment, considers potential impacts on individuals, groups, societies, and the organisation, in relation to:

  • Equality and non-discrimination
  • Patient safety
  • Privacy and security
  • Health, safety, and the environment
  • Organisational risk

Each topic is rated according to its potential impact and the likelihood of occurrence. This leads to an overall risk rating (low, medium, high, or prohibited), which drives the risk management approach and determines whether a more detailed assessment is required.
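
As a concrete illustration of how such a rating step might work, the following Python sketch assumes a three-point impact and likelihood scale, a simple combination matrix, and a worst-case aggregation rule across topics. The scales, matrix, and thresholds are illustrative assumptions, not the actual Novartis methodology.

```python
# Illustrative sketch of the two-stage triage described above: each topic
# is rated for impact and likelihood, the worst per-topic rating becomes
# the overall rating, and higher ratings trigger an in-depth assessment.
# The scales and matrix are assumptions, not the Novartis framework.

RATING_ORDER = ["low", "medium", "high", "prohibited"]

# Hypothetical 3x3 matrix: (impact, likelihood) -> rating.
RISK_MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",
    ("low", "high"): "medium",   ("medium", "low"): "low",
    ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",
    ("high", "high"): "prohibited",
}

def overall_rating(topic_ratings: dict[str, tuple[str, str]]) -> str:
    """Combine per-topic (impact, likelihood) pairs into an overall rating.

    The overall rating is the worst rating across all topics.
    """
    ratings = [RISK_MATRIX[pair] for pair in topic_ratings.values()]
    return max(ratings, key=RATING_ORDER.index)

def needs_detailed_assessment(rating: str) -> bool:
    """High-risk (and prohibited) systems get an in-depth assessment."""
    return RATING_ORDER.index(rating) >= RATING_ORDER.index("high")

# Example: a hypothetical AI tool scored across the five impact areas.
scores = {
    "equality and non-discrimination": ("medium", "medium"),
    "patient safety": ("high", "low"),
    "privacy and security": ("medium", "low"),
    "health, safety, and the environment": ("low", "low"),
    "organisational risk": ("medium", "medium"),
}
rating = overall_rating(scores)                  # -> "medium"
print(rating, needs_detailed_assessment(rating))  # medium False
```

Under the worst-case rule sketched here, a single high-scoring topic is enough to push the overall rating up and trigger the in-depth second-stage assessment.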

Should organisations have a multi-disciplinary team that considers the development of ethical AI? If so, which teams would be involved?

AI risk management should not be left to a single team or function.

A multi-disciplinary team should include specialists in risk management and regulatory compliance (data privacy professionals are often a good fit), specialists in AI and technology, data scientists, and business stakeholders.

Other specialists may also need to be involved, for example to assess the impact of an environmental risk. In smaller organisations, one individual may perform multiple roles.

What are the most difficult aspects of identifying and addressing risks in AI tools?

AI risk management is a challenging topic that continues to evolve. Key challenges include:

  • Understanding AI technology and the models behind it, especially where technology is obtained from third parties
  • Embedding AI risk management into an organisation’s overall IT and digital governance frameworks and business-led projects
  • Identifying and managing risks throughout the lifecycle of an AI system
  • Finding and obtaining input from specialists (at present, there is a limited pool of AI risk management experts to draw from)
  • Keeping up to date with the flood of new information on AI

How important is human involvement in the development and implementation of AI tools? Why?

Human involvement is very important. It is necessary to understand risks from a human perspective, and to develop effective transparency and explainability controls. Having a diverse development team may reduce the risk of bias.

Human involvement may also be a legal requirement, for example where Article 22 of the General Data Protection Regulation is applicable.

How important are transparency and explainability? How can organisations ensure that AI is transparent and explainable?

These are important and challenging topics. For example, it may be difficult to be fully transparent with users when a ‘black box’ model is licensed from a third party that wishes to protect its intellectual property. There are similar challenges with explainability, though there is a range of guidance available, including from the Information Commissioner’s Office.

When designing an AI system, organisations should consider seeking feedback from a diverse range of stakeholders.

For example, it may be useful to consult with a patient group when designing a system that uses patient data. These stakeholders can help to shape approaches to transparency and explainability.

How important is it to have policy frameworks in place in relation to the development and use of AI?

A policy framework should be at the core of AI governance and risk management. Some organisations may have dedicated AI policy documentation, while others may take a holistic approach and incorporate AI into their existing framework.

In either case, AI should be addressed at all levels, from high level ethical principles down to process and control documentation. As well as a policy framework, organisations should also have clear AI business and risk strategies.

Do you have any final top tips for other companies developing their own AI risk management frameworks?

Organisations should act now and proceed on the basis that there is no perfect, one-size-fits-all approach to AI risk management. In particular, organisations should:

  • Consider creating a set of ethical principles to shape their approach to AI
  • Consider using existing data privacy frameworks and programmes as a basis for AI risk management
  • Build a cross-functional group of stakeholders and define clear roles and responsibilities
  • Consider taking an agile implementation approach, initially focussed on the most business-critical and high-risk AI systems
  • Be agile and adaptable to constant changes in AI technology, industry practice, laws and regulations, and public perception

Disclaimer: The opinions expressed in this article are the author's own and do not necessarily reflect the view of Novartis Pharma AG or other members of the Novartis Group.
 
 
