
Ethical Artificial Intelligence, Part Two: Intelligibility


In Part One of this series, I outlined some of the elements that make an AI solution trustworthy, proposing authenticity and honesty as important among them. A component of this, and one of the principles proposed by the House of Lords Select Committee on Artificial Intelligence in its paper published back in April 2018[1], is 'intelligibility'. In my view, we don't need to explain everything about every AI. We do need to explain enough to justify trust.

Making Machine Nature Transparent

The first requirement for an AI to be understood is to understand that it is an AI. Using technology to deceive, mislead or misinform is at best morally questionable, and in some cases downright criminal. Unfortunately, this type of behaviour is not unheard of. In 2016, Canadian dating site Ashley Madison admitted using fake female profiles operated by 'chatbots' to persuade male users to pay for the ability to respond to messages from non-existent women. Amazingly, it was alleged that 80% of initial purchases were made to respond to messages written by machines. The US Federal Trade Commission launched an investigation, leading to a $1.6 million settlement: not an inexpensive mistake, but perhaps not so much of a disincentive as to prevent similar behaviour by others.

In 2010, the Engineering and Physical Sciences Research Council (EPSRC) proposed that whilst it might sometimes be a good thing for an AI to seem like a human, any person who interacts with it should have the ability to determine both its machine-nature and its purpose. I agree, and think it would be dangerous to permit Providers to create products that deceive humans for an undisclosed purpose.

I'd also agree with the view expressed by the European Group on Ethics in Science and New Technologies[2],  which proposed legal limits on how people can be "led to believe that they are dealing with human beings while in fact they are dealing with algorithms". Such limits were described as being necessary to support the principle of inherent dignity required to establish human rights—something I'll come back to later in the series. The Ashley Madison case demonstrates that this has already caused issues, and AI's ability to act human is only going to improve. Deception in this context does not necessarily require malicious intent in order to cause harm. It is easy to imagine how destructive it could be for a human—particularly a vulnerable user such as a child—to attribute emotions to and establish a relationship with a machine unknowingly.

We need to be careful that in regulating this issue we are not overly prescriptive, whilst still addressing the mischief. There will be a question as to the optimum timing for disclosure of the AI's machine-nature, and the answer will not necessarily be the same in all scenarios. As a quick straw poll, I asked a group of around thirty senior in-house counsel whether they would want to be made aware at the start of an interaction if they were communicating with an AI. A substantial majority agreed that they would.


Explanation of the AI

The degree to which a Provider should explain the operation of an AI will be relative to: (i) the solution being described; (ii) to whom it is being described; (iii) its function; and (iv) the use case or context. To take a well-worn example, the public would likely want to see a greater degree of transparency in a system that diagnoses illnesses than in a system that recommends movies. Having said that, if the AI demonstrably saves lives as well as or better than a human doctor, then many would argue that it isn't as important to understand how it works, just that it does. It's worth noting this principle of 'functional equivalence', as it will become particularly useful when we start to think about standards of care from a liability perspective.

We need to consider carefully what is both necessary and proportionate when explaining the AI, balancing the desire to avoid requirements that would have a chilling effect on technological progress against the need to enable trust in the system and support our right to choose. Whatever the explanation, it doesn't necessarily need to expose the developer's proprietary information—though it's possible that in some cases this may be unavoidable to an extent. Either way, for the benefit of both the user and the Provider, that information needs to be conveyed in a way that the user readily understands.

Sometimes a machine may reach a conclusion that we simply do not or cannot understand, and therefore are unable to explain. A common fallacy in many people's understanding of AI is the belief that the processes an AI uses to reach a conclusion are capable of being replicated, or even conceptualised, by the human brain. It would be incorrect to assume that machines will deliver an outcome in the same way that a human would—we're built somewhat differently! It should therefore come as no surprise if the opacity of a system grows in proportion to its cognitive power.

For example, when a neural network is used there are millions of different variables at work, and with deep reinforcement learning the system teaches itself by engaging with its environment whilst following certain behavioural goals. Approaches like this make it very hard to explain what is actually going on. Where the conclusion reached by the AI is based on high-volume and high-velocity data, it simply may not be possible to replicate the process outside of an AI solution. If we are to rely on conclusions that cannot be rationalised by conventional means then of course we must be cautious and proceed with care, but that is not to say we mustn't ever proceed.
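To make the 'millions of variables' point concrete, the short sketch below (my own illustration, not drawn from any particular system, and with layer sizes chosen purely as assumptions) counts the individually tuned numbers in a modest fully connected neural network. Even at this small scale, explaining the contribution of each one to a single decision is plainly impractical.

# A minimal sketch: counting the trainable parameters (weights and biases)
# in a fully connected neural network. The layer sizes are illustrative only.

def count_parameters(layer_sizes):
    """Weights plus biases for a fully connected network with these layer widths."""
    total = 0
    for inputs, outputs in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += inputs * outputs + outputs  # weight matrix plus bias vector
    return total

# A hypothetical classifier: 1,000 input values, three hidden layers, 10 output classes
print(count_parameters([1000, 2048, 2048, 1024, 10]))  # roughly 8.35 million parameters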

In many cases it won't really interest the user how the automated process actually works, just that it does. Richard Susskind rather neatly captured the concept at a Law Society event: "people don't want doctors; they want health", and in those cases looking to the outcome rather than the process will be enough. When employing an electrician, I dare say most homeowners do not need to know how the appliance is being wired—they trust the electrician. That is, so long as the electrician is suitably experienced and qualified, and there is suitable recourse if they make a mistake.

One way of reassuring ourselves of the quality of an AI-driven process is to benchmark it against an existing, proven comparator. This has its drawbacks, of course: it only assures the quality of the results at the time of benchmarking, and it relies on the same quality of data being maintained throughout the system's lifetime for it to give consistent results. If the process is evolutionary (e.g., using machine learning), this could make benchmarking substantially harder to use as a quality assurance tool.
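As a purely illustrative sketch (the function and data below are hypothetical, not a prescribed method), benchmarking can be as simple as measuring how often the AI agrees with a trusted comparator on the same set of cases, bearing in mind that the figure is only valid as at the moment it is taken.

# A minimal, hypothetical sketch of benchmarking an AI-driven process against
# a proven comparator (e.g. assessments agreed by human experts).

def agreement_rate(ai_outputs, comparator_outputs):
    """Fraction of benchmark cases where the AI matches the trusted comparator."""
    matches = sum(1 for a, c in zip(ai_outputs, comparator_outputs) if a == c)
    return matches / len(comparator_outputs)

# Hypothetical results on the same five benchmark cases
ai_results     = ["benign", "malignant", "benign", "benign", "malignant"]
expert_results = ["benign", "malignant", "benign", "malignant", "malignant"]

print(f"Agreement with comparator: {agreement_rate(ai_results, expert_results):.0%}")
# 80% - and only as at the time of benchmarking; later data drift may change it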

We will also need to understand what cultural, social, and legal norms are assumed in the AI's creation and, to the extent practicable, what editorial 'thought process' it uses to reach conclusions. There is no such thing as a single set of universal values, so a system designed in China may take a different approach, and give a different outcome, to a system designed in the US, the Middle East, or in Europe. What's more, moral values, whether those of an individual or commonly held by a group of people, are not static and change over time. Developers need to be prepared to explain the system of values that is applied by the AI.

It isn't just the AI

Given how crucial the datasets are to the training and day-to-day operation of AIs, it is vital to ask not only how the process is tested but also how the training and input data are quality assured. It's not uncommon for data to be biased in one way or another. We mustn't forget that bias is not necessarily a bad thing, although clearly it can be. And we undoubtedly need to be aware that it can impact the process the AI uses to reach a conclusion. Providers should therefore explain what quality standards are applied in the collection and preparation of data for use with the AI solution, how unwanted bias has been removed, and what biases remain.
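By way of illustration only (the field names and records below are hypothetical), one basic check a Provider might document is how the training data is distributed across a sensitive attribute, so that any remaining skew can be disclosed alongside the steps taken to address it.

# A minimal, hypothetical sketch of one data quality check: the distribution of
# training records across a sensitive attribute. Field names are illustrative.
from collections import Counter

def attribute_distribution(records, attribute):
    """Share of training records falling into each value of the given attribute."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical training records for a hiring-screening model
training_data = [
    {"applicant_id": 1, "gender": "female"},
    {"applicant_id": 2, "gender": "male"},
    {"applicant_id": 3, "gender": "male"},
    {"applicant_id": 4, "gender": "male"},
]
print(attribute_distribution(training_data, "gender"))
# {'female': 0.25, 'male': 0.75} - a skew to be disclosed or corrected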

For an AI to satisfy the test of intelligibility, therefore, we need to recognise its machine nature and its 'thought' process as something distinct from our own, and either understand how it reaches its decisions, or at least be able to justify those decisions with reference to something outside the machine itself.
 


[1] House of Lords Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able?, 16 April 2018.

[2] European Group on Ethics in Science and New Technologies, Statement on Artificial Intelligence, Robotics, and Autonomous Systems, 9 March 2018.

 