Ethical Artificial Intelligence, Part Four: Decision-making

Should we ever grant a machine the power to make decisions completely autonomously? How might doing so affect our concepts of fairness and justice? And how are we dealing with these questions now?

If we can be sure that an AI will make decisions in a completely fair and impartial way, then we may be more inclined to trust it to do so. However, the United Nations' Universal Declaration of Human Rights recognises the inherent dignity of all human beings as the foundation of justice, which in turn requires that each person be a free moral agent who is simultaneously autonomous and responsible[1]. To bear moral responsibility, human beings must retain ultimate control over decisions that affect other human beings, including the ability to decide when and how to delegate (or reassume control of) any part of a decision-making process.

A useful historical parallel is the development of the doctrine of equity in English law. Judges developed this body of rules over time to mitigate the injustice that the strict (and often harsh) application of the law might inflict. Similarly, the strict application of an algorithm to data may result in injustices that a human agent would not permit if the decision were still in their hands. Wherever there is scope for such injustices to occur, there is an argument for not delegating responsibility for the decision or, at the very least, for making the decision open to challenge without undue difficulty.

Subject to some exceptions, Article 22 of the General Data Protection Regulation[2] and section 49 of the Data Protection Act 2018 enshrine the right not to be subject to decisions based solely on automated processing where those decisions significantly affect the individual. This right to require human intervention gives companies an impetus to develop AI systems that are interpretable, so that they can explain and, if necessary, justify the decision-making process. If providers give sufficient explanation up front, they are likely to face fewer legal objections.
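To make the idea of interpretability more concrete, the following is a minimal, purely illustrative sketch (the credit-style scenario, feature names and data are hypothetical, not drawn from any real system): a simple decision-tree model whose learned rules can be printed in plain language, so that a human reviewer can trace how an automated decision was reached and, if necessary, challenge it.

```python
# Purely illustrative sketch of an "interpretable" automated decision.
# The scenario, feature names and data below are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant data: [income, existing_debt, years_employed]
X = [
    [30_000, 25_000, 1],
    [55_000, 2_000, 4],
    [42_000, 20_000, 2],
    [80_000, 1_000, 10],
]
y = [0, 1, 0, 1]  # 0 = decline, 1 = approve

# A shallow decision tree keeps the learned rules simple enough to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Render the rules as plain text so the decision-making process can be
# explained to (and reviewed by) a human.
print(export_text(model, feature_names=["income", "existing_debt", "years_employed"]))
```

A more complex model would not produce such a readable trace of its reasoning, which is precisely why interpretability (and provision for human review) needs to be designed in from the outset rather than bolted on afterwards.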

But is it right to expect machine-made decisions to be completely explainable? I discussed some of the difficulties of intelligibility earlier in this series[3], but is there any justification for holding a machine to higher standards than a human? When a natural person makes a decision, you can never truly know why they concluded as they did. Even if the person is truthful about their motives and methodology, there will always be subconscious influences of which the decision-maker is unaware[4]. Perhaps the difference comes down to the fact that a human always remains accountable for their decision, whereas an AI does not.

I think there is definitely a policy question to be answered about the standards of performance we should require of AI systems. When AI is first introduced to a field of endeavour, it makes sense to ask whether its output is at least as good as that which an appropriately trained, skilled and experienced human would produce. This is the concept of 'functional equivalence'. But once the technology is outperforming humans in the field, perhaps the appropriate standard should be measured against the state of the art rather than against what a human would do.

Perhaps in some cases decisions can be delegated, and we won't mind so much that the process isn't completely understood, because the outcome is objectively correct. You probably wouldn't ask an electrician how they're fixing your wiring, as long as the result works correctly and safely. On the other hand, there may be some situations where we absolutely want to understand how a particular decision was reached. For example, if an AI is used to determine prison sentences, I'd expect the decision-making process to be clearly explained, so that justice is not only done but can be seen to be done, and so that meaningful challenges can be made to the outcome where the law permits.

When it comes to deciding whether a particular scenario for AI decision-making is acceptable within the framework of our cultural perspectives on right and wrong, context is critical. It is not too early to be thinking about the types of decisions and functions that we should never delegate, even to the most intelligent machine. Personally, I'd prefer to start that conversation now rather than allow ourselves to walk blindly into a position where we're accepting decisions made by machines and potentially putting human lives and liberty in jeopardy. An obvious example for me is the decision to inflict harm (or take a risk of inflicting harm) on human beings. There are also many less obvious areas that need to be discussed, such as the extent to which we rely on technology for critical infrastructure, logistics and crisis management.

So how is industry responding? A number of the big tech firms have spoken out on the subject. As one example, Google established seven principles that it will use to guide business and engineering decisions when it comes to AI, and being "accountable to people" is at number 4. The tech giant states that it will design AI systems that "provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control". I recently commented on the activities of other tech firms that don't appear to be living up to the standards they're setting for themselves, and we need to make sure that such promises go further than just words on a page.

In the next part of the series, I'll explore accountability for decisions made or assisted by AI.


[1] Chaïm Perelman, The Safeguarding and Foundation of Human Rights, 1 Law & Phil. 119 (1982).
[2] And after Brexit, the 'UK GDPR' under the Data Protection, Privacy and Electronic Communications (Amendments etc) (EU Exit) Regulations 2019.
[3] See Ethical Artificial Intelligence Series, Part Two: Intelligibility.
[4] See Ethical Artificial Intelligence Series, Part Three: Prejudice and bias.
