Ethical Artificial Intelligence

Ethical Artificial Intelligence: Introduction


Robots live among us… Well, perhaps not quite. But anyone with access to the media can't fail to have noticed that 'artificial intelligence', as it is being called, is rapidly permeating every aspect of our lives, both at home and in the workplace.
 
While purists would insist that the term 'artificial intelligence' is in fact a misnomer, the moniker remains a useful way of referring to current and future applications of a number of technologies: machine learning and neural networks, process automation, affective computing and sentiment analysis, natural language processing, and the like. These technologies are already influencing national and global economies and security on the macro scale, and we may well not yet have comprehended their potential long-term impacts, even in science fiction.

So how do we prepare ourselves for the future? While I don't claim to have the answer, I hope in this series to bring out some of the ethical and legal issues I see arising out of the creation and deployment of artificial intelligence. Among others, I plan to cover topics including trustworthiness, intelligibility, fairness, accountability, democratisation and equality of access, integration of AI into organisations and society, and the interplay with human rights.
 

Ethical Artificial Intelligence, Part One: Trustworthiness

To kick off the series, I'll start by explaining why I see trust as being vital to the widespread adoption of AI, and therefore why we need multi-disciplinary co-operation to establish sensible approaches to building trust.

As with any new technology, we all have some preconceptions about what artificial intelligence is, typically drawn from popular media. We've seen film and television portray Schwarzenegger as a 'T-800' or the 'Cybermen' from Doctor Who, or the somewhat less homicidal WALL-E or C-3PO. We've come across AI in literature, such as Arthur C Clarke's 'HAL 9000' (homicidal again) or Douglas Adams' much-loved (if perpetually paranoid) 'Marvin'. I'd like to think most people are aware that these are fictions, and that the realities of artificial intelligence are (thankfully?) substantially different. But the facts behind AI and what it can do are still relatively unknown to most people.

For the most part, we don't create technology for its own sake but to solve a problem or fulfil a role. AI-based solutions are no exception. Moreover, if a solution is to be widely adopted, those purchasing it need to trust that it will perform the role they intend it to fulfil. It must be trustworthy, having characteristics that generate trust.

This includes having an established capability to produce consistent outcomes, acting in a way that is authentic and honest, treating people fairly, and not causing wider adverse effects. Successful creators and operators of these products—I'll call them 'Providers' in this series for simplicity—will need to design, build, and deploy solutions that are trusted when users properly understand them, and engage with users and the wider public to ensure that level of understanding.

Concepts such as reliability, authenticity, fairness, and benignity are nothing new, even as they relate to AI. Back in 2010, the Engineering and Physical Sciences Research Council (EPSRC) published a number of principles of robotics in which many of the aspects we'll be discussing in this series were raised as important issues. Citing genetic modification technology as a precedent, EPSRC identified that bad practice by even a few market players would hurt the whole industry. If the public comes to view AI as a threat, then no amount of reassurance will repair its image. It will therefore be crucial to establish a consistency of approach from policy-makers, regulators, and industry, whilst recognising that there is no 'one size fits all' answer.

So how can a Provider demonstrate that the solution is capable of fulfilling the functions it is supposed to fulfil, and that it does so in a way that gives consistently correct results? Explaining the methodology the AI uses could be one option, so that it is completely transparent and open to scrutiny. There may be reasons, however, why a Provider cannot or does not wish to do this, and I'll talk about intelligibility and explaining the solution in Part Two. So, let's assume for a moment that the methodology followed by the AI to achieve its results is not comprehensible.

You might imagine that the Provider could rely on a proven track record; this is, after all, how we have come to trust many of the world's biggest brands. That might work in some scenarios, but we must be cautious. AI solutions may depend on having the right data inputs, or may evolve through self-learning, so we can't just look at the solution as it rolls off the Provider's production line. We also need to consider where sources of error can sneak in throughout its lifetime, and whose responsibility it might be to safeguard against them. Like other technologies, some AI solutions may be adaptable to a number of different use cases. But with technology that integrates deeply into its environment, we can't be sure that flawless results in one context will necessarily transfer to another. Even if we can rely on past performance in one specific context, that trust may not be transferable.

Consistency of correct results equates to a low error rate; however, we can't just look at the number of errors, because the magnitude of their consequences will also be a factor. A high-profile failure with serious consequences could be catastrophic when it comes to trust in AI, so solutions must perform in a way that assures the safety and security of users and the wider public. They must not put people or their property at increased risk (whether actual or perceived), and must not expose their information in a way they are not aware of or do not intend. Providers will need processes in place to ensure that they design, build, and maintain AI solutions to appropriate safety and security standards, giving a good level of confidence that those solutions will not pose a risk to people, their property, or their legal rights.

These are not new challenges in the field of technology. Beta testing, professional and amateur product reviews, and published performance statistics are common. We also have long-established mechanisms for establishing product safety and security, such as recognised international standards. While existing methods and standards may not be appropriate in every instance, it's not hard to imagine how we might develop new standards and regulatory frameworks to cover different types of AI solution or, more appropriately to my mind, different use cases. That said, given the connected nature of the digital world, it might become increasingly difficult to protect against issues if AI is developed in places where equivalent standards haven't been adopted. I'll return to the importance of geography in a later article, but as we become more interconnected and geographical and political boundaries have less meaning, such standards and frameworks may prove ineffective unless they are universally enforced. End users, and particularly individual consumers, will need to be educated in how to protect themselves; this may include recommendations not to use products that do not conform to the established standards.

As I alluded to earlier, even if standards are complied with when the solution is deployed, a bigger challenge may be how to avoid the introduction of risks later. What controls can be put in place to stop the solution malfunctioning because of the data being supplied to it? How can we prevent self-learning AIs from picking up bad habits? What degree of oversight and control will the Provider retain once the solution has been delivered?

Difficult as it may be, we must persist in working multilaterally to achieve common understanding. Indeed, the European Commission has described building an "ecosystem of trust" as a policy objective in itself[1], which it intends to address through investment and regulation as part of its AI strategy.

So where does that leave us? With more questions than answers. But it seems clear that meaningful dialogue between experts from many different fields of endeavour will be necessary to achieve trust in artificial intelligence and drive adoption.
 
[1] European Commission, On Artificial Intelligence – A European approach to excellence and trust, 19 February 2020.
