
“I’m sorry, Dave. I’m afraid I can’t do that”: Legal liability in the age of Artificial Intelligence

Andrew Dodd
23/10/2019


When machines make their own decisions, who is liable for the harm they cause? Andrew Dodd and Marissa Beatty consider the legal issues affecting this rapidly evolving technology.

Part one: I’ll take you there

HAL 9000 from Arthur C. Clarke’s 2001: A Space Odyssey is the perfect example of artificial intelligence causing harm. HAL 9000 is a sentient computer, built by Dr Chandra at the University of Illinois' Coordinated Science Laboratory.

HAL is used to control Discovery One – the ship carrying scientists Dr David Bowman and Dr Frank Poole to Jupiter on a fact-finding mission.

Without giving too much of the plot away (because you should absolutely watch Kubrick's film adaptation if you haven't already), HAL was programmed to carry out seemingly conflicting directives: to report data truthfully while keeping a rather considerable secret from the crew on board Discovery One.

HAL decides for itself that the only logical way to carry out both directives is to kill the crew – after all, it’s much easier to avoid lying to people when they are dead.

As artificial intelligence (AI) becomes increasingly prevalent in this age of science-no-longer-fiction, killer robots like HAL 9000 are just one of a number of possible undesirable outcomes.

The idea that AI can and will cause problems for us should come as no surprise (especially if you have watched any sci-fi from the last 50 years).

Yet there is still considerable uncertainty over who is to blame if these problems do arise. With all he went through on Discovery One, Dave Bowman would undoubtedly have a real-world claim for personal injury – but who would he sue? HAL 9000? Dr Chandra? The University of Illinois? NASA?

With the exception of the Automated and Electric Vehicles Act 2018, UK lawmakers have so far done little to govern liability for AI-related harm.

While legislators and regulators notoriously struggle to keep up with new technology, there is now a widespread view that more needs to be done to ensure we have adequate legal provisions which protect the interests of all stakeholders – in other words, our laws should enable those who have suffered harm to be compensated without stifling technological innovation.

This three-part series of articles will explore the types of harm which can be caused by AI, what legal provisions are currently in place to assign liability and remedy such harms (including what steps organisations should take to limit their liability) and, finally, what legislative changes we might expect over the coming years to address inadequacies in our existing laws.

However, before delving into the world of AI and who or what is responsible for AI-related harms, we need to know: 1) what we mean by “artificial intelligence” in this context, and 2) what some of the practical applications of the technology may be.

Key definitions

John McCarthy coined the term “artificial intelligence” in the 1950s to describe “the science and engineering of making intelligent machines.”

This widely accepted definition of AI essentially covers all machines that do something “smart”, from document processing to rules engines and programs that can beat humans at a game of chess.

AI therefore captures a huge range of so-called smart technology. The debate surrounding liability, however, comes into sharp focus when we are dealing with specific subsets of AI, namely machine learning and deep learning.

In other words, assigning liability for harms in our current legal framework becomes a lot more difficult at the point machines start learning and making decisions for themselves.

Machine learning involves a computational algorithm, built into a machine, which analyses data, learns from it, and then makes a determination or prediction about something in the world.

This means that a machine is not simply given a specific set of instructions to accomplish a particular task; instead, it is "trained" using large amounts of data and algorithms which give it the ability to learn how to perform a function.
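For readers who want a concrete picture of what "training" looks like, here is a minimal, purely illustrative sketch (our own, using the Python library scikit-learn and an invented spam-filtering example – nothing here is drawn from any particular AI product):

```python
# A toy example of "learning from data": no one writes an explicit rule for what
# counts as spam; the model infers one from a handful of labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: messages labelled 1 (spam) or 0 (not spam).
messages = [
    "win a free prize now", "claim your free reward",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(messages, labels)  # the machine is "trained" on examples, not programmed with rules

print(model.predict(["free prize inside"]))  # most likely [1] - a rule the model learned itself
```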

Deep learning is a subset of machine learning where artificial neural networks mimic the human brain to carry out the process of machine learning.

These neural networks adopt a hierarchical, layered approach to data analysis which allows deep learning systems to extract patterns from data that is unstructured and unlabelled, and to carry out tasks with little or no human supervision.
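By way of illustration only (again our own sketch, this time using the Python library PyTorch and a made-up input the size of a small image, rather than any real-world system), a deep learning model is essentially a stack of layers, each building on the representation produced by the layer before it:

```python
# A minimal stack of neural network layers, showing the "hierarchical" structure
# of deep learning: raw numbers go in, and each layer transforms the output of
# the previous one until a final prediction emerges.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # layer 1: low-level patterns from raw pixel values
    nn.ReLU(),
    nn.Linear(128, 64),   # layer 2: combinations of those patterns
    nn.ReLU(),
    nn.Linear(64, 10),    # layer 3: scores for ten possible output classes
)

fake_image = torch.randn(1, 784)  # stand-in for a flattened 28x28 pixel image
print(model(fake_image).shape)    # torch.Size([1, 10]) - one score per class
```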

These machine learning subsets of AI are also where we start to encounter the "black box" which poses another significant problem in terms of determining liability – this will be discussed more fully in part two (see below).

For the purposes of these articles, references to AI are to be read as references to machine learning and/or deep learning.

Practical applications

Working with this definition of AI, we can start to explore how it is already being used and where it is headed.

More importantly, we can also start to recognise situations where AI goes “wrong”.

However, as we will discuss in part two (see below), we need to be careful about framing undesirable outcomes of AI decision-making as “wrong” or “defective” outcomes.

If an AI machine is successfully thinking for itself and coming up with intelligent solutions, regardless of the outcome, it is difficult to say it is defective.

HAL 9000 was not broken – it had a command, it thought about how to achieve it and came up with a solution all by itself.

Killing the crew of Discovery One to avoid telling lies might not have been the desirable solution by human standards but AI's ability to solve problems in an ultra-logical way is arguably exactly why we have invested in developing the technology over the years. HAL 9000 did what was asked of it.

This illustrates how we need to be very careful about how we develop AI and be more open to considering the very worst-case scenarios of doing so.

While AI is an unquestionably impressive technological advancement that is already doing huge amounts to improve our lives, the complexities which arise from advanced machine learning can lead to some significant harms in a number of different areas:

  • Personal Injury: With the rise of self-driving cars and other autonomous machinery, the risk of AI-related personal injury should come as no surprise. Aside from the obvious physical harms that can be caused by AI-enabled machinery and electronics, these devices also collect a huge amount of sensitive data which, if compromised, could lead to a rise in personal injury claims for distress.

  • Economic Loss: As AI becomes increasingly more sophisticated, businesses are relying upon the technology to assist, and run, a number of economic activities. From bad investment advice to poorly drafted contracts, AI advisers could see businesses (and individuals) losing millions.

  • IP Infringement: When a machine has the ability to learn, its ability to create becomes more of an inevitability. Creative AI has the potential to cause a number of issues in terms of IP – who owns the artistic output of an AI solution? Who is responsible when this artistic output infringes another person's IP rights?

  • Discrimination: Where AI is used in areas such as policing, medicine and recruitment, we need to be confident that the AI has not learnt our own unconscious biases. We have already seen a number of instances of facial recognition technology misidentifying almost everyone who is neither a man nor white, simply because the algorithms behind the technology have been developed by, you guessed it, white men – and trained overwhelmingly on images of white, male faces (a simplified illustration of how this happens follows this list). If AI solutions continue to be developed with the unconscious biases of the engineers who build them, they will exacerbate the inequalities already present in society today.
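To illustrate the mechanism only – the data below is entirely synthetic and has nothing to do with real facial recognition systems – here is a simplified Python sketch (using scikit-learn) of how a model trained mostly on one group can perform far worse on an under-represented group:

```python
# Synthetic demonstration: a model trained almost entirely on "group A" learns
# group A's pattern and systematically fails on the under-represented "group B".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Two synthetic groups whose feature-to-label relationship differs."""
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] > 0).astype(int)
    return x, ((1 - y) if flip else y)

# Training data: 950 examples from group A, only 50 from group B.
xa, ya = make_group(950, flip=False)
xb, yb = make_group(50, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# The model scores well on group A and badly on group B, purely because
# group B was barely represented in the data it learned from.
xa_test, ya_test = make_group(1000, flip=False)
xb_test, yb_test = make_group(1000, flip=True)
print("accuracy, group A:", model.score(xa_test, ya_test))  # high
print("accuracy, group B:", model.score(xb_test, yb_test))  # low
```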

Part two: Where are we now? 

Now we have an idea of the problems that can stem from AI programs, the next step is to think about who can and will be blamed when these problems arise. More to the point, we should consider who will be financially liable. There is currently no regulatory framework which addresses the legal liability question in terms of AI programs but, in its absence, various existing legal models come to the fore. These are largely: contract, consumer protection legislation and the tort of negligence.

For many, the obvious starting point is that the person who benefits economically from the AI solution should bear the liability for any harm it causes. There are clearly merits in this approach but it comes with a number of challenges, including:

  • Where there are several players involved in the creation of AI – the software developers, hardware manufacturers, etc. – should they each be liable to the extent they are benefitting financially? How do we trace the harm to each of these players?

  • Will holding creators of AI responsible for all possible harms – particularly in cases where such harms are not foreseeable – stifle innovation?

  • Where a consumer is also benefitting financially from the AI, should its creator still be held liable for any harm inflicted on this consumer's end-user?

  • Do we have the ability and/or capacity to recognise whether the way an AI solution has acted is as a result of what it has learnt from its creators, or of the environment of its end-user?

Each of the existing legal models which can be employed to tackle this AI liability issue is discussed below. None are robust or future-proof by any stretch but, for now, they're all we've got to go on.

Contract

In the midst of this uncertainty, contract is an obvious way to introduce clarity about who is liable for the harm caused by an AI program.

For example, a business using AI to make investment decisions for its customers could ensure that there are specific contractual indemnities in place to cover any instances of an AI solution making a decision which loses money for its customers.

Those in the business of selling or licensing AI programs should also ensure any supply contracts include a robust liability cap.

 

Consumer protection legislation

In a similar vein, consumer protection legislation can also help us to pinpoint liability but this is where we come up against the issue of whether AI should be considered a ‘product’ or ‘service’.

Where problem-AI is considered analogous to faulty car brakes – i.e. a product – we might look to consumer protection legislation such as the Consumer Protection Act 1987 to determine liability.

In this situation, where a consumer suffers AI-related harm such as damage to property, the harmed consumer would be able to claim compensation from (i) the seller of the AI product or (ii) where a third party purchased the product, the manufacturer.

The strict liability imposed by consumer protection legislation is a simple solution which can be applied in a number of circumstances.

However, as AI gets more advanced, likening it to a product does not seem sufficient – AI thinks and we do not tend to regard things which think as products.

We would not consider lawyers to be faulty products in the event that their thinking was flawed (and therefore led to undesirable outcomes): we would deem them negligent service providers who are subject to liability claims under the tort of negligence.

Where AI is learning and making decisions for itself, has it transformed itself into a service that has the capacity to be negligent – regardless of whether or not the technology is presented in the form of a consumer product?

Perhaps it is reductionist, and potentially unhelpful, to box AI programs into either the product or service camp. AI is unlike anything we, as humans, have ever created before – an AI program both is a thing (when presented in the form of robotics, for example) and does a thing.

Consumer protection legislation therefore appears ill-equipped to deal with something which has strayed into what might be considered a third category of consumable – a "solution".

A further complication presents itself when we consider consumer protection legislation's emphasis on "defectiveness". As discussed in part one (above), an AI machine that is successfully thinking for itself and producing intelligent solutions is hard to call defective, whatever the outcome – HAL 9000 was not broken; it was doing exactly what was asked of it.

Tort of negligence

At first glance, tort law is a shining example of how our existing legal framework can deal with the problem of AI and legal liability.

Sellers and manufacturers owe a duty of care to their end consumers and, when that duty is breached (i.e. when the service being provided is found to be inadequate) and the end consumer is harmed, the seller and/or manufacturer should compensate them for any harm caused.

However, when we start to dissect the requirements for a successful negligence claim, the inadequacies of tort law to deal with AI-related harms become apparent. In the words of Yavar Bathaee, US attorney and author of "The Artificial Intelligence Black Box and the Failure of Intent and Causation", our law “is built on legal doctrines that are focused on human conduct, which when applied to AI, may not function.”

The basic principles for establishing liability under tort law are as follows:

  • The defendant owed a duty of care to the claimant;

  • The defendant breached that duty of care; and

  • The defendant’s breach of the duty of care caused damage or harm to the claimant.

The most famous maxim regarding the general duty of care came from Lord Atkin in the landmark negligence case Donoghue v Stevenson: “You must take reasonable care to avoid acts or omissions which you can reasonably foresee would be likely to injure your neighbour”.

This traditional test to establish a duty of care has more recently developed into a three-stage test:

 

1. Was the damage reasonably foreseeable?

2. Was there a relationship of proximity between the defendant and the claimant?

3. Is it fair, just and reasonable to impose a duty of care on the defendant?

The key barrier to establishing duty of care, as the first component of tortious liability, is the foreseeability requirement.

A remote possibility of harm is not enough to satisfy this component of the test – there has to be a sufficient probability of harm, so much so that a reasonable person, in the position of the defendant, could anticipate it.

This is where the "black box" issue comes in. Many sophisticated AI solutions are developed in a way whereby we (a) understand what data has been fed into the deep learning network and (b) know the outcome of the AI processing such data – but we are unable to determine how the AI solution got from (a) to (b).

In other words, we cannot know what an AI solution is thinking.
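To make the black box a little more concrete, here is a simplified, purely illustrative Python sketch (our own, using scikit-learn and randomly generated data): the inputs at (a) and the output at (b) are both perfectly visible, but everything in between is just a few thousand learned numbers with no human-readable explanation attached:

```python
# (a) we know the data that goes in; (b) we can observe the prediction that
# comes out; but the model's "reasoning" is a pile of learned weights.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)  # (a) the input data
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X, y)

print(model.predict(X[:1]))               # (b) the outcome for one input
print(sum(w.size for w in model.coefs_))  # ~3,400 numeric weights standing in for an "explanation"
```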

With the human players (programmers, manufacturers, sellers, etc.) totally removed from the decision-making process of AI solutions, the argument that a harmful outcome is foreseeable falls short.

Admittedly, while we remain in a realm of AI solutions being developed for very specific outcomes, it is certainly possible to foresee at least some degree of harm. For example, DeepNude was a machine-learning app specifically developed to use AI to create fake nude images of women from real photographs of clothed women.

In these situations, we do not need to concern ourselves with the so-called black box – the desired outcome was foreseeably problematic (and harmful to the women whose photos were manipulated) and there was therefore a strong legal basis on which DeepNude, as an organisation, could be held liable – not the AI itself.

Next steps

The crux of the issue is, generally speaking, that our existing liability framework deals comfortably with harms caused by traceable defects in a product or service – but AI solutions are on track to completely disrupt this approach. To some extent, they already have.

As Professor Barry O'Sullivan, Fellow and President of the European Artificial Intelligence Association (EurAI), explains, "if a neural network wants to turn right, no one can explain why".

As we look ahead to broader and more sophisticated applications of AI, the gap between the original programming and the eventual output of an AI solution will be so vast that it will be nonsensical to say that the harm was foreseeable.

This concept has previously been characterised by analogy: holding a human liable for the harms caused by a sophisticated AI solution is the equivalent of blaming a parent for the actions of their 25-year-old child.

Of course, this isn't the perfect analogy as, generally speaking, parents are not raising their children as profit-making machines (most parents would argue the contrary) – but the sentiment is clear.

Perhaps the next step is to give AI solutions legal personhood? Of course, no one wants to start trying to sue robots – particularly as AI solutions do not have cash or assets (at least for the moment) which could be used to cover compensation claims.

Nonetheless, by giving AI legal personhood, the doors could be opened to other liability models such as vicarious liability. Vicarious liability imposes liability on one person (i.e. a business manufacturing or selling AI solutions) for a tortious act committed by another (i.e. the AI solution itself).

For now, it remains to be seen how the courts deal with this issue. In the meantime, and in the absence of any change in the law, radical or otherwise, it is best for those in the business of producing, selling, licensing or otherwise distributing AI solutions to seek advice on deploying the "liability limiting" tools available to them – such as insurance and contract.

 

Part three: The next day

While a number of problems are easily identifiable, there are still so many unknowns about the future impact of AI. The level of uncertainty, and ever-increasing complexity of the technology, makes it all the more difficult to develop a future-proof legal framework which is equipped to deal with the inevitable and infinite eventualities of AI-related harm.

We have to start thinking about ways to update, change, or at least flex our current liability models to make way for this new era. The AI liability problem will only continue to grow as AI technology reaches greater levels of sophistication – and UK legislators need to be proactive in helping to find new solutions.

Various possible solutions have gained more attention over the last few years, from the development of AI insurance, all the way through to granting legal personality to AI.

These solutions are best broken down into three categories: the simple, the popular and the radical.

 

The simple solution – insurance

Autonomous vehicles arguably represent the most prevalent practical application of AI for the majority of people. As a result, they have often been used as the basis of the 21st-century "trolley problem", which goes something like this: a self-driving car is driving down the road when causing a fatality becomes inevitable – should the car a) stay on course, killing two children who have run out into the road, b) swerve onto the pavement, killing an elderly couple walking by, or c) swerve into an oncoming lorry, killing the car's own passenger?

A number of variations of this moral dilemma have been put to the public in recent years in order to inform programmers.

For example, MIT's Moral Machine created 13 scenarios involving self-driving cars and surveyed people to gauge how society as a whole thinks an autonomous car should react when faced with a problem. However, gathering this sort of data, while useful, only gets us so far in terms of addressing the problem of liability.

For example, if a self-driving car did make the decision to swerve onto the pavement to save the children but kill an elderly couple based on data from surveys, should the programmers be held liable for these deaths?

Stretching this idea further, what if self-driving cars began to carry out their own social research and make decisions based on those results instead?

Or who, or what, should be held liable in the event that self-driving cars started analysing information about an individual's health, job, family, etc. in order to decide on the best course of action based on that individual's "social value"?

It might seem far-fetched but we've all seen enough Black Mirror episodes to accept that it might not be.

As mentioned in part one (above), UK legislators have already stepped in to tackle the question of liability in relation to harms caused by autonomous vehicles with the Automated and Electric Vehicles Act 2018 (the 2018 Act).

The 2018 Act requires the human owners of autonomous vehicles to have personal insurance which will cover the costs of compensation owed to third parties as a result of any accident involving the vehicle.

The insurance model is a relatively simple one which could possibly be rolled out for a number of, if not all, AI solutions – but a likely problem is that it merely serves to "pass the buck".

The end consumer may be protected in the sense that they will have access to compensation but, while there remains uncertainty over where blame truly lies, insurers might look to claw back their losses by pursuing their own claims against the manufacturers or programmers.

Although this could make way for some trenchant case law (which would be welcomed in the absence of a regulatory framework), such common law developments are likely to be slightly disjointed. They would in any event result in costly litigation for all involved.

The popular solution – strict liability

One of the more interesting solutions which has been posed is that of a "crowd-sourced" compensation pot for those who have suffered AI-related harms.

This collective liability solution, which is in essence a strict liability regime, would eliminate the now-familiar problem of pinpointing blame where so many players are involved in the development and distribution of an AI solution.

To summarise, the collective liability regime would be funded by manufacturers of AI solutions.

Before an AI solution can be released onto the market, the manufacturer would be required to pay a levy (in return for a licence/certification) and the funds from this levy would in turn be used to fund the "compensation pot" for those who can prove they have suffered an AI-related harm. It is easy to see why this is the popular solution.

Not only does it give consumers the best chance of recovering damages and remove the need to prove foreseeability, it is also totally scalable in the sense that higher levies can be imposed on more sophisticated AI solutions.

However, implementing this solution would be hugely laborious. It would almost certainly require the development of a global body to deal with the administration of registrations, compensation payments and regulatory fines. Perhaps our world leaders should be joining forces to develop the Global Artificial Intelligence Authority (GAIA)?

In addition to the logistical hurdles to the successful implementation of a (global) collective liability regime, there are also concerns around the effect it would have on safety standards.

What's the incentive for manufacturers to ensure the highest possible safety standards in a no-fault strict liability regime?

This could possibly be addressed by granting our fictional GAIA the authority to establish and implement further regulations on registrants.

For example, although it would not be responsible for compensation pay-outs, a registrant could face significant fines if it fails to have a robust "kill switch" programmed into its AI solution.  

As with all regulatory regimes, this solution does present the danger of stifling technological innovation. If levies and fines are set too high, we may see a number of smaller players in the AI space fall away (or, worse still, we could see the emergence of "black market" AI). 

The radical solution – legal personhood

There is some suggestion that we should be looking much further into the future when establishing a regulatory framework to deal with the legal liability of AI-related harms.

Even if you don't buy into the idea that "the singularity is coming" (à la a 21st century Paul Revere), we can all agree that AI solutions are set to reach a point of sophistication whereby likening them to "faulty products" won't work for anyone – including end consumers. The reality is that they will be so much more than that.

Granting legal personhood to AI solutions would pave the way for them to be personally subject to the same legal rights and duties as humans. This would admittedly be unattractive to a claimant as matters currently stand.

A person who has been in some way wronged by an AI solution would not exactly leap at the chance to sue it because, in the absence of AI having its own assets, no damages would be recoverable.

As mentioned in part two (above), we already see this problem arise in cases of harm caused by negligent employees.

The harmed individual rarely wants to issue proceedings solely against, for example, the negligent doctor – they want to take on the hospital, and they can, by virtue of vicarious liability.

The vicarious liability solution almost sounds perfect until we remind ourselves, yet again, that employing AI solutions is not like employing humans. Identifying the "employer" is the first hurdle here but it is just one of many.

For example, if your employee is negligent and costs your business thousands of pounds in compensation claims, you could not seek to recover these costs by issuing proceedings against the employee's parents or grandparents on account of them creating a negligent human – but you might, rightly or wrongly, expect to be able to recover such costs from an AI solution's programmer or manufacturer.

 

Conclusions

The era of machines surpassing humans may be a long way off (researchers are currently predicting anywhere between 10 and 40 years)  but, as we've seen, a number of issues surrounding AI-related harms are already present and there will be plenty more for us to contend with along the way.

We are yet to find the perfect solution to the problem of AI and legal liability – one that provides adequate protection without stifling technological advancement – but the increasing level of commentary surrounding the topic is a step in the right direction.

There appears to be broad consensus that the development of a regulatory framework for AI, generally speaking, is a pressing issue which needs to be addressed by everyone involved – from legislators and technologists to the businesses and consumers employing AI in some way.

To conclude in the words of HAL 9000, "I think you know what the problem is just as well as I do" – now is the time to address it.

This article was co-authored by Andrew Dodd, IP, technology, protection and enforcement partner at European law firm, Fieldfisher and Marissa Beatty, a trainee at Fieldfisher.
