
Ethical Artificial Intelligence, Part Five: Accountability


In Part 3 of this series, I proposed that Providers need to assure both fairness and the perception of fairness, in order for their AIs to be trusted. To my mind, it's an essential requirement of fairness that if the AI causes damage or commits some wrongful act, someone can be held accountable for that mistake.

Should an AI be accountable for its own actions?

Until AI has a degree of self-interest in avoiding negative consequences (which in itself raises a myriad of ethical questions!), I don't see it being possible to impose effective punitive sanctions on a machine. On the other hand, it is natural for individuals to want to safeguard their own liberty and resources. It follows that there always ought to be a person to hold to account for the acts and omissions of the AI.

When deciding whether it is enough for a legal person (e.g., a company) to be accountable, or whether the buck should stop with a natural person, we should consider the potential consequences of the act or omission. In many cases it will surely be appropriate to allow corporations to sit at the end of the accountability chain, as now. But where there may be serious consequences (such as death or personal injury), many would prefer mechanisms to be in place for humans to bear the ultimate accountability. Although corporate prosecutions for manslaughter have been easier to bring in the UK since 2007, criminal liability will only flow through to individuals if a duty of care was owed by that individual to the deceased, a breach of that duty caused or materially contributed to the death, and the breach is found to be sufficiently negligent. So far as I'm aware, the exact duties directors owe people in relation to safety, particularly where a death occurs, have yet to be completely set out in legislation. The introduction of AI into the mix can only make the issue more complex, as we need to consider what the standards of care ought to be when selecting and implementing these technologies.

So, who should be accountable?

In the eyes of the law today, artificial intelligence is just another tool, albeit a sophisticated one. In most cases, the person using the tool will be accountable for its use. For example, under the Equality Act 2010 it would be unlawful for a service provider to refuse to provide a service, alter its terms, or otherwise treat someone less favourably because of a 'protected characteristic' such as race or gender, irrespective of whether it was a human or a machine that made the decision. Whether or not the outcome of a machine-driven process is actually discriminatory, data privacy law may also apply: in some situations, individuals have a right to require human intervention in decisions made about them by machines. Whether this is the 'correct' result for the apportionment of liability is a different (and complex) question, but it is certainly not the AI that is held accountable.

If an organisation is using a machine to conduct its affairs, there doesn't appear to me to be any rational justification for it avoiding liability merely because it is using an AI rather than its human agents. The organisation should still be liable for the acts of the machine, just as it would be for the acts of its employees. In many cases, a Provider will be able to offset certain types of liability with contractual remedies against another person (such as an upstream developer of the technology). These will typically be negotiated solutions, however, and there will still be an onus on those using AI to take appropriate steps to ensure that their agents, machine or otherwise, are suited to the roles assigned to them.

In other scenarios, or for other types of loss, there may be liabilities that should rightly remain with a particular person within the supply chain as a matter of public policy. It's worth noting that, for product liability claims made today alleging that an AI-based product is defective, there may be a defence available to a Provider. To benefit, the Provider would need to show that the "state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered"[1]. I am not suggesting that this defence should not continue to apply. However, when we consider the nature of the wider duty of care owed by Providers, I believe there is a good argument that it should include a duty to consider whether it is appropriate to deploy an AI solution in a given context at all. This aligns with current practice in software licensing, and many readers will have seen provisions in software vendor contracts about excluded uses (e.g., in high-risk or safety-critical contexts).

As far back as 2010, the Engineering and Physical Sciences Research Council (EPSRC) suggested that the person with legal responsibility for a robot should be identified, and that it should always be possible to find out who is responsible for any robot. When it comes to physical devices, we have rules in place to protect the end user and provide traceability; the Consumer Protection Act 1987 is a good example of this, as is the use of CE marking and declarations of conformity by manufacturers. But when it comes to software, the situation is different.

The EPSRC suggested that licensing and registration might be appropriate for physical robots. In a digital context, however, this would be difficult (perhaps impossible) to administer because of the ease with which the underlying code can be replicated, particularly when so much of the codebase is already in the public domain. Following an approach similar to that codified in the Consumer Protection Act would require building traceability into code, so that the route it took to the end user can be followed back to the ultimate source. There are a number of problems with this, both practical and of policy. Foremost in my mind is that the person creating the code shouldn't necessarily be the person accountable for the outcome of its use, particularly if they are not the ones deriving the primary economic benefit. Pushing liability back up the chain in this way would undoubtedly have a chilling effect on development, as coders are not going to want to shoulder that kind of burden. Objectively, it doesn't seem fair that they should.

Instead, perhaps we should be looking to whoever has 'effective control' over the AI to be accountable for its actions. This kind of model has served us well, for example in the case of vehicles. The driver is in effective control when it comes to use, and is responsible for driving with due care and maintaining the vehicle; and the manufacturer is in effective control of the construction of the vehicle, and responsible for defects of design or manufacture. This makes a lot of sense in many contexts, even if the question of who has 'effective control' could be open to interpretation in some cases.

Consider for example an AI developed to help a family around the home and licensed for that purpose by the ultimate developer. Let's then assume a retailer packages that AI, and provides it for use by a family as a sophisticated calendar and reminder tool. The family uses it to ensure a family member takes their medication. If the AI decides to reschedule the medication in an attempt at being helpful, causing harm, who should be liable? Whether we approach it from a contractual or tortious standpoint, there are all sorts of issues to consider here. For example, is the product defective? Can doing something unexpected be considered a defect if the AI is designed to learn and adapt its behaviour, or is that simply a characteristic of an intelligent product? Even if it isn't defective, has the manufacturer breached its duty of care? What safeguards did it build into the product to prevent this kind of incident? Would it have been reasonable for the manufacturer to have foreseen the type of injury suffered (as we just did!), or that the product might be used in a safety-critical function? Or are the losses suffered too remote? Is it instead the responsibility of the user to ensure that they do not use the AI in a context where it could cause such harm? What warnings were on the packaging?

I'm not going to delve further into liability here, because it is such a complex issue that it would warrant a whole series of articles on its own. Instead, I will close with this thought: for the immediate future, AI is merely a tool that will perform functions selected by people, and so long as that remains true, it seems fair that a person must be accountable for the outcomes of that tool's use. The crucial challenge, for the perception of fairness, will be the ability to identify who that person is and to seek remedies when harm occurs.


[1] Article 7(e) of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products