UK Guidelines for the use of AI in Courts and Tribunals (December 2023)

The use of artificial intelligence (AI) tools, including technology assisted review (TAR) and automated contract updating, is prevalent in UK legal practice. Whilst legal practitioners have long adopted AI where it assists their work, the UK judiciary has been slower to embrace it.

In December 2023, AI guidance was issued for judicial office holders in the UK ("Artificial Intelligence (AI) Judicial Guidance"); the expectation is that AI will, in time, support a better, quicker, and more cost-effective digital justice system.

In considering the day-to-day aspects of AI use in Courts and Tribunals, the non-binding guidance addresses the following main areas of judicial application:

  1. Understanding AI and its applications
  2. Confidentiality and privacy
  3. Accountability and accuracy
  4. Bias awareness
  5. Security
  6. Taking responsibility
  7. Court / Tribunal users' use of AI tools

The guidance suggests that the judiciary may use AI assistance to summarise large bodies of text, draft presentations, and compose emails and memoranda. It cautions against using AI for legal research, because the results cannot be independently verified, and for legal analysis, because current tools cannot produce convincing analysis or reasoning.

Sir Geoffrey Vos, Master of the Rolls, stated that the guidance was the first of its kind in the courts of England and Wales, providing "great opportunities for the justice system". Sir Geoffrey went on to say that because AI is so new, it must be approached with care, so that "judges at all levels understand what it does, how it does it and what it cannot do."

What is the guidance's purpose?

The guidance was developed to assist judicial office holders in their use of AI in their daily working lives – it covers examples of the key risks and issues they might encounter, alongside suggestions for minimising them.

Whilst the guidance acknowledges that, provided AI is used responsibly, there is no reason why a legal representative ought to refer to its use, it observes that some litigants, unrepresented by legal practitioners, have been relying on AI tools to assist with their pleaded cases. Such litigants rarely possess the skills to independently verify AI-generated legal results, and may therefore be oblivious to any consequent errors within their submissions.

Summary of the Guidance

  1. Understanding AI and its applications

The guidance recommends that judicial office holders ensure they have a basic understanding of AI's capabilities and potential limitations before using it, with particular attention paid to the following points:

  • Public AI chatbot predictions: Chatbots do not provide answers from authoritative databases; responses are generated by an algorithm trained on a large body of data. The output is therefore what the model predicts to be the most likely answer based on its training data, rather than the most accurate answer.
  • Verifiable information: AI tools cannot be relied upon to provide verifiably correct information and therefore should not be used to conduct legal research.
  • AI prompt quality: Even with the best AI prompts, the information returned may be inaccurate, incomplete, misleading, or biased.
  • Jurisdictional bias: Large language models (LLMs) are typically trained on material published on the Internet, with the result that AI chatbots often return examples of U.S. law (which is far more heavily represented online) rather than English law.
  2. Uphold confidentiality and privacy

Questions and information submitted to AI chatbots may be retained as public data and used to train the AI unless the chat history is disabled. Private and confidential information should therefore not be entered; if it is inadvertently entered, the judicial office holder should report it as a data incident.

  3. Ensure accountability and accuracy

AI chatbot results should be cross-checked, owing to the risk of inaccurate, incomplete, misleading, or out-of-date results, or results that apply to jurisdictions other than the UK. The guidance notes that AI tools have been known to invent fictitious cases, citations, or quotes, or to refer to legislation, articles, or legal texts that do not exist. In the U.S., ChatGPT produced fictitious cases when relied upon for legal research, including Varghese v. China Southern Airlines and Shaboon v. Egypt Air.

  4. Be aware of bias

AI tools based on LLMs generate responses from the datasets on which they are trained; information generated by AI will therefore inevitably reflect any errors and biases in that training data.

  5. Maintain security

Security best practices should always be followed, including using work devices and work email addresses to access AI tools.

  6. Take responsibility

Judicial office holders are personally responsible for material produced in their name, irrespective of reliance on AI tools to assist – judicial practitioners should therefore use such tools appropriately and take steps to mitigate any risks.

  7. Be aware that court / tribunal users may have used AI tools

As legal practitioners may also use AI to assist with their submissions, they should be reminded that any AI-generated content must be independently verified before being put before a court or tribunal.

What next?

Sir Geoffrey has acknowledged that people and businesses do not currently have confidence in AI being used to determine disputes, and consequently AI-driven decision-making is unlikely to be introduced any time soon. That said, Lord Justice Birss has suggested the possibility of using AI to help the judiciary produce provisional assessments of costs, a data-heavy and time-consuming exercise in its present form.