How AI is changing disputes and why legal expertise still matters

This article examines the rise of AI‑generated legal documents and hallucinated case law in UK disputes, the risks involved, and why legal expertise remains essential.

6 min read Updated on 26 Feb 2026

Generative AI has entered the UK legal landscape with remarkable speed. AI is now being used not only by law firms exploring efficiency gains, but also by litigants‑in‑person drafting claims, employees preparing grievances and businesses seeking quick answers to complex legal questions. The result is a growing wave of AI‑generated documents landing on the desks of solicitors, regulators and, increasingly, the courts.

Alongside this surge in use comes a very real problem: AI systems can produce content that appears convincing at first glance but may contain entirely fabricated case law, misquoted judgments, inaccurate statutory references and procedural errors.

Hallucinated case law is not always easy to spot, yet it can fundamentally distort a party’s understanding of their legal position. Judges have already issued warnings about the risks of unverified AI content, and the Solicitors Regulation Authority (“SRA”) has made clear that professional obligations around competence and supervision apply just as much when technology is involved.

As this series will explore, AI is reshaping the legal landscape, not by replacing lawyers but by changing the nature of the material that enters the litigation process. In this first instalment, we examine how AI‑generated claims and hallucinated case law are emerging in disputes, why they pose risks for clients and practitioners alike, and why expert legal judgment remains essential in an era of technological revolution.

AI‑generated claims in disputes: What lawyers are seeing

The rise of generative AI has led to a noticeable increase in AI-drafted legal documents such as Letters of Claim or Claim Forms. This is particularly common where litigants-in-person (“LiPs”), that is, unrepresented parties, are involved.

These documents can have serious consequences if they contain:

  • incorrect legal principles;
  • exaggerated or inaccurate allegations; or
  • fabricated legal authorities.

These errors can undermine credibility, lead to sanctions, and increase costs. AI-generated content can appear superficially credible because it closely mimics the tone, structure and terminology of professional legal writing.

How courts and lawyers spot AI‑generated content

Opposing counsel and judges can often identify AI-generated arguments or citations by looking for patterns and inconsistencies that suggest automated drafting rather than careful legal analysis. Authorities cited by other parties will be double-checked, so hallucinated case law is likely to become obvious quickly.

As the use of AI has grown, there have been instances of lawyers being caught relying on erroneous case law. One example involved a High Court matter in which the Claimant’s submissions included five made-up cases. The judge found that the barrister had acted improperly, unreasonably and negligently, and the Claimant’s solicitors and the barrister were each ordered personally to pay £2,000 towards the Defendant’s legal costs.

In another case, a barrister was referred to the Bar Standards Board for citing a non‑existent AI‑generated case.

Beyond issues with case law, solicitors have been warned against inputting client data and documents into public AI tools, as doing so may breach client confidentiality and waive legal privilege. One solicitor admitted to uploading client emails and Home Office decision letters into ChatGPT to improve and summarise them.

The challenge for clients: Misleading AI “legal advice”

Clients may ask AI chatbots for “legal advice” that appears authoritative but is in fact inaccurate or misleading. Such advice can:

  • give clients misplaced confidence in the strength of their claim;
  • offer procedural guidance that is wrong;
  • exaggerate remedies or prospects of success; or
  • cause clients to question accurate advice from their solicitor.

For lawyers, this introduces several practical issues. They may need to spend additional time reviewing, correcting and explaining why the AI tool is incorrect and/or misleading. In some instances, relying on AI before consulting a qualified lawyer could lead to clients taking steps that are harmful to their legal position.

The courts’ response: Guidance, warnings, and expectations

Courts in England and Wales are becoming increasingly alert to the risks posed by AI.

A judiciary statement issued on 31 October 2025 warned that public AI chatbots do not use verified legal databases. Instead, they generate text by predicting the most likely combination of words based on their training data, which means their answers may be inaccurate and are unreliable for researching new or unverified legal points. Their outputs may be outdated or biased, and their view of the law is often based heavily on US and historic law.

The SRA expects solicitors and regulated firms in England and Wales to maintain competence, effective supervision and responsible use of technology, including generative AI, under its regulatory framework.

Senior judges have warned that lawyers and litigants who submit material containing fabricated or unchecked law may face:

  • professional misconduct findings
  • wasted costs orders
  • referral to regulators

Judicial guidance now encourages judges to enquire whether AI tools were used to prepare material before the court and to remind parties that they remain responsible for the accuracy of citations, prompting a more cautious and exacting approach overall.

Why legal expertise still matters

Whilst AI tools can be useful for drafting or summarising information, they cannot replace the role of a trained legal professional.

Legal practice relies on:

  • proper interpretation of statutes
  • understanding legislative intent
  • application of binding precedent
  • nuanced, fact-sensitive judgment
  • strategic assessment of litigation risks

Our approach at Ellis Jones: Safe, responsible and expert use of AI

At Ellis Jones, we have a strict AI policy which is designed to ensure professionalism and regulatory compliance, whilst also benefitting from the time and cost-efficiencies which closed-source and reliable AI services can bring.

We ensure that:

  • AI tools rely only on accurate, trusted data
  • All AI‑assisted output is checked by a qualified lawyer
  • AI is used to support, not replace, expert legal judgment

It remains our view that any AI-generated content should be treated with caution. If in any doubt, legal advice should be obtained from a qualified legal professional.

If you need any assistance with a legal matter, please get in touch with our firm either through our make an enquiry form or by contacting our team on 01202 525333.

How can Ellis Jones help?

If you would like help or advice from one of our specialists, please do not hesitate to contact us on 01202 525333.
