Open-Source AI: The hidden threat to legal professional privilege

Open‑source AI tools can inadvertently expose confidential legal advice, risking an irreversible loss of legal professional privilege and placing new ethical obligations on both clients and lawyers.

5 min read Updated on 24 Mar 2026

Open-source AI is becoming increasingly popular amongst clients of law firms and the public at large. However, after a recent decision by the Upper Tribunal in Munir v Secretary of State for the Home Department [2026] UKUT IAC 81, the use of AI could have drastic consequences – in this case the loss of legal professional privilege.

The judgment

The judgment reads:

“to put client letters and decision letters … into an open source AI tool, such as Chat GPT, is to place this information on the internet in the public domain, and thus to breach confidentiality and waive legal privilege”

What is legal professional privilege?

Legal professional privilege is a right which allows clients to communicate openly and freely with their lawyer without fear of that correspondence or advice being disclosed to third parties or used against them in proceedings. Clients can waive this privilege either explicitly or implicitly. An implicit waiver of legal professional privilege can be implied through conduct, which, following Munir, may now include placing advice into an open-source AI tool.

Once legal professional privilege has been waived, even unintentionally, this cannot be regained. It is therefore very important to consider what information you put into open-source AI tools.

If you have received advice from your lawyer and placed it into an open-source AI tool like ChatGPT, according to the Munir judgment that advice is deemed to have been placed in the public domain. It is therefore at risk of being accessed by third parties and could potentially be used as evidence in court proceedings.

A warning for lawyers

Lawyers have a duty to maintain client confidentiality under paragraph 6.3 of the SRA Code of Conduct, as well as to comply with the professional and ethical standards imposed by their various regulating bodies. To mitigate these risks, the Law Society has recommended that lawyers consider the following questions before sharing any client data with a generative AI tool:

  • Where is the data being processed?
  • Who is processing the data?
  • How and where is the data stored?
  • Who has access to the data?
  • What are you using the generative AI output for?
  • Could alternative, more appropriate means be used to achieve this?

The Law Society has also stressed the importance of lawyers adhering to their organisation’s AI policy, if applicable.

What does this mean for me?

Make sure to remove any confidential or personal information before submitting prompts to an open-source AI tool. A useful rule of thumb is to ask yourself whether you would be happy for your legal advice to be posted on social media. If the answer is no, you probably should not be entering it into an open-source AI tool.

Every AI tool stores its data differently, and some will inevitably be riskier than others. The Munir judgment does specify that:

“closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks.”

If you have any doubts as to whether you can safely enter personal information into an AI tool, check its terms of service and privacy policy to properly evaluate how it uses your data.

How can Ellis Jones help?

If you would like help or advice from one of our specialists, please do not hesitate to contact us on 01202 525333.
