

More reasons to be wary of AI in legal work

We’ve recently touched on some of the shortcomings and risks associated with the use of AI (artificial intelligence) in legal work.

In a striking demonstration of this point, a case recently hit the headlines when a UK barrister submitted court documents containing fabricated case citations, allegedly generated by ChatGPT.

Let’s take a look at the details and see what conclusions can be drawn for both the legal profession and for you as a consumer.


The incident: fabricated citations in court filings

The case in question involved a barrister who, during legal proceedings, included references to cases that, on scrutiny by the presiding judge, were found to be non-existent. These fictitious citations were reportedly produced by ChatGPT, an AI language model that is just one of a large swathe of AI products now in widespread use across millions of homes and businesses.

The barrister’s failure to verify the authenticity of these references before submission has raised serious questions about the ethical use of AI in legal practice, and the professional repercussions were significant. The barrister and the instructing solicitors were held jointly liable for £4,000 in costs, and the barrister was reported to their regulatory body for what the court called “appalling professional misbehaviour”.

This is not an isolated incident. In the United States, a similar situation arose in the case of Mata v. Avianca, Inc. (yes, we have checked this case reference for authenticity), where attorneys submitted a legal brief containing fake case citations generated by AI. The court imposed a $5,000 fine on the lawyers involved, emphasising the seriousness of relying on unverified AI-generated content in legal documents.


The phenomenon of AI “hallucinations”

At the heart of these incidents is the issue of AI “hallucinations”. These are instances where AI models generate information that appears plausible but is entirely fabricated. In the legal domain, such hallucinations can lead to the creation of fictitious case laws, statutes, or legal principles, which, if uncritically accepted, can compromise the integrity of legal proceedings.

A study titled “Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models” found that AI models like ChatGPT can produce hallucinated legal content at alarming rates, with significant implications for legal research and practice.


Ethical and professional implications

The use of AI-generated content in legal documents without proper verification raises serious ethical and professional concerns. Legal professionals have a duty to protect the integrity of the legal system and to ensure the accuracy and reliability of the information they present in court, taking care never to mislead the court or fabricate evidence. Relying on AI tools without adequate oversight can lead to breaches of this duty, potentially resulting in professional misconduct charges, sanctions, or reputational damage.

Regulatory bodies, such as the Solicitors Regulation Authority (SRA) in the UK, have highlighted the risks associated with AI in legal practice. They emphasise the importance of maintaining professional standards and the necessity for solicitors to critically assess AI-generated content before its use in legal contexts.


It seems abundantly clear that AI is not yet fit for purpose in many legal processes. Not only is it impersonal where a conscientious human lawyer is not, but the potential for error and “hallucinations” poses a serious risk both to legal practices and to you, the client.

As a consumer, you should make sure your solicitor is transparent about the use of AI tools in their legal work. If their use of the technology is extensive, you may want to think about whether they’re the right professional for you.


The bottom line

The integration of AI into legal practice offers significant potential benefits, including increased efficiency and faster access to information, but at present the risks can outweigh them. Until those risks are properly addressed, reliance on AI tools like ChatGPT will continue to carry serious professional and ethical pitfalls.

Gorvins operate with extreme caution around the use of AI and don’t rely on AI to do any important legal work. We pride ourselves on the personable, professional service we have always offered to our clients and will continue to make that our absolute priority going forward.

If you need help with a legal matter and want to ensure it’s a human behind the advice, contact Gorvins Solicitors today. Call us on 0161 930 5151, email us at enquiries@gorvins.com or fill in the online form.