When AI goes wrong – safeguarding your practice from GenAI risks

In short

  • Cases worldwide demonstrate that improper use of GenAI can have serious consequences for lawyers.
  • Lawyers must be cognisant that while AI tools can provide valuable assistance, they should never replace a solicitor’s professional judgment.
  • Law firm leaders must provide staff with AI training and set clear policies on its use to mitigate risks.

Generative artificial intelligence (GenAI) has rapidly transformed the legal landscape, potentially offering unprecedented efficiency for legal practitioners.

While GenAI tools offer opportunities such as faster research and drafting, they also introduce significant risks that Australian solicitors need to manage. Recent cases across multiple jurisdictions demonstrate that improper use of GenAI can have serious consequences. This article examines these risks and provides practical guidance for law firm managers to implement appropriate safeguards and maintain professional standards within their law practice.

Key risks for solicitors in using GenAI

  1. AI hallucinations and misinformation

The most publicised risk associated with GenAI is its tendency to produce "hallucinations": content that appears plausible and authoritative but is erroneous or entirely fictional. This phenomenon has led to legal cases in which practitioners have faced referrals to professional regulators, adverse costs orders and reputational damage.

In Australia, two recent cases highlight this danger. In Handa & Mallick [2024] FedCFamC2F 957 (19 July 2024), a solicitor was asked to show cause why he should not be referred to the Office of the Victorian Legal Services Board + Commissioner after submitting false, AI-generated case citations, causing the adjournment of a hearing. The solicitor admitted to using AI software without verifying the information before filing the submissions with the court.

Similarly, in Valu v Minister for Immigration and Multicultural Affairs (No. 2) [2025] FedCFamC2G 95, a lawyer filed submissions containing non-existent authorities, including fabricated quotes attributed to an Administrative Appeals Tribunal decision. The court referred the solicitor's conduct to the NSW Office of the Legal Services Commissioner for falling short of a legal practitioner's duty not to mislead a court.

International cases have produced similar outcomes. In Gauthier v Goodyear Tire & Rubber Co. (2024), a Texas lawyer used Claude AI to produce a legal submission without verifying the cited cases, resulting in a US$2,000 penalty and mandatory continuing education. In Mata v Avianca, Inc., No. 1:22-cv-1461 (S.D.N.Y. 2023), an attorney was ordered to pay a penalty of US$5,000 after using ChatGPT to generate submissions that included non-existent court authorities.

Ironically, on 15 May 2025, a lawyer for Anthropic, the developer of Claude AI, was forced to apologise after submitting a court filing that contained an incorrect citation generated by the company's own Claude AI chatbot.

  2. Confidentiality and privacy breaches

Perhaps the most critical professional concern is the risk to client confidentiality. Public AI platforms such as ChatGPT may store and use input data for further training. When solicitors input client information into these systems, they risk inadvertently exposing privileged information to third parties, potentially violating Rule 9 of the Australian Solicitors' Conduct Rules.

As stated by the Victorian Legal Services Board + Commissioner: "Lawyers cannot safely enter confidential, sensitive or privileged client information into public AI chatbots/copilots (like ChatGPT), or any other public tools." Even with commercial AI tools, practitioners must carefully review contractual terms to ensure information security.

Enterprise AI solutions, including private deployments, firm-specific implementations and commercial offerings such as Microsoft Copilot, offer enhanced confidentiality protections compared with public tools. These enterprise deployments typically operate within the firm's own infrastructure or cloud tenancy, with customised security controls. However, even these more secure systems are not infallible. Firms must still implement strict data-governance policies and remain vigilant about potential confidentiality risks.

Another consideration is preventing cross-client data contamination, where one client's confidential information might inadvertently influence AI outputs for another client. Law firms should consider whether their enterprise AI systems maintain strict client data separation and implement appropriate information barriers within AI training and usage protocols. Without these safeguards, even enterprise systems have the potential to lead to breaches of client confidentiality and conflicts of interest.
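
To make the "strict client data separation" point concrete, the minimal Python sketch below models an information barrier as per-client document silos: a query run on behalf of one client can never retrieve another client's material. This is a hypothetical illustration only (the ClientSiloedStore class and its methods are invented for this example); real enterprise deployments enforce separation at the platform level through tenancy and access controls, not in application code.

    from dataclasses import dataclass, field

    # Hypothetical sketch of an information barrier in an internal AI
    # retrieval store: documents live in per-client silos, and a search
    # for one client can only ever see that client's own silo.
    @dataclass
    class ClientSiloedStore:
        _silos: dict[str, list[str]] = field(default_factory=dict)

        def add_document(self, client_id: str, text: str) -> None:
            """File a document under exactly one client's silo."""
            self._silos.setdefault(client_id, []).append(text)

        def search(self, client_id: str, query: str) -> list[str]:
            """Search only the requesting client's silo; other silos are invisible."""
            silo = self._silos.get(client_id, [])
            return [doc for doc in silo if query.lower() in doc.lower()]

    store = ClientSiloedStore()
    store.add_document("client-a", "Deed of settlement, confidential terms ...")
    store.add_document("client-b", "Lease negotiation notes ...")

    # A query run for client B cannot surface client A's settlement terms.
    print(store.search("client-b", "settlement"))  # -> []
    print(store.search("client-a", "settlement"))  # -> client A's document only

The design point is that the barrier sits inside the retrieval layer itself, so no prompt wording or model behaviour can reach across it.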

The technology's rapid evolution means security vulnerabilities may emerge that were not initially apparent, requiring ongoing assessment and risk management practices.

  3. Professional independence and judgment

AI tools cannot replace a solicitor's professional judgment. Over-reliance on GenAI risks compromising the independent thinking and critical analysis that form the cornerstone of legal practice. The NSW Supreme Court Practice Note SC Gen 23, “Use of Generative Artificial Intelligence (Gen AI)”, emphasises that these tools "should never replace the lawyer's most important asset – the exercise of independent legal judgment."

In Mavundla v MEC (2025), a South African court found that blindly relying on AI for legal research is "irresponsible and downright unprofessional", breaching legal practitioners' duties to the court. The practitioners involved were referred to their regulatory body for investigation, highlighting the professional consequences of abdicating responsibility for legal research to AI systems.

  4. Legal duties, and regulatory and ethical compliance

Using GenAI without appropriate verification processes may breach multiple professional conduct rules under the Australian Solicitors’ Conduct Rules:

  • Rule 4: Duties of Competence, Integrity and Honesty
  • Rule 5: Dishonest or Disreputable Conduct
  • Rule 9: Confidentiality
  • Rule 19: Duty to the Court.

As the recent UK case Ayinde v The London Borough of Haringey [2025] demonstrates, courts can view the submission of fake cases as "professional misconduct" requiring referral to regulatory authorities. In the judgment, Justice Ritchie noted it would be "negligent for this barrister, if she used AI and did not check it, to put that text into her pleading."

Guidelines for safe integration

Court protocols and practice notes

NSW Supreme Court Practice Note SC Gen 23 provides valuable guidance for practitioners, prohibiting the use of GenAI to draft affidavits, witness statements and expert reports without prior leave. Where GenAI is used in written submissions, practitioners must verify all citations and references. Unless leave has been granted, affidavits must disclose that GenAI was not used in generating their content.

Practical tips for law firm managers

  1. Implement clear AI policies
    Develop comprehensive AI usage policies that specify:
  • Which AI tools are approved for use
  • Who may use these tools and for what purposes
  • What information can be entered into AI systems
  • Verification procedures for AI-generated content
  • Supervision requirements for junior staff.
  2. Establish confidentiality safeguards
  • Prohibit input of client information into public AI tools
  • Consider secure, firm-specific AI solutions with appropriate data protection
  • Review AI vendor contracts for confidentiality provisions
  • Implement technical controls to prevent unauthorised AI use.
  3. Create verification protocols
  • Require independent verification of all AI-generated content
  • Implement multi-level review processes for court documents
  • Document verification steps taken for all AI-assisted work
  • Never rely on AI tools alone for verification (a minimal citation-flagging sketch follows these tips).
  4. Provide staff training
  • Educate all staff on AI capabilities, limitations and risks
  • Conduct regular updates on emerging AI issues and case law
  • Include AI ethics in professional development programs
  • Ensure understanding of hallucination risks and verification requirements.
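
As a concrete aid to the verification protocols above, a short script can extract citations from a draft and turn them into a checklist for manual verification. The Python sketch below is hypothetical (the regex, the citation_checklist function and the sample text are all assumptions, and the pattern covers only Australian-style medium-neutral citations). It deliberately stops at flagging citations for a person to check against an authoritative source; automating the "verification" step itself would repeat the very failure described in the cases above.

    import re

    # Matches medium-neutral citations such as "[2024] FedCFamC2F 957".
    # A regex can only surface citations; it cannot confirm a case exists.
    CITATION_PATTERN = re.compile(r"\[(?:19|20)\d{2}\]\s+[A-Za-z][A-Za-z0-9]*\s+\d+")

    def citation_checklist(draft_text: str) -> list[str]:
        """Return unique citation strings found in a draft, in order of appearance."""
        seen: dict[str, None] = {}
        for match in CITATION_PATTERN.finditer(draft_text):
            seen.setdefault(match.group(0), None)
        return list(seen)

    if __name__ == "__main__":
        draft = (
            "As held in Handa & Mallick [2024] FedCFamC2F 957 and "
            "Valu v Minister for Immigration (No. 2) [2025] FedCFamC2G 95 ..."
        )
        # Print one checklist line per citation for a human reviewer.
        for citation in citation_checklist(draft):
            print(f"[ ] verify against an authoritative source: {citation}")

The output is a to-do list for a person, not a verdict from a machine, which keeps the final responsibility for verification where the conduct rules place it.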

Conclusion

GenAI offers significant benefits, but introduces substantial risks that must be carefully managed. As illustrated by recent cases, failure to properly verify AI-generated content can lead to professional discipline, financial penalties, reputational damage and regulatory referrals.

The legal profession's values of accuracy, integrity and independent judgment should guide AI implementation. By establishing clear policies, thorough verification processes and appropriate safeguards, law firm managers can harness AI's benefits while protecting their clients, their firm and the administration of justice.

With proper governance and human oversight, GenAI can enhance legal practice, rather than undermine it. However, as these technologies continue to evolve, vigilance and professional responsibility remain essential. AI should be treated as a tool to support legal expertise, not as a replacement for the careful, critical thinking that defines the legal profession.

* Simone Herbert-Lowe is a Partner, Cyber, Media and Technology at Clyde & Co. She wrote this article with assistance from Arman Salehirad and Jessica Kim, who are Associates at Clyde & Co.