How AI Got a Lawyer Sanctioned in Federal Court
Chaney v. Transdev Services Inc., C.D. Cal. — A 2026 federal ruling on AI hallucinations, professional duty, and the cost of skipping verification.
In Chaney v. Transdev Services Inc. et al. (C.D. Cal. Apr. 28, 2026), a federal district judge sanctioned an attorney after he submitted legal briefs containing fabricated case law generated by an artificial intelligence tool. The attorney relied on LexisNexis+ Protégé, a generative AI writing assistant, which produced a citation to a case that does not exist, In re Shubert, 799 F.3d 1124 (9th Cir. 2015). He then included that citation in his filings three separate times without verifying that the case was real.
What Happened
The attorney was representing plaintiff Tekoma Chaney in an employment dispute against Transdev Services Inc. In drafting opposition briefs to two summary judgment motions, he used LexisNexis+ Protégé to help with legal writing. The AI tool generated In re Shubert as supporting authority. The citation carries a reporter and pin cite that correspond to an unrelated real case — a detail that masked the fabrication from a casual glance.
The nonexistent case appeared in his opposition to defendant Ogilvie's motion for summary judgment (p. 17) and twice more in his opposition to Transdev's motion (pp. 15–16). When the court flagged the citation, the attorney initially characterized the issue as a simple mistake rather than disclosing the AI's role. Only after the court demanded a formal explanation did he acknowledge that the citation had come from a generative AI tool.
Making matters worse, his initial declaration to the court failed to disclose the additional instances of the same fabricated citation — an omission the court characterized as an attempt to minimize the scope of the misconduct.
Four Mistakes That Led to Sanctions
Failure to verify AI output
The attorney did not confirm whether the AI-generated citation existed before filing. Verifying case law is a foundational duty — looking up the cite in Westlaw or Lexis itself takes under a minute. Skipping that step for an AI-generated citation is, in the court's view, a violation of basic professional duty.
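The verification step the court describes can even be partially automated before the manual lookup: pull every reporter citation out of a draft so each one can be checked in a real database. The sketch below is illustrative only; the regex covers a few common federal reporters and is nowhere near a complete Bluebook parser, and the function name and sample text are hypothetical, not drawn from the court's record.

```python
import re

# Illustrative pattern for common federal reporter citations
# (F., F.2d, F.3d, F.4th, F. Supp. 2d/3d, U.S., S. Ct.).
# This is a sketch for pre-filing triage, not a full citation parser.
CITE_RE = re.compile(
    r"\b\d{1,3}\s+"                                               # volume
    r"(?:F\.(?:\s?(?:2d|3d|4th|Supp\.(?:\s?(?:2d|3d))?))?"        # F. reporters
    r"|U\.S\.|S\.\s?Ct\.)"                                        # U.S. / S. Ct.
    r"\s+\d{1,5}\b"                                               # first page
)

def extract_citations(text: str) -> list[str]:
    """Return every reporter citation found in the text, in order,
    so each can be looked up manually before the brief is filed."""
    return [m.group(0) for m in CITE_RE.finditer(text)]

brief = ("See In re Shubert, 799 F.3d 1124 (9th Cir. 2015); "
         "cf. Mata v. Avianca, 678 F. Supp. 3d 443.")
print(extract_citations(brief))
# → ['799 F.3d 1124', '678 F. Supp. 3d 443']
```

A script like this only surfaces the citations; the point of the ruling is that the human lookup of each one, hallucinated or not, cannot be skipped.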
Repeated use of the same fake citation
The fabricated case appeared three times across two separate briefs. The repetition signaled not an isolated oversight but a pattern of unverified reliance on AI output — which the court treated as an aggravating factor when assessing sanctions.
Misleading the court about the source of the error
When the fabrication was discovered, the attorney initially framed it as a clerical mistake — not as a product of AI. Candor to the tribunal is an absolute professional obligation. Obscuring the origin of the error, even temporarily, compounded the original violation.
Incomplete disclosure of all instances
His initial declaration to the court omitted some occurrences of the fake citation, leaving the court to discover them independently. The court explicitly cited this incomplete disclosure as evidence of bad faith — not negligence.
Court's Ruling and Penalties
The court found that the attorney acted in bad faith and imposed sanctions, including a requirement that he disclose the AI misuse in future filings within the district. The ruling emphasized that lawyers must understand the specific limitations of AI tools, including the risk of hallucinated citations, and must independently verify every legal authority before relying on it. Ignorance of how the technology works is not a defense.
Why This Case Matters
Chaney v. Transdev is part of a growing line of post-Mata v. Avianca decisions in which federal courts have moved from warnings to concrete sanctions for AI hallucinations in legal filings. Several features make it particularly instructive:
The AI tool was a named legal product. LexisNexis+ Protégé is a specialized legal research assistant marketed to attorneys — not a general-purpose chatbot. The court's ruling implicitly rejects any assumption that AI tools built for legal use are hallucination-proof. Verification is required regardless of the tool's branding.
The penalty extends beyond money. The requirement to disclose prior AI misuse in every future filing within the district creates an ongoing professional consequence — a scarlet letter that will follow the attorney into every courtroom in the C.D. Cal. for the foreseeable future.
Bad faith, not negligence. Courts that stop at negligence findings often limit themselves to reprimands. Here, the bad faith finding — anchored in the failure to disclose AI involvement and the incomplete accounting of instances — unlocked a harder sanction and a judicial notification requirement. The lesson: the cover-up compounds the original error.
Professional responsibility is technology-neutral. The court did not create a new AI-specific duty. It applied existing duties of competence, candor, and diligence to a new context. Attorneys who use AI tools remain fully responsible for the accuracy of every citation they submit.
Source Document
The court order is publicly available. This analysis is based on the full text of the April 28, 2026 ruling.
View court order (PDF) →