Lawyers Sanctioned for Fake AI Case Citations: This Month in Legal AI
KEY POINTS
1. First published case sanctioning lawyers for AI hallucinations
A California appellate court sanctioned attorneys $10,000 for submitting briefs with fabricated case citations created by generative AI tools—marking the first published opinion addressing AI misuse in legal filings.
2. “If you sign it, you own it.”
Lawyers are responsible for everything they submit, even if AI helped draft it. The duty of competence (Rule 1.1) includes technological competence—meaning attorneys must understand and verify AI outputs.
3. Law firms need AI-use policies
Firms should define when and how AI can be used, who can use it, and how results must be checked. Training and multi-platform verification (e.g., cross-checking AI results in Westlaw) are essential.
4. AI is a tool, not a shortcut
Used properly, AI enhances efficiency and expands thinking. Misused, it risks producing false citations and professional sanctions. Responsible lawyers use AI to augment human skill—not replace it.
5. Broader AI adoption and ethical awareness
The episode also covers emerging tools like Claude, Copilot, and AI scheduling assistants—reminding lawyers to balance innovation with caution, data privacy, and ethical accountability.
Transcript:
(00:01) [Music] Welcome to Cooper’s Code, the AI show. I’m thrilled to be doing this monthly session with Sherrod Milanfar and Marshall Pole. We’re going to be talking about what’s new in AI since the last time we recorded. So, without much further ado, let’s go ahead and kick it off.
(00:24) I wanted to turn it over to you, Marshall, because you were the one who initially surfaced the Noland case out of the Second District. I don’t think there’s a citation for it yet, but the case number is B331918 for anyone who wants to take a look. Do you want to just dive in and give us a little bit of background on what happened here?
(00:47) Sure. So, the case came to my attention when it was circulated by a professional group that I’m a board member of, the Association of Southern California Defense Counsel. And I’m just going to read the quote, because the court’s language is outstanding: it sends a message that people still seem not to be learning, right?
(01:06) Here’s what the Court of Appeal said:
“What sets this appeal apart and the reason we have elected to publish this opinion is that nearly all of the legal quotations in Plaintiff’s opening brief and many of the quotations in Plaintiff’s reply brief are fabricated. That is, the quotes Plaintiff attributes to published cases do not appear in those cases or anywhere else. Further, many of the cases Plaintiff cites do not discuss the topics for which they are cited. And a few of these cases do not exist at all.
These fabricated legal authorities were created by generative artificial intelligence tools that Plaintiff’s counsel used to draft his appellate briefs. The AI tools create fake legal authority, sometimes referred to as AI hallucinations, that were undetected by Plaintiff’s counsel because he did not read the cases the AI tools cited.”
(01:55) The court imposed a $10,000 sanction, payable by the attorneys to the court. It also directed the clerk to send a copy of the decision to the State Bar.
(02:36) This is the first published opinion in which a court chastised and sanctioned counsel for citing fabricated, AI-generated cases.
(03:16) Despite all the articles warning lawyers about this, people are still falling into the trap. It’s astonishing that the word hasn’t reached everyone yet.
(04:00) Some lawyers don’t even have Westlaw or LexisNexis access. One lawyer didn’t even own a cellphone.
(05:54) Many lawyers lack access to technology or the networks that spread this information.
(06:18) Under Rule 1.1 — the duty of competence — lawyers must remain technologically competent. The rules effectively require lawyers to keep pace with the technology they use.
(07:27) Some lawyers resist technology, others rush in without understanding it. Both are problematic.
(08:11) The key is using technology as a tool — not a crutch. You have to do due diligence.
(09:31) The issue spans generations, from first-year associates to senior lawyers. Many see AI as a shortcut instead of a research assistant.
(10:06) The problem isn’t just using AI; it’s using it blindly. Lawyers must verify all citations.
(10:26) Mistakes in appellate briefs can lead to serious professional consequences, including state bar action.
(11:11) The rule is simple: If you sign it, you own it.
(12:19) Firms should create internal policies for AI use:
- Define when and how AI can be used
- Train associates to verify outputs
- Ensure double-checking with reliable sources
(13:39) Policies differ by firm size and practice type, but training and verification are essential.
(14:07) Combining AI with legal databases like Westlaw increases efficiency while maintaining accuracy.
(15:09) Policies should also clarify what tools are used for which tasks. For example:
- Don’t rely on ChatGPT for citations
- Cross-check all results in Westlaw or Lexis
(16:22) Use multiple platforms to vet results. Don’t accept AI output at face value.
(17:18) Lawyers must fight the “lazy tendency” of relying on AI without verification.
(18:02) AI isn’t necessarily faster — it broadens thinking and expands approaches.
(18:50) The right mindset is using AI to enhance skills, not replace effort.
(19:41) Some firms, especially contingency-based ones, risk misusing AI to save time — that’s dangerous.
(20:43) Responsible firms give equal attention to all cases, regardless of payout.
(21:17) Some lawyers have successfully used AI-generated briefs, but only when they deeply understand the law and verify all sources.
(23:00) [Ad break] If you enjoy Cooper’s Code, leave a 5-star review and email interview suggestions to podcast@coopers.law.
(23:19) Next topic: Nvidia’s $100 billion investment in OpenAI.
(23:40) Microsoft’s $13 billion investment is now worth $150 billion. Larry Ellison’s net worth rose by $100 billion in one day.
(24:16) Jurors see huge numbers in the media, which may desensitize them to large verdicts.
(25:15) When jurors award large non-economic damages, defense firms must adapt their strategies.
(26:03) Discussion on “reptile theory” and the psychology of jury awards.
(27:52) Society is now “numb” to big numbers due to constant exposure to billion-dollar headlines.
(28:25) Defense lawyers must be as creative as plaintiff lawyers in countering such narratives.
(29:13) Legal strategies are “chess, not checkers.”
(29:44) Discussion shifts to Claude AI’s new features:
- Access to all past chats
- Create Word and Excel files
- Calendar integration (read-only)
(30:48) Concerns arise about privacy and security when giving AI calendar or email access.
(32:12) The new Word/Excel export feature is improving but not seamless yet.
(33:59) Persistent chat memory in AI tools can be valuable for continuity in long projects.
(34:31) Microsoft’s Copilot integration is being added to Outlook, featuring:
- Prompt Coach
- Writing Coach
- Idea Coach
- Career Coach
- Learning Coach
(35:11) Copilot can mimic your email writing style, but it still “feels artificial.”
(36:02) The hosts joke that maybe this podcast is AI-generated using their voices.
(36:26) AI could soon suggest time blocks and scheduling improvements — a potential game-changer for productivity.
(36:52) The hosts conclude that managing time and emails is still a major pain point.
(37:35) [Music] Outro.