This Month in Legal AI: New Tools for Lawyers, LLM Biases, and Avoiding Brainrot
KEY POINTS
- AI is evolving fast: beyond predictive LLMs toward "superintelligence." Power and compute concentrated among a few tech leaders could shape how these systems learn and what they prioritize.
- Bias is real and shows up by default. LLMs often reproduce historical and institutional bias (e.g., defaulting to a white male "attorney"); users must prompt deliberately to counteract it.
- Use AI as a thinking partner, not a crutch. MIT's "brain on AI" insight: over-reliance dulls critical thinking. Challenge outputs, ask for sources, and cross-verify in primary tools (e.g., Westlaw/Lexis).
- Courts and policy are catching up, but you still own the work. Handle suspected AI errors from the court with respectful corrections; California courts must adopt AI-use policies by Dec. 15, 2025. Bottom line: if you sign it, you're responsible.
- Tools differ and have trade-offs. Models show different "personalities" and compliance postures; consider privacy and even environmental costs. Practical wins: Google NotebookLM (podcast summaries plus mind maps) and new short-form video generation for demos.
Transcript:
(00:01) Howdy and welcome to the show. In this episode, we’re doing our monthly deep dive into the AI world here on Cooper’s Code. We have Sherrod Milanfar. Normally, we also have Marshall Cole—Marshall is headed out to trial on Monday, so he’s taking a pass. He’ll join us again next month.
(00:21) There’s been a lot going on in AI. We’ve got a few things to bring up and spitball—what’s been happening in AI and how it relates to our legal world. Anything tickling your brain right now—cool new things, changes in the law, AI bias—where do you want to start?
(00:51) What’s striking is how quickly AI is changing and evolving. It may help to talk about what AI is, what it’s likely to become, and how that might impact us as legal practitioners. Right now, these are large language models—mostly predictive: the model looks at your input and predicts the next words.
(01:19) Now many top AI executives are talking about “superintelligence,” meaning systems that go beyond prediction—attempting to learn, form a thought process, change responses, maybe develop intuition. It’ll be interesting if systems leapfrog learning and start thinking in ways that supersede us.
(02:06) A competing theory: with all the computing power accrued by a few individuals (Elon, Mark, Jeff, Sergey), do we end up with their AI world? Superintelligence shaped by the agendas/biases of the few who control compute.
(02:52) That’s possible. A small set of people could wield immense power by steering how AI learns—what’s important and what isn’t. Today, prompts like “attorney in a deposition” often default to a Caucasian male. Diversity usually appears only when explicitly requested—evidence of embedded bias.
(04:03) Pivoting to our pre-show reading: an article by James Mixon in the Daily Journal, "What AI learns from us and why that could be a legal problem." Institutional biases (e.g., Amazon's historic hiring tool) can translate into AI filtering resumes toward white men. The issue is twofold: how the AI is programmed, and our responsibility to write prompts that level the playing field.
(05:05) The question: given studies showing brains get "lazy" with assistive tools, will we take the time to craft better prompts or just accept defaults? Neuroscience distinguishes the fast, automatic "reptilian" brain from the slower, higher-order thinking brain (Kahneman's System 1 and System 2 in Thinking, Fast and Slow). Day to day, we must stay mindful.
(06:37) MIT’s “Your Brain on AI” study: if your AI thinking partner does all the thinking, your brain may under-exercise. Impact on meetings and strategy? It’s hard to stay vigilant because our minds choose the easy path. Growth happens outside the comfort zone—“your comfort zone is a dead zone.”
(08:53) Anecdote: an associate asked Claude about federal deposition appearance notices. Claude confidently cited Rule 30 for a proposition the rule doesn't support. When pushed to check, Claude reversed itself. As a thinking partner, great. As a reliance engine, troubling.
(10:15) If a human associate brought the same wrong answer from Westlaw/Lexis, we’d say “Are you sure?” Do the same with AI: challenge it, ask for sources, verify cross-platform. Add prompt constraints (“don’t hallucinate; cite verifiable sources”) to reduce error rates.
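(Editor's note: as an illustrative sketch only, not a quote from the episode, a research prompt using the constraints described above might read: "Under the Federal Rules of Civil Procedure, what notice is required to compel a party's appearance at a deposition? Cite only rules and cases you can verify, give the rule number or reporter citation for each, and say 'I can't verify this' rather than guessing." The exact wording is hypothetical; the point is to demand verifiable sources and give the model an explicit way to decline instead of hallucinating.)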
(11:58) Yes, products should default to accuracy, but reality differs. Most models warn they make mistakes. Due diligence is still on the user—just like supervising a junior associate.
(13:50) Judicial/clerks’ AI use: a judge issued a TRO with inaccuracies; no proof AI was used, but suspicion existed. The court corrected quickly, but once an order is out, damage can be done. How to flag such issues without insulting the court? A firm modeled a gentle correction letter—no accusations, no specific relief requested.
(17:30) Judges say many courts are working on AI policies, but they’re not finalized—rapidly evolving space.
(18:00) Since our last episode, the California Judicial Council adopted new rules governing generative AI in the court system. It's not a ban; courts must adopt AI-use policies by Dec. 15, 2025. Users must review and correct AI errors and disclose when AI is used for publicly accessible content. The bottom line remains: if you sign it, you're responsible.
(19:32) Concern: AI will have changed again before the December 2025 deadline; will policies be updated quarterly, semiannually? Law often lags tech. Guardrails beat overly specific rules. Responsibility for filed work endures, even with "AI associates."
(21:09) Implementation questions for courts: will they just provide access or also training to cement guardrails? Budgets matter. Practitioners may need to drop judicial analysis into their own AI to check for hallucinations.
(22:35) Method: go slow and be methodical to reduce errors, especially when moving from old-school methods to AI.
(22:53) Darker direction: a paper by Kenneth Payne & Baptiste Alou-… on strategic intelligence in LLMs. The authors had models play game-theory exercises (the Prisoner's Dilemma), and different models showed different "personalities" (e.g., Gemini more ruthless and retaliatory). Takeaway: not all AI is equal, which is why people run the same task across multiple models.
(24:12) Compliance note: Claude previously advertised HIPAA/SOC 2 compliance more clearly; now murkier. Consider safeguards for sensitive data. Model “horsepower” and compute access (Google/Gemini vs. others) may matter.
(25:57) Environmental costs: energy and water consumption of data centers (example: 500,000 gallons/day affecting a local community). Meta building a massive AI server farm. To what end? Broader implications beyond legal.
(27:20) (Jokes about Mars/Grok/Stranger in a Strange Land.)
(27:47) Cool new things: one favorite is Google's NotebookLM. Drop content in, and it turns it into a two-host audio podcast quickly and convincingly. Great for research when you don't have time to read; consume while doing chores.
(28:45) NotebookLM can now create mind maps: breaks down parties, legal issues, cited cases—bite-sized and digestible before deep reading.
(30:08) Use case: new California Supreme Court case (ambulance crash limitations period)—dump into NotebookLM, listen to the podcast, view mind map, then read fully. Multimodal exposure (hear + read) improves retention.
(31:10) Audio can make information “sticky.” It’s not perfect (mispronunciations), but it accelerates understanding before close reading.
(32:35) Video: Google's Veo 3 model can render eight-second clips, and with a tool like Flow you can chain clips together. You can input a deposition snippet, and it renders a scene or witness in minutes; useful for demonstratives.
(33:54) It’s hard to keep up—sharing discoveries helps. Trial work requires imagination to keep juries engaged; AI expands possibilities if you experiment and refine prompts.
(34:55) Closing thanks; wish Marshall luck at trial; invite audience contributions (podcasters.law). Please leave a five-star review and share. To everyone doing justice out there—happy hunting.