LLM hallucinations got a lawyer sanctioned. Treat AI output like untrusted input.

This is not an “AI ethics” story. It is a controls story.

Courts have been sanctioning lawyers who file briefs with citations to cases that do not exist. The explanation is consistent: the lawyer used an AI tool, trusted the output, and nobody verified the underlying sources before the document hit the docket.

The lesson is not “never use AI.” The lesson is that if AI touches high-stakes work product, you need verification gates, logging, and a defensible process the same way you would for any other high-risk system.

(Obvious note: this is not legal advice. It is operational risk guidance.)

The case in 60 seconds

The fact pattern is painfully simple:

1) Someone uses an AI tool to draft or support a legal filing.
2) The output includes fabricated citations, quotes, or procedural “facts.”
3) The filing is submitted.
4) Opposing counsel or the court checks.
5) Sanctions and reputational damage follow.

If you work in legal ops, compliance, investigations, or legal-tech, this should feel familiar. It is the same failure mode as any other incident: an untrusted input crossed the boundary into an authoritative artifact.

Reframe: this is a control failure, not a technology surprise

AI output is persuasive, fast, and sometimes wrong.

That is not novel. What is novel is the speed at which a wrong answer can become a signed PDF.

If your organization is treating a chatbot like a junior associate whose work nobody reviews, you are already in discovery.

The signature block matters. Courts are not interested in “the software did it.” They see it as: you filed it, you own it.

Minimal defensibility checklist (copy-paste)

You do not need a big program to avoid this class of embarrassment. You need four things: verification, a policy that matches reality, traceability, and thresholds.

1) Citation verification is mandatory

  • If an AI tool produced a citation, a human must open the underlying source and confirm it exists.
  • Define who verifies (drafter, reviewer, librarian, paralegal) and when (before internal approval, before filing).
  • Treat verification as a required sign-off, not a suggestion (a minimal gate sketch follows).
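
To make the gate concrete, here is a minimal Python sketch: pull citation-like strings out of a draft and refuse to proceed until each one has a named human verifier. The regex, the field names, and the `filing_gate` function are illustrative assumptions, not a real citator or filing system.

```python
import re
from dataclasses import dataclass

# Rough pattern for US reporter citations, e.g. "410 U.S. 113" or "123 F.3d 456".
# A real gate would use a proper citator; this regex is only a sketch.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.\dd|F\. Supp\.(?: \dd)?)\s+\d{1,4}\b"
)

@dataclass
class CitationCheck:
    citation: str
    verified: bool = False
    verified_by: str = ""  # a named human, never a tool
    source_link: str = ""  # where the reviewer actually opened the source

def build_checklist(draft_text: str) -> list[CitationCheck]:
    """Every citation-like string becomes a required sign-off item."""
    return [CitationCheck(c) for c in sorted(set(CITATION_RE.findall(draft_text)))]

def filing_gate(checks: list[CitationCheck]) -> None:
    """Block the filing step until every citation has a named verifier."""
    unverified = [c.citation for c in checks if not (c.verified and c.verified_by)]
    if unverified:
        raise RuntimeError(f"Cannot file: unverified citations: {unverified}")
```

The point is the raise: the workflow physically cannot proceed on vibes.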

2) “AI use” policy that matches reality

  • Define where AI is allowed (drafting, summarization, formatting) and where it is not (final legal conclusions, unverified citations).
  • Define disclosure expectations (internal, client, tribunal) based on your jurisdiction and risk posture.
  • Train the team on what counts as “AI-assisted.” If people cannot answer that consistently, you have a governance gap.

3) Capture the chain of custody

For any AI-assisted work product that could become evidence, capture:

  • Prompt (or a sanitized version)
  • Output
  • Model/tool name and version (when available)
  • Human edits and verification steps
  • Approvals

This is not surveillance. It is defensibility.
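
A minimal sketch of that capture as a single record, assuming a Python-based workflow; the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIWorkRecord:
    matter_id: str
    prompt: str               # or a sanitized version if the prompt is sensitive
    output: str
    tool_name: str
    tool_version: str         # "unknown" is acceptable; omitting the field is not
    human_edits: str          # summary or diff of what the reviewer changed
    verification_notes: str   # links or screenshots proving citations were checked
    approved_by: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

One frozen record per AI-assisted artifact is usually enough; the discipline is in filling verification_notes honestly.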

4) Logging, retention, and matter IDs

  • Tie AI usage artifacts to the matter ID.
  • Retain what you need for internal accountability and external scrutiny.
  • Protect integrity. If the same person can generate output and alter logs, you do not have an audit trail (a tamper-evident sketch follows).
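
To make the integrity point concrete: a hash chain is about the cheapest form of tamper evidence. Each entry’s hash commits to the previous entry, so silently editing history breaks every hash after the edit. A sketch under those assumptions, not a production audit log:

```python
import hashlib
import json

def append_entry(log: list[dict], entry: dict) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any edit to a past entry breaks the chain."""
    prev_hash = "genesis"
    for item in log:
        payload = json.dumps(item["entry"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if item["prev_hash"] != prev_hash or item["hash"] != expected:
            return False
        prev_hash = item["hash"]
    return True
```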

5) Human review thresholds

Define thresholds that trigger mandatory human review by a qualified reviewer (a machine-checkable version follows the list):

  • Anything filed with a court
  • Anything sent to a regulator
  • Anything that asserts legal authority (citations, quotes, precedent)
  • Anything that will be relied upon to make a business decision
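
Those thresholds can live in a policy PDF, but encoding them lets a workflow tool enforce them. A hedged sketch; the function and field names are invented for illustration:

```python
def requires_qualified_review(artifact: dict) -> bool:
    """Mirror the thresholds above as a hard gate, not a reminder."""
    return any([
        artifact.get("destination") in {"court", "regulator"},
        artifact.get("asserts_legal_authority", False),   # citations, quotes, precedent
        artifact.get("drives_business_decision", False),
    ])

# Usage: gate the actual send/file action on the result.
draft = {"destination": "court", "asserts_legal_authority": True}
assert requires_qualified_review(draft)
```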

What to implement this week (3 tiers)

Tier 1: no new tooling (process-only)

1) Add a “verification required” checklist to your filing workflow.
2) Add a one-paragraph AI use section to your matter opening template.
3) Add a standard note: “AI output treated as untrusted input, verified before use.”

Tier 2: light tooling (teams can do this fast)

1) Store prompts and outputs in your document system under the matter.
2) Require reviewers to attach verification notes (links, screenshots, or citations checked).
3) Centralize approvals so they are auditable.

Tier 3: product (if you are building legal AI features)

If you are building legal AI features, your differentiator is evidence:

  • Built-in citation verification workflows
  • Immutable logs (or at least tamper-evident)
  • Model and prompt versioning
  • Reviewer sign-off gates
  • Exportable audit packets for matters (sketched below)
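
As one example of what “exportable audit packet” can mean in practice, here is a sketch that bundles a matter’s AI-use records (dicts shaped like the capture record above) into one artifact with a packet-level digest. The format is an assumption, not a standard:

```python
import hashlib
import json

def export_audit_packet(matter_id: str, records: list[dict]) -> str:
    """Bundle all AI-use records for a matter into one reviewable artifact."""
    body = json.dumps({"matter_id": matter_id, "records": records}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return json.dumps(
        {"matter_id": matter_id, "records": records, "sha256": digest},
        indent=2,
    )
```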

“Defensible AI” will matter more than “magical AI.”

The question to ask your team

If a judge asked tomorrow, “How do you verify AI-assisted citations before filing?”, would you answer with a process, or a vibe?

If you want a one-page “Defensible AI Use Policy + Verification Checklist,” reply “CHECKLIST” and I will post the template.
