Legal AI Hallucinated Case Law
Category: Pure Software & Algorithmic Agents
Hazard Definition
Legal AI hallucinated case law refers to incidents where artificial intelligence tools used for legal research generate fabricated case citations, invented judicial opinions, or nonexistent legal authorities that attorneys then cite in court filings. These hallucinations—a known failure mode of large language models—can result in sanctions against attorneys, harm to clients, wasted judicial resources, and erosion of trust in AI-assisted legal practice.
Mechanism of Harm
Legal AI hallucinations emerge from fundamental characteristics of large language models and their interaction with legal practice workflows.
Generative model architecture: Large language models generate text by predicting probable next tokens based on training data patterns. When asked for legal citations, these models may generate plausible-looking case names, citation formats, and holdings that follow the patterns of real legal text but correspond to no actual judicial decision.
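The point above can be illustrated concretely: a hallucinated citation can be syntactically flawless, so no format check alone can catch it. The sketch below is a minimal illustration, assuming a simplified regex for a U.S. federal reporter citation (not a complete Bluebook grammar); the example string follows the fabricated citations reported in the Mata v. Avianca filings.

```python
import re

# Simplified pattern for a U.S. federal reporter citation, e.g.
# "Party v. Party, 925 F.3d 1339 (11th Cir. 2019)".
# Illustrative only; real citation formats are far more varied.
CITATION_PATTERN = re.compile(
    r".+ v\. .+, "                               # party names
    r"\d+ (?:F\.(?:2d|3d|4th)|U\.S\.) \d+ "      # volume, reporter, page
    r"\([^)]+\)"                                 # court and year
)

# A fabricated citation of the kind submitted in Mata v. Avianca:
# it is syntactically perfect but corresponds to no real decision.
fabricated = "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)"

# Format validation passes; only a lookup in an authoritative database
# (Westlaw, Lexis, PACER) could reveal that the case is invented.
print(bool(CITATION_PATTERN.fullmatch(fabricated)))  # True
```

This is why hallucinated citations survive a reader's first glance: the model has learned the surface pattern of legal citation exactly, and the pattern carries no information about whether the underlying decision exists.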
Confident presentation: AI systems typically present fabricated citations with the same confident tone as accurate information, providing no indication that the referenced case may not exist. The generated text may include realistic details such as judge names, dates, procedural histories, and quotations—all invented.
Verification failures: Attorneys who rely on AI-generated research without independently verifying citations through authoritative legal databases may submit fabricated authorities to courts. Time pressure, unfamiliarity with AI limitations, and misplaced trust in technology contribute to verification failures.
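The verification step described above can be sketched as a simple pipeline: extract every citation from a draft, then check each against an authoritative source and flag anything unconfirmed. This is a minimal sketch; the `KNOWN_CASES` set and the `unverified_citations` helper are hypothetical stand-ins for a real query against Westlaw, Lexis, or a court docket system.

```python
import re

# Hypothetical allowlist standing in for an authoritative database.
# Real verification must query such a service, not a local set.
KNOWN_CASES = {
    "347 U.S. 483",   # Brown v. Board of Education
    "410 U.S. 113",   # Roe v. Wade
}

# Simplified volume/reporter/page matcher (illustrative, not exhaustive).
CITE_RE = re.compile(r"\b(\d+)\s+(U\.S\.|F\.(?:2d|3d|4th))\s+(\d+)\b")

def unverified_citations(brief_text: str) -> list[str]:
    """Return volume/reporter/page strings not found in the authority set."""
    flagged = []
    for vol, reporter, page in CITE_RE.findall(brief_text):
        key = f"{vol} {reporter} {page}"
        if key not in KNOWN_CASES:
            flagged.append(key)
    return flagged

draft = (
    "Under Brown v. Board of Education, 347 U.S. 483 (1954), and the "
    "purported holding of Smith v. Acme Corp., 999 F.3d 1234 (9th Cir. 2021), ..."
)
print(unverified_citations(draft))  # ['999 F.3d 1234']
```

The fabricated `Smith v. Acme Corp.` citation is flagged only because the lookup fails, not because anything about its text is suspicious, which is the core reason independent database verification, rather than reading the AI output carefully, is the necessary safeguard.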
Documented Incident Patterns
Federal court sanctions, state bar proceedings, and judicial opinions have documented specific incidents of hallucinated legal citations being submitted to courts.
Federal court sanctions: In a widely reported 2023 case, a federal judge in the Southern District of New York sanctioned attorneys who submitted a brief containing multiple fabricated case citations generated by an AI chatbot. The Mata v. Avianca court record documents the incident and resulting sanctions order. The court found that several of the cited cases did not exist and that the attorneys had failed to verify the citations before filing.
Additional documented filings: Following the initial high-profile case, courts across multiple jurisdictions have identified further instances of AI-hallucinated citations in filed documents. Some have resulted in sanctions; others have drawn judicial warnings or been identified sua sponte by the court.
State bar investigations: Bar disciplinary authorities in multiple states have opened investigations into attorneys who submitted AI-generated fabricated citations, examining potential violations of competence and candor rules.
Regulatory Status
No regulation specifically governs AI use in legal practice, though existing professional responsibility rules regarding competence, supervision, and candor to tribunals apply to AI-assisted work product. The ABA Model Rules of Professional Conduct establish the ethical framework under which attorney AI use is evaluated.
Multiple federal courts have issued standing orders requiring attorneys to disclose AI use in document preparation or certify that AI-generated content has been verified by a human. These orders vary in scope and requirements across jurisdictions.
State bar associations have begun issuing guidance on ethical AI use in legal practice, generally emphasizing that attorneys remain responsible for verifying AI-generated work product and cannot delegate professional judgment to automated systems.
Known Data Gaps
- Total number of court filings that contain AI-hallucinated citations but have never been identified as such
- Frequency of hallucination across different legal AI tools and use cases
- Client outcomes in cases where hallucinated citations were submitted
- Effectiveness of court disclosure requirements in preventing hallucinated citation submissions
Report an Incident
If you have identified fabricated legal citations in court filings or have been harmed by AI-hallucinated case law in legal proceedings, you may submit a confidential report for documentation and potential investigation.