If you’ve ever used artificial intelligence to help with your work, you know it can sometimes be a “confident liar.” In the legal world, where one wrong fact can ruin a person’s life, companies promised they had fixed this problem with a new technology called RAG.
But according to a new study from researchers at Stanford and Yale, those promises might be more science fiction than science fact.
> "RAG is broken and nobody's talking about it. Stanford researchers exposed the fatal flaw killing every 'AI that reads your docs' product in existence. It's called 'Semantic Collapse,' and it happens the second your knowledge base hits critical mass. If you've noticed your AI…"
>
> — How To AI (@HowToAI_), April 13, 2026
Standard AI, like the basic version of ChatGPT, is "closed-book": it tries to answer questions from memory alone. RAG (Retrieval-Augmented Generation) was supposed to be the "open-book" version. It's designed to first look up passages in a specific pile of legal documents and then base its answer on what it actually finds, so it doesn't make things up.
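To make the "open-book" idea concrete, here's a minimal sketch of the two-step loop every RAG product is built around: retrieve first, then generate. The `embed()` and `generate()` helpers are hypothetical stand-ins (a real system would call an actual embedding model and an actual language model), not any vendor's API.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model: maps the text
    to a pseudo-random unit vector so this sketch runs standalone."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def retrieve(question: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by cosine similarity to the question; keep the top k."""
    q = embed(question)
    return sorted(documents, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for the language-model call."""
    return f"[model answer grounded in]\n{prompt}"

def answer(question: str, documents: list[str]) -> str:
    """'Open book': fetch the most relevant passages first, then have the
    model answer from those passages instead of from memory."""
    context = "\n\n".join(retrieve(question, documents))
    return generate(f"Using ONLY these passages:\n{context}\n\nQuestion: {question}")
```

Everything downstream depends on `retrieve()` surfacing the right passages, and that is exactly the step the study says breaks down.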
The problem? The “book” has become too big.
Researchers found that when an AI's library hits a certain size (around 10,000 documents), a glitch called "Semantic Collapse" sets in. Imagine a library where the librarian is so overwhelmed that they can no longer tell the difference between a serious history book and a comic book because they both have "Washington" in the title. Mathematically, the relevance scores the AI uses to rank documents bunch together until nearly every document looks "relevant," leading it to give answers that sound professional but are totally wrong.
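You can watch the mathematical side of the overwhelmed librarian in a toy simulation. To be clear, this is an illustration with random vectors, not the study's method: as you add more and more unrelated documents, the best-scoring unrelated document creeps toward the genuinely relevant one, and the ranking loses its ability to tell them apart.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 384  # a typical embedding size

# The question, as a unit vector.
query = rng.standard_normal(dim)
query /= np.linalg.norm(query)

# One genuinely relevant document: the query plus a little noise.
relevant = query + 0.15 * rng.standard_normal(dim)
relevant /= np.linalg.norm(relevant)
rel_score = float(relevant @ query)

# Grow the library with unrelated documents and track the best score
# any *irrelevant* document manages against the same question.
for n_docs in (100, 1_000, 10_000, 100_000):
    junk = rng.standard_normal((n_docs, dim), dtype=np.float32)
    junk /= np.linalg.norm(junk, axis=1, keepdims=True)
    best_junk = float((junk @ query).max())
    print(f"{n_docs:>7} docs | relevant: {rel_score:.3f} | best irrelevant: {best_junk:.3f}")

# The relevant score stays put while the best irrelevant score keeps
# climbing: with enough documents, noise starts to look like signal.
```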
The study tested top-tier legal AI tools used by real lawyers, including Lexis+ AI and Westlaw. Even though these tools are marketed as "hallucination-free," the researchers found they gave incorrect or badly sourced answers 17% to 33% of the time.
Some of the AI’s failures included:
- Reversing Reality: Citing a case but claiming the judge said “Yes” when they actually said “No.”
- Inventing Laws: Writing out entire paragraphs of legal rules that don’t actually exist.
- Breaking the Chain of Command: Claiming a small local court "overruled" the U.S. Supreme Court, which is impossible in the U.S. legal system, where lower courts are bound by the decisions of higher ones.
The researchers say law is a "final boss" for AI because it's not just about matching keywords. It's about "vibes" and hierarchy: what a ruling actually means, and which court outranks which.
For example, if you ask about a “car crash,” the AI might find a document about “cars” in a tax law book. A human knows that’s the wrong kind of law, but the AI just sees the word “car” and thinks it’s helping. The AI also suffers from “sycophancy” — a fancy word for being a “yes-man.” If a user asks a question with a wrong assumption, the AI often just agrees with them instead of correcting them.
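Here's a deliberately tiny, made-up version of that "car crash" failure, scoring documents by naive word overlap. Real tools use embeddings rather than raw word counts, but the shape of the mistake is the same: nothing in the score knows which kind of law the question is about.

```python
def overlap_score(query: str, doc: str) -> int:
    """Naive bag-of-words overlap: no notion of legal domain or hierarchy."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

documents = {
    "personal injury law": "liability for a car crash that causes bodily injury",
    "tax law":             "depreciation rules for a company car under the tax code",
}

query = "who is liable for a car crash"
for domain, text in documents.items():
    print(f"{domain}: {overlap_score(query, text)}")

# personal injury law: 4   (shares "for", "a", "car", "crash")
# tax law: 3               (shares "for", "a", "car")
# The wrong kind of law scores nearly as high as the right one;
# a human lawyer would never confuse the two.
```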
You might not be a lawyer, but this affects how much we can trust the technology. Right now, lawyers are required by ethical rules to double-check everything an AI tells them. If they don't, they can face court sanctions or even lose their licenses.
The study warns against "AI washing," when companies claim their tech is perfect just to sell more subscriptions. The researchers conclude that while AI is a great tool for a "first draft," it is nowhere near ready to be the final word in a courtroom.
For now, the “hallucination-free” AI lawyer is a myth. If you’re heading to court, you’d better hope your lawyer is still doing their own reading.
