Why More U.S. Lawyers Are Turning to ChatGPT for Legal Work

Why Do Lawyers Keep Using ChatGPT—Even When It Gets Them in Trouble?
Every few weeks, there’s another headline about a lawyer facing sanctions for submitting court filings with, as one judge put it, “bogus AI-generated research.” The pattern is familiar: an attorney uses ChatGPT or a similar large language model (LLM) to assist with legal research—or worse, to draft filings. The tool generates convincing but fictitious case citations, and the mistake only surfaces when a judge or opposing counsel calls it out. In several instances, most notably the 2023 aviation case Mata v. Avianca, lawyers have been fined for relying on AI hallucinations.


So why haven’t lawyers stopped using ChatGPT and similar tools?
In short: time pressure and the rapid normalization of AI in professional workflows.
Legal research platforms like LexisNexis and Westlaw now offer AI-powered features. For attorneys managing heavy caseloads, these tools promise a tempting boost in speed and efficiency. Most lawyers aren’t handing the drafting reins over to ChatGPT entirely, but many are using it and other LLMs for research. The problem is that many attorneys, like the general public, still don’t fully understand how these models work. One of the lawyers sanctioned in the Avianca case admitted he thought ChatGPT was just a “super search engine,” only to learn the hard way that it can generate entirely fabricated but highly plausible citations.


Andrew Perlman, dean of Suffolk University Law School, believes most lawyers are using AI responsibly; the ones who end up in the headlines are the exceptions. “These hallucination problems are real and serious,” he said, “but they don’t mean the tools lack value. There are tremendous potential benefits in using AI to enhance legal services.”
Indeed, according to a 2024 Thomson Reuters survey, 63% of lawyers said they’ve used AI in their work, and 12% use it regularly. Common uses include summarizing case law and searching for relevant statutes, forms, or sample language. Lawyers told Thomson Reuters they see AI primarily as a time-saver—and for half of those surveyed, exploring how to integrate AI was a top priority. As one respondent put it, “The role of a good lawyer is to be a trusted advisor—not a document factory.”
But AI-generated documents often come with a caveat: accuracy isn’t guaranteed.
In one high-profile case, lawyers for journalist Tim Burke submitted a First Amendment-based motion to dismiss the charges against him after he was arrested for publishing unaired Fox News footage. Judge Kathryn Kimball Mizelle later struck the motion from the record after discovering it contained nine hallucinated citations. Attorney Mark Rasch took responsibility, noting he had used both ChatGPT Pro’s “deep research” feature and Westlaw’s AI tools.
Rasch isn’t alone. Lawyers representing AI firm Anthropic admitted to using Claude, the company’s own chatbot, to draft an expert declaration—only to find it included inaccurate information. In another case, misinformation expert Jeff Hancock relied on ChatGPT to help with citations in a filing supporting Minnesota’s deepfake regulation law. That filing also contained hallucinated sources.


These missteps aren’t trivial. In a case involving State Farm, Judge Michael Wilner initially found a legal brief persuasive—until he discovered the supporting case law didn’t exist. “I read their brief, was persuaded… and looked up the decisions to learn more about them—only to find that they didn’t exist,” Wilner wrote.
Still, AI can be valuable when used carefully. Perlman notes that many lawyers use it to sift through large discovery files, review opposing filings, or brainstorm arguments—not as a replacement for legal judgment. “Generative AI can help lawyers work better, faster, and cheaper,” he said. “But it’s no substitute for expertise.”
The risk, he added, is that users place too much trust in AI because its outputs sound polished. “People get lulled into thinking the responses are accurate simply because they’re well-worded,” Perlman warned.
Some lawyers have developed guardrails. Arizona state representative and election attorney Alexander Kolodin treats ChatGPT like a junior associate, using it to produce first drafts of legislation and legal amendments. “You don’t send out a junior associate’s work without checking the citations,” he said. “It’s not just machines that hallucinate.”


Kolodin said he uses both ChatGPT’s “deep research” mode and the AI tools embedded in LexisNexis—though he claims Lexis has a higher hallucination rate than ChatGPT. “ChatGPT’s hallucinations have dropped a lot in the past year,” he added.
As AI use becomes more widespread, professional oversight is catching up. In July 2024, the American Bar Association (ABA) issued Formal Opinion 512, its first formal guidance on lawyers’ use of generative AI. The opinion stresses that lawyers must maintain technological competence and understand both the benefits and risks of these tools, and it urges them to weigh data privacy risks and consider disclosing AI use to clients.


Perlman is optimistic. “Generative AI is likely the most transformative technology the legal profession has ever seen,” he said. “Soon, we won’t be worried about whether lawyers who use AI are competent—we’ll be worried about those who don’t.”
But judges like Wilner remain cautious. “Even with recent advances,” he wrote, “no reasonably competent attorney should outsource legal research and writing to AI—especially without verifying its accuracy.”
