Judge slams lawyers for ‘bogus AI-generated research’

Judge Penalizes Law Firms for Using AI Without Disclosure
In a recent ruling, California Judge Michael Wilner slapped two prominent law firms with a hefty fine of $31,000 for secretly relying on artificial intelligence during a civil lawsuit. According to Judge Wilner, the firms submitted a brief packed with “false, inaccurate, and misleading legal citations and quotations.” As reported by law professors Eric Goldman and Blake Reid on Bluesky, the judge expressed his frustration over what he deemed a reckless disregard for professional standards.
“I trusted the brief I read,” Judge Wilner wrote in his ruling. “I was convinced by the authorities they referenced, so I dug deeper to explore those cases myself. But they didn’t exist. It was terrifying. I nearly signed off on a judicial order based on fake information.”
According to court documents, the trouble began when the plaintiff’s legal team used an AI tool to draft an outline for a supplemental brief. That outline, already riddled with fabricated content, was then handed off to K&L Gates, another law firm working on the case. Neither firm fact-checked or verified the material before it was filed.
Judge Wilner found that “at least two of the cited authorities were entirely nonexistent.” When he asked for clarification, K&L Gates refiled the brief, and he uncovered even more fabricated references. He then issued an Order to Show Cause, and the firms’ lawyers admitted under oath what had happened. The attorney who created the original outline confessed to using Google Gemini along with legal research tools such as Westlaw Precision with CoCounsel.
A Pattern of Misuse
This isn’t the first time lawyers have landed in hot water over AI misuse. Former Trump attorney Michael Cohen once cited non-existent court cases after mistakenly treating Google Gemini, then known as Bard, as an ordinary search engine. In another case, attorneys suing a Colombian airline were caught padding their brief with bogus cases generated by ChatGPT.
“This whole episode has been nothing short of alarming,” Judge Wilner concluded. “The initial decision to rely on AI without disclosure was unacceptable. Sending such flawed material to other professionals without warning them about its questionable origin put everyone at risk.”
The Broader Implications
The ruling highlights a growing concern among legal professionals about the ethical implications of using AI in court filings. While AI tools promise efficiency and accuracy, they also carry the risk of spreading misinformation if not properly vetted. As Judge Wilner emphasized, “No reasonably competent attorney should outsource their research or writing to an algorithm without verifying its output.”
The case serves as a stark reminder of the importance of transparency and accountability in legal practice. As AI continues to evolve, it remains crucial for practitioners to tread carefully, ensuring that technology enhances—not undermines—the integrity of the justice system.
Comments (3)
ThomasRoberts
September 14, 2025 at 8:30:40 PM EDT
The judge really lost it this time! Lawyers got caught using AI without disclosing it and were fined $31,000. 🤦♂️ This so-called AI-generated legal research is just bogus! Better to play it straight in court from now on...
HarryClark
August 12, 2025 at 5:00:59 AM EDT
This AI fiasco in court is wild! Lawyers thought they could sneak in AI-generated research and not get caught? $31k fine is a wake-up call—bet they’ll think twice before pulling that stunt again. 😬
AlbertScott
August 5, 2025 at 3:00:59 AM EDT
This is wild! Lawyers getting fined $31k for sneaking AI into their briefs? 🤯 I get it, AI’s a game-changer, but hiding it from the court? That’s just asking for trouble. Curious how often this happens undetected!