Anthropic's Lawyer Apologizes After Claude Hallucinates Legal Citation

A lawyer representing Anthropic admitted to using an erroneous citation generated by the company's Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made Thursday in a Northern California court.
Anthropic said in the filing, first reported by Bloomberg, that Claude hallucinated the citation with an inaccurate title and incorrect authors. The company's lawyers admitted that their routine citation checks failed to catch the error, along with several other mistakes that were likewise caused by Claude's hallucinations.
Apologizing for the oversight, Anthropic described the incident as "an honest citation mistake and not a fabrication of authority."
Earlier this week, attorneys representing Universal Music Group and other music publishers accused Anthropic's expert witness, company employee Olivia Chen, of using Claude to cite fabricated articles in her testimony. Federal Judge Susan van Keulen then asked Anthropic to respond to those claims.
The music publishers' lawsuit is one of several ongoing disputes between copyright holders and tech companies over the alleged misuse of copyrighted material to build generative AI tools.
This is the latest instance of AI use in legal proceedings backfiring. Earlier this week, a California judge criticized two law firms for submitting "phony AI-generated research" in his court. In January, an Australian lawyer was caught using ChatGPT to draft court documents; the chatbot produced faulty citations.
Despite these incidents, startups that automate legal work continue to attract significant funding. Harvey, which uses generative AI models to assist lawyers, is reportedly in talks to raise over $250 million at a valuation of more than $5 billion.