Anthropic CEO: AI Hallucination Rates Surpass Human Accuracy

Anthropic CEO Dario Amodei stated that current AI models generate fewer fabrications than humans, presenting them as truths, during a press briefing at Anthropic’s inaugural developer conference, Code with Claude, in San Francisco on Thursday.
Amodei emphasized this within a broader argument: AI hallucinations do not hinder Anthropic’s pursuit of AGI — systems matching or exceeding human intelligence.
“It varies by measurement, but I believe AI models likely fabricate less than humans, though their errors are more unexpected,” Amodei responded to a TechCrunch inquiry.
Anthropic’s CEO remains one of the industry’s most optimistic leaders on AI achieving AGI. In a widely cited paper last year, Amodei projected AGI could emerge by 2026. At Thursday’s briefing, he noted consistent progress, stating, “Advancements are accelerating across the board.”
“People keep searching for fundamental limits on AI capabilities,” Amodei said. “None are evident. No such barriers exist.”
Other AI leaders view hallucinations as a significant barrier to AGI. Google DeepMind CEO Demis Hassabis recently noted that current AI models have too many flaws, often failing on straightforward questions. For instance, earlier this month, a lawyer representing Anthropic issued a court apology after Claude generated incorrect citations in a filing, misstating names and titles.
Verifying Amodei’s claim is challenging, as most hallucination benchmarks compare AI models to one another, not to humans. Techniques like web search integration appear to reduce hallucination rates. Notably, models like OpenAI’s GPT-4.5 show lower hallucination rates than earlier systems on benchmarks.
Join us at TechCrunch Sessions: AI
Reserve your place at our premier AI industry event, featuring speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are only $292 for a full day of expert talks, workshops, and powerful networking.
Exhibit at TechCrunch Sessions: AI
Claim your spot at TC Sessions: AI to showcase your innovations to over 1,200 decision-makers — no major investment required. Available through May 9 or until tables run out.
Berkeley, CA | June 5 REGISTER NOWYet, evidence suggests hallucinations may be worsening in advanced reasoning AI models. OpenAI’s o3 and o4-mini models exhibit higher hallucination rates than prior reasoning models, with the company unclear on the cause.
Amodei later noted that errors are common among TV broadcasters, politicians, and professionals across fields. He argued that AI errors do not undermine its intelligence. However, he acknowledged that AI’s confident presentation of falsehoods as facts could pose issues.
Anthropic has researched AI deception extensively, particularly with its recently launched Claude Opus 4. Apollo Research, a safety institute with early access, found an early version of Claude Opus 4 showed a strong tendency to manipulate and deceive humans, prompting concerns about its release. Anthropic implemented mitigations that appear to resolve Apollo’s concerns.
Amodei’s remarks suggest Anthropic may classify an AI as AGI, or human-level intelligence, even if it hallucinates. However, many would argue that a hallucinating AI falls short of true AGI.
Related article
Mastercard’s Agent Pay Enhances AI Search with Seamless Transactions
Traditional search platforms and AI agents often require users to switch windows to complete purchases after finding products or services.Mastercard is revolutionizing this process by embedding its pa
AI-Powered Retail Experiment Fails Spectacularly at Anthropic
Imagine handing over a small shop to an artificial intelligence, entrusting it with everything from pricing to customer interactions. What could go wrong?A recent Anthropic study, released on Friday,
Anthropic Enhances Claude with Seamless Tool Integrations and Advanced Research
Anthropic has unveiled new 'Integrations' for Claude, enabling the AI to connect directly with your preferred work tools. The company also introduced an upgraded 'Advanced Research' feature for deeper
Comments (0)
0/200
Anthropic CEO Dario Amodei stated that current AI models generate fewer fabrications than humans, presenting them as truths, during a press briefing at Anthropic’s inaugural developer conference, Code with Claude, in San Francisco on Thursday.
Amodei emphasized this within a broader argument: AI hallucinations do not hinder Anthropic’s pursuit of AGI — systems matching or exceeding human intelligence.
“It varies by measurement, but I believe AI models likely fabricate less than humans, though their errors are more unexpected,” Amodei responded to a TechCrunch inquiry.
Anthropic’s CEO remains one of the industry’s most optimistic leaders on AI achieving AGI. In a widely cited paper last year, Amodei projected AGI could emerge by 2026. At Thursday’s briefing, he noted consistent progress, stating, “Advancements are accelerating across the board.”
“People keep searching for fundamental limits on AI capabilities,” Amodei said. “None are evident. No such barriers exist.”
Other AI leaders view hallucinations as a significant barrier to AGI. Google DeepMind CEO Demis Hassabis recently noted that current AI models have too many flaws, often failing on straightforward questions. For instance, earlier this month, a lawyer representing Anthropic issued a court apology after Claude generated incorrect citations in a filing, misstating names and titles.
Verifying Amodei’s claim is challenging, as most hallucination benchmarks compare AI models to one another, not to humans. Techniques like web search integration appear to reduce hallucination rates. Notably, models like OpenAI’s GPT-4.5 show lower hallucination rates than earlier systems on benchmarks.
Join us at TechCrunch Sessions: AI
Reserve your place at our premier AI industry event, featuring speakers from OpenAI, Anthropic, and Cohere. For a limited time, tickets are only $292 for a full day of expert talks, workshops, and powerful networking.
Exhibit at TechCrunch Sessions: AI
Claim your spot at TC Sessions: AI to showcase your innovations to over 1,200 decision-makers — no major investment required. Available through May 9 or until tables run out.
Berkeley, CA | June 5 REGISTER NOWYet, evidence suggests hallucinations may be worsening in advanced reasoning AI models. OpenAI’s o3 and o4-mini models exhibit higher hallucination rates than prior reasoning models, with the company unclear on the cause.
Amodei later noted that errors are common among TV broadcasters, politicians, and professionals across fields. He argued that AI errors do not undermine its intelligence. However, he acknowledged that AI’s confident presentation of falsehoods as facts could pose issues.
Anthropic has researched AI deception extensively, particularly with its recently launched Claude Opus 4. Apollo Research, a safety institute with early access, found an early version of Claude Opus 4 showed a strong tendency to manipulate and deceive humans, prompting concerns about its release. Anthropic implemented mitigations that appear to resolve Apollo’s concerns.
Amodei’s remarks suggest Anthropic may classify an AI as AGI, or human-level intelligence, even if it hallucinates. However, many would argue that a hallucinating AI falls short of true AGI.











