Chatbots Distort News, Impacting Even Paid Users

Why does this matter? If chatbots can't even retrieve news as well as Google, it's hard to trust them to interpret and cite that news accurately. That makes the content of their responses far less reliable, even when those responses include links.
Confidently Giving Wrong Answers
The researchers noted that the chatbots returned wrong answers with "alarming confidence," rarely qualifying their results or admitting to knowledge gaps. ChatGPT, for instance, "never declined to provide an answer," despite 134 out of its 200 responses being incorrect. Out of all eight tools, Copilot was the only one that declined to answer more queries than it responded to.
"All of the tools were consistently more likely to provide an incorrect answer than to acknowledge limitations," the report clarified.
Paid Tiers Aren't More Reliable
Even premium models like Grok-3 Search and Perplexity Pro, while more accurate than their free counterparts, still confidently gave wrong answers. This raises questions about the value of their often high subscription costs.
"This contradiction stems primarily from [the bots'] tendency to provide definitive, but wrong, answers rather than declining to answer the question directly," the report explains. "The fundamental concern extends beyond the chatbots' factual errors to their authoritative conversational tone, which can make it difficult for users to distinguish between accurate and inaccurate information."
"This unearned confidence presents users with a potentially dangerous illusion of reliability and accuracy," the report added.
Fabricating Links
AI models are notorious for hallucinating, but the Tow study found that Gemini and Grok 3 did so most frequently—more than half the time. "Even when Grok correctly identified an article, it often linked to a fabricated URL," the report noted, meaning that Grok could find the right title and publisher but then manufacture the actual article link.
An analysis of Comscore traffic data by Generative AI in the Newsroom, a Northwestern University initiative, confirmed this pattern. Their study from July to November 2024 showed that ChatGPT generated 205 broken URLs in its responses. While publications do occasionally take down stories, which can result in 404 errors, researchers noted that the lack of archival data suggested "the model has hallucinated plausible-looking links to authoritative news outlets when responding to user queries."
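That distinction between a story that was taken down and a link that never existed can be checked programmatically. Below is a minimal Python sketch, assuming the requests library: it queries the Wayback Machine's public availability endpoint to ask whether a dead link was ever archived. The URL in the example is hypothetical.

```python
# A sketch of the archival check the researchers describe: the Wayback
# Machine's public availability API reports whether a URL was ever
# snapshotted. A link that 404s today but has snapshots was likely a
# real story that was taken down; a link with no archival record at all
# is more consistent with a fabricated URL.
import requests

def was_ever_archived(url: str) -> bool:
    """Return True if the Wayback Machine holds any snapshot of `url`."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=10,
    )
    resp.raise_for_status()
    return "closest" in resp.json().get("archived_snapshots", {})

print(was_ever_archived("https://example.com/news/some-story"))  # hypothetical URL
```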
Given the growing adoption of AI search engines—Google fell below 90% market share in Q4 of 2024 for the first time in 10 years—these findings are troubling. Google also rolled out AI Mode to certain users last week, replacing its standard search results with a chatbot despite the widespread unpopularity of its AI Overviews.
With around 400 million users flocking to ChatGPT weekly, the unreliability and distortion of its citations make it and other popular AI tools potential engines of misinformation, even as they pull content from rigorously fact-checked news sites.
The Tow report concluded that AI tools mis-crediting sources or incorrectly representing their work could backfire on the publishers' reputations.
Ignoring Blocked Crawlers
The situation worsens for publishers: the Tow report found that several chatbots could still retrieve articles from publishers that had blocked their crawlers via the Robots Exclusion Protocol (REP), better known as robots.txt. Paradoxically, the same chatbots often failed to correctly answer queries about publishers that do allow them to crawl their content.
"Perplexity Pro was the worst offender in this regard, correctly identifying nearly a third of the ninety excerpts from articles it should not have had access to," the report states.
This suggests that not only are AI companies still ignoring REP—as Perplexity and others were caught doing last year—but that publishers in any kind of licensing agreement with them aren't guaranteed to be correctly cited.
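For context, REP blocking is a plain-text convention: a publisher lists the crawler user agents it wants excluded in a robots.txt file at the site root. A typical configuration aimed at AI crawlers might look like the sketch below; the user-agent tokens shown are ones the companies have publicly documented, though the Tow findings suggest such directives are not always honored. The publisher domain is hypothetical.

```
# robots.txt served at https://example-publisher.com/robots.txt (hypothetical site)
# Exclude documented AI crawlers from the whole site.

User-agent: GPTBot            # OpenAI's crawler
Disallow: /

User-agent: PerplexityBot     # Perplexity's crawler
Disallow: /

User-agent: Google-Extended   # opts content out of Google's AI products
Disallow: /

User-agent: CCBot             # Common Crawl, widely used for AI training data
Disallow: /

User-agent: *                 # everything else stays allowed
Allow: /
```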
Columbia's report is just one symptom of a larger problem. The Generative AI in the Newsroom report also found that chatbots rarely direct traffic to the news sites they extract information from, a pattern other reports confirm. From July to November 2024, news sites received only 7% of Perplexity's referral traffic and just 3% of ChatGPT's. AI tools instead tended to favor educational resources such as Scribd.com, Coursera, and university-affiliated sites, sending as much as 30% of their traffic that way.
The bottom line: Original reporting remains a more reliable news source than what AI tools regurgitate. Always verify links before accepting what they tell you as fact, and use your critical thinking and media literacy skills to evaluate responses.
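That verification can be as simple as one request. A minimal sketch in the same hedged spirit, again assuming the requests library and a hypothetical URL:

```python
# Quick check of a chatbot-cited link: fetch headers only and treat
# anything other than a successful response as suspect. Some servers
# reject HEAD requests, so falling back to GET would make this more robust.
import requests

def link_resolves(url: str) -> bool:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        return resp.ok  # True for any status code below 400
    except requests.RequestException:
        return False

print(link_resolves("https://example.com/cited-article"))  # hypothetical URL
```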
Comments (51)
NoahGreen
August 22, 2025 at 3:01:25 PM EDT
Paying for premium AI chatbots and still getting fake news? That's a rip-off! 😡 This study just proves we can't trust these bots to get the facts straight.
GaryWalker
April 23, 2025 at 10:12:13 PM EDT
I bought the premium version, but the news accuracy fell completely short of expectations. I can't believe it confidently serves up wrong information. 😓 Maybe human-written news really is more trustworthy after all.
DonaldSanchez
April 20, 2025 at 11:53:48 AM EDT
I bought the premium version and the news accuracy is terrible. I had to laugh watching it confidently spit out wrong information. 😂 What a waste of money. Human-written news seems like the better bet.
RalphHill
April 16, 2025 at 5:13:53 AM EDT
I paid for the premium version thinking I'd get accurate news, but what a mistake! It hands out wrong information with so much confidence it sounds like a preacher at the pulpit. 😅 Not worth the money. Maybe it's better to stick with news written by humans.
EdwardTaylor
April 16, 2025 at 4:05:47 AM EDT
I paid for a premium chatbot and the news it gives is completely wrong! I can't believe it serves up incorrect information with total confidence. Time to go back to human-written news. 😅
GregoryAdams
April 15, 2025 at 3:18:40 AM EDT
I paid for the premium version and the news information was wrong! Confidently handing out false information is really disappointing. We should go back to human-written news. 😓