AI Not Ready as 'Co-Scientist,' Experts Say

Google recently introduced its "AI co-scientist," an AI tool intended to help scientists generate hypotheses and research plans. The company hyped it as a game-changer for uncovering new knowledge, but experts are skeptical about its real-world impact.
"This preliminary tool, while interesting, doesn't seem likely to be seriously used," said Sara Beery, a computer vision researcher at MIT, in an interview with TechCrunch. "I'm not sure that there is demand for this type of hypothesis-generation system from the scientific community."
Google is just the latest tech giant to claim that AI will revolutionize scientific research, especially in data-heavy fields like biomedicine. OpenAI CEO Sam Altman wrote in an essay earlier this year that "superintelligent" AI could "massively accelerate scientific discovery and innovation." Similarly, Anthropic CEO Dario Amodei has predicted that AI could help develop cures for most cancers.
However, many researchers feel that today's AI tools fall short of these ambitious claims. They argue that applications like Google's AI co-scientist are more about hype than substance, lacking the empirical data to back up the promises.
For instance, Google's blog post on the AI co-scientist boasted about its potential in drug repurposing for acute myeloid leukemia, a type of blood cancer that affects the bone marrow. Yet the results were so vague that "no legitimate scientist would take them seriously," according to Favia Dubyk, a pathologist at Northwest Medical Center-Tucson in Arizona.
"It could be a good starting point for researchers, but the lack of detail is worrisome and doesn't lend me to trust it," Dubyk told TechCrunch. "The lack of information provided makes it really hard to understand if this can truly be helpful."
This isn't the first time Google has faced criticism from the scientific community for promoting an AI breakthrough without providing enough detail for others to replicate the results.
Back in 2020, Google claimed that one of its AI systems, trained to detect breast tumors, outperformed human radiologists. But researchers from Harvard and Stanford published a rebuttal in Nature, arguing that the lack of detailed methods and code in Google's research "undermined its scientific value."
Scientists have also criticized Google for downplaying the limitations of its AI tools in fields like materials engineering. In 2023, the company claimed that around 40 "new materials" had been synthesized with the help of its AI system, GNoME. However, an independent analysis found that none of these materials were actually new.
"We won't truly understand the strengths and limitations of tools like Google's 'co-scientist' until they undergo rigorous, independent evaluation across diverse scientific disciplines," said Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, in an interview with TechCrunch. "AI often performs well in controlled environments but may fail when applied at scale."
Complex Processes
Developing AI tools to aid scientific discovery is tricky because it's hard to predict all the factors that might throw a wrench in the works. AI can be useful for sifting through a huge list of possibilities, but it's less clear whether it can handle the kind of creative problem-solving that leads to major breakthroughs.
"We've seen throughout history that some of the most important scientific advancements, like the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism," KhudaBukhsh said. "AI, as it stands today, may not be well-suited to replicate that."
Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools like Google's AI co-scientist are focusing on the wrong aspects of scientific work.
Sinapayen sees value in AI that can automate tedious tasks, like summarizing new academic literature or formatting grant applications. But she argues there's little demand for an AI co-scientist that generates hypotheses, a task many researchers find intellectually rewarding.
"For many scientists, myself included, generating hypotheses is the most fun part of the job," Sinapayen told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do, and we end up with proposals for products that automate the very part that we get joy from."
Beery pointed out that the toughest part of the scientific process is often designing and implementing studies to test hypotheses, something current AI systems struggle with. AI can't physically conduct experiments, and it often falters on problems where data is scarce.
"Most science isn't possible to do entirely virtually — there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab," Beery said. "One big limitation of systems like Google's AI co-scientist relative to the actual scientific process, which definitely limits its usability, is context about the lab and researcher using the system and their specific research goals, their past work, their skillset, and the resources they have access to."
AI Risks
AI's technical limitations and risks, such as its tendency to "hallucinate" or generate false information, make scientists cautious about relying on it for serious work.
KhudaBukhsh worries that AI tools could end up flooding the scientific literature with noise rather than advancing progress.
It's already happening. A recent study found that AI-generated "junk science" is flooding Google Scholar, Google's free search engine for scholarly literature.
"AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer-review process," KhudaBukhsh said. "An overwhelmed peer-review process is already a challenge in fields like computer science, where top conferences have seen an exponential rise in submissions."
Even well-designed studies could be compromised by misbehaving AI, Sinapayen warned. While she appreciates the idea of a tool that could assist with literature review and synthesis, she wouldn't trust today's AI to do that job reliably.
"Those are things that various existing tools are claiming to do, but those are not jobs that I would personally leave up to current AI," Sinapayen said. She also raised concerns about how AI systems are trained and the energy they consume. "Even if all the ethical issues were solved, current AI is just not reliable enough for me to base my work on their output one way or another."
Comments (33)
PaulWilson
August 8, 2025 at 9:00:59 AM EDT
I read about Google's AI co-scientist, and it sounds like a cool idea, but experts seem to think it’s more hype than substance. Anyone else feel like AI’s being oversold these days? 🤔
GaryLewis
August 4, 2025 at 2:48:52 AM EDT
I read about Google's AI co-scientist and it sounds cool, but experts throwing shade makes me wonder if it’s just hype. 🤔 Anyone else think AI’s still got a long way to go before it’s truly helping scientists discover new stuff?
PeterYoung
July 23, 2025 at 12:59:47 AM EDT
I find it intriguing that Google's pushing this AI co-scientist angle, but I'm not shocked experts are skeptical. Sounds like a cool concept, yet overhyped tech often fizzles out in practice. Anyone else think it’s more marketing than science? 😏
BruceGonzalez
April 24, 2025 at 11:08:16 PM EDT
Google's AI co-scientist sounds cool on paper, but in real life? Not so much. I tried using it for my research, and it's more like a fancy suggestion box than a game-changer. It's okay for brainstorming, but don't expect it to revolutionize your work. Maybe in a few years, it'll be worth the hype. 🤔
RogerPerez
April 23, 2025 at 11:00:20 PM EDT
I tried Google's 'AI co-scientist,' and right now it feels more like a 'co-guesser.' The ideas it comes up with are fun, but they're hardly going to revolutionize science. Maybe it'll be more useful in a few years, but for now? Just so-so. 🤔
IsabellaLevis
April 21, 2025 at 3:32:19 AM EDT
I tried Google's 'AI co-scientist,' but so far it seems more like a 'co-guesser.' The ideas it produces are interesting, but it's a long way from revolutionizing science. It might become more useful in a few years, but not yet. 😅