AI Not Ready as 'Co-Scientist', Experts Say

April 10, 2025

Google recently introduced its "AI co-scientist," an AI tool intended to help scientists generate hypotheses and research plans. The company hyped it as a game-changer for uncovering new knowledge, but experts are skeptical about its real-world impact.

"This preliminary tool, while interesting, doesn't seem likely to be seriously used," said Sara Beery, a computer vision researcher at MIT, in an interview with TechCrunch. "I'm not sure that there is demand for this type of hypothesis-generation system from the scientific community."

Google is just the latest tech giant to claim that AI will revolutionize scientific research, especially in data-heavy fields like biomedicine. OpenAI CEO Sam Altman wrote in an essay earlier this year that "superintelligent" AI could "massively accelerate scientific discovery and innovation." Similarly, Anthropic CEO Dario Amodei has predicted that AI could help develop cures for most cancers.

However, many researchers feel that today's AI tools fall short of these ambitious claims. They argue that applications like Google's AI co-scientist are more about hype than substance, lacking the empirical data to back up the promises.

For instance, Google's blog post on the AI co-scientist boasted about its potential in drug repurposing for acute myeloid leukemia, a type of blood cancer that affects the bone marrow. Yet, the results were so vague that "no legitimate scientist would take them seriously," according to Favia Dubyk, a pathologist at Northwest Medical Center-Tucson in Arizona.

"It could be a good starting point for researchers, but the lack of detail is worrisome and doesn't lend me to trust it," Dubyk told TechCrunch. "The lack of information provided makes it really hard to understand if this can truly be helpful."

This isn't the first time Google has faced criticism from the scientific community for promoting an AI breakthrough without providing enough detail for others to replicate the results.

Back in 2020, Google claimed that one of its AI systems, trained to detect breast tumors, outperformed human radiologists. But researchers from Harvard and Stanford published a rebuttal in Nature, arguing that the lack of detailed methods and code in Google's research "undermined its scientific value."

Scientists have also criticized Google for downplaying the limitations of its AI tools in fields like materials engineering. In 2023, the company claimed that around 40 "new materials" had been synthesized with the help of its AI system, GNoME. However, an independent analysis found that none of these materials were actually new.

"We won't truly understand the strengths and limitations of tools like Google's 'co-scientist' until they undergo rigorous, independent evaluation across diverse scientific disciplines," said Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, in an interview with TechCrunch. "AI often performs well in controlled environments but may fail when applied at scale."

Complex Processes

Developing AI tools to aid scientific discovery is tricky because it's hard to predict all the factors that might throw a wrench in the works. AI can be useful for sifting through a huge list of possibilities, but it's less clear whether it can handle the kind of creative problem-solving that leads to major breakthroughs.

"We've seen throughout history that some of the most important scientific advancements, like the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism," KhudaBukhsh said. "AI, as it stands today, may not be well-suited to replicate that."

Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools like Google's AI co-scientist are focusing on the wrong aspects of scientific work.

Sinapayen sees value in AI that can automate tedious tasks, like summarizing new academic literature or formatting grant applications. But she argues there's little demand for an AI co-scientist that generates hypotheses, a task many researchers find intellectually rewarding.

"For many scientists, myself included, generating hypotheses is the most fun part of the job," Sinapayen told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do, and we end up with proposals for products that automate the very part that we get joy from."

Beery pointed out that the toughest part of the scientific process is often designing and implementing studies to test hypotheses, something current AI systems struggle with. AI can't physically conduct experiments, and it often struggles with problems where data is scarce.

"Most science isn't possible to do entirely virtually — there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab," Beery said. "One big limitation of systems like Google's AI co-scientist relative to the actual scientific process, which definitely limits its usability, is context about the lab and researcher using the system and their specific research goals, their past work, their skillset, and the resources they have access to."

AI Risks

AI's technical limitations and risks, such as its tendency to "hallucinate" or generate false information, make scientists cautious about relying on it for serious work.

KhudaBukhsh worries that AI tools could end up flooding the scientific literature with noise rather than advancing progress.

It's already happening. A recent study found that AI-generated "junk science" is flooding Google Scholar, Google's free search engine for scholarly literature.

"AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer-review process," KhudaBukhsh said. "An overwhelmed peer-review process is already a challenge in fields like computer science, where top conferences have seen an exponential rise in submissions."

Even well-designed studies could be compromised by misbehaving AI, Sinapayen warned. While she appreciates the idea of a tool that could assist with literature review and synthesis, she wouldn't trust today's AI to do that job reliably.

"Those are things that various existing tools are claiming to do, but those are not jobs that I would personally leave up to current AI," Sinapayen said. She also raised concerns about how AI systems are trained and the energy they consume. "Even if all the ethical issues were solved, current AI is just not reliable enough for me to base my work on their output one way or another."

Comments (35)
PatrickLewis, August 24, 2025 at 5:01:17 AM EDT

I was hyped about Google's AI co-scientist, but experts raining on the parade makes sense. Sounds like it’s more flash than substance right now. 🤔 Still, curious to see where this goes!

GeorgeWilliams, August 17, 2025 at 9:00:59 AM EDT

I was super excited about Google's AI co-scientist at first, but now I’m kinda bummed experts think it’s overhyped. 😕 Sounds like it’s more of a fancy assistant than a real game-changer. Anyone else feel it’s just not ready to shake up science yet?

PaulWilson, August 8, 2025 at 9:00:59 AM EDT

I read about Google's AI co-scientist, and it sounds like a cool idea, but experts seem to think it’s more hype than substance. Anyone else feel like AI’s being oversold these days? 🤔

GaryLewis, August 4, 2025 at 2:48:52 AM EDT

I read about Google's AI co-scientist and it sounds cool, but experts throwing shade makes me wonder if it’s just hype. 🤔 Anyone else think AI’s still got a long way to go before it’s truly helping scientists discover new stuff?

PeterYoung, July 23, 2025 at 12:59:47 AM EDT

I find it intriguing that Google's pushing this AI co-scientist angle, but I'm not shocked experts are skeptical. Sounds like a cool concept, yet overhyped tech often fizzles out in practice. Anyone else think it’s more marketing than science? 😏

BruceGonzalez, April 24, 2025 at 11:08:16 PM EDT

Google's AI co-scientist sounds cool on paper, but in real life? Not so much. I tried using it for my research, and it's more like a fancy suggestion box than a game-changer. It's okay for brainstorming, but don't expect it to revolutionize your work. Maybe in a few years, it'll be worth the hype. 🤔
