
AI Not Ready as 'Co-Scientist', Experts Say

Published April 10, 2025 by PatrickGarcia

Google recently introduced its "AI co-scientist," an AI tool intended to help scientists generate hypotheses and research plans. The company hyped it as a game-changer for uncovering new knowledge, but experts are skeptical about its real-world impact.

"This preliminary tool, while interesting, doesn't seem likely to be seriously used," said Sara Beery, a computer vision researcher at MIT, in an interview with TechCrunch. "I'm not sure that there is demand for this type of hypothesis-generation system from the scientific community."

Google is just the latest tech giant to claim that AI will revolutionize scientific research, especially in data-heavy fields like biomedicine. OpenAI CEO Sam Altman wrote in an essay earlier this year that "superintelligent" AI could "massively accelerate scientific discovery and innovation." Similarly, Anthropic CEO Dario Amodei has predicted that AI could help develop cures for most cancers.

However, many researchers feel that today's AI tools fall short of these ambitious claims. They argue that applications like Google's AI co-scientist are more about hype than substance, lacking the empirical data to back up the promises.

For instance, Google's blog post on the AI co-scientist boasted about its potential in drug repurposing for acute myeloid leukemia, a type of blood cancer that affects the bone marrow. Yet, the results were so vague that "no legitimate scientist would take them seriously," according to Favia Dubyk, a pathologist at Northwest Medical Center-Tucson in Arizona.

"It could be a good starting point for researchers, but the lack of detail is worrisome and doesn't lend me to trust it," Dubyk told TechCrunch. "The lack of information provided makes it really hard to understand if this can truly be helpful."

This isn't the first time Google has faced criticism from the scientific community for promoting an AI breakthrough without providing enough detail for others to replicate the results.

Back in 2020, Google claimed that one of its AI systems, trained to detect breast tumors, outperformed human radiologists. But researchers from Harvard and Stanford published a rebuttal in Nature, arguing that the lack of detailed methods and code in Google's research "undermined its scientific value."

Scientists have also criticized Google for downplaying the limitations of its AI tools in fields like materials engineering. In 2023, the company claimed that around 40 "new materials" had been synthesized with the help of its AI system, GNoME. However, an independent analysis found that none of these materials were actually new.

"We won't truly understand the strengths and limitations of tools like Google's 'co-scientist' until they undergo rigorous, independent evaluation across diverse scientific disciplines," said Ashique KhudaBukhsh, an assistant professor of software engineering at Rochester Institute of Technology, in an interview with TechCrunch. "AI often performs well in controlled environments but may fail when applied at scale."

Complex Processes

Developing AI tools to aid scientific discovery is tricky because it's hard to predict all the factors that might throw a wrench in the works. AI can be useful for sifting through a huge list of possibilities, but it's less clear whether it can handle the kind of creative problem-solving that leads to major breakthroughs.

"We've seen throughout history that some of the most important scientific advancements, like the development of mRNA vaccines, were driven by human intuition and perseverance in the face of skepticism," KhudaBukhsh said. "AI, as it stands today, may not be well-suited to replicate that."

Lana Sinapayen, an AI researcher at Sony Computer Science Laboratories in Japan, believes that tools like Google's AI co-scientist are focusing on the wrong aspects of scientific work.

Sinapayen sees value in AI that can automate tedious tasks, like summarizing new academic literature or formatting grant applications. But she argues there's little demand for an AI co-scientist that generates hypotheses, a task many researchers find intellectually rewarding.

"For many scientists, myself included, generating hypotheses is the most fun part of the job," Sinapayen told TechCrunch. "Why would I want to outsource my fun to a computer, and then be left with only the hard work to do myself? In general, many generative AI researchers seem to misunderstand why humans do what they do, and we end up with proposals for products that automate the very part that we get joy from."

Beery pointed out that the toughest part of the scientific process is often designing and implementing studies to test hypotheses, something current AI systems struggle with. AI can't physically conduct experiments, and it often struggles with problems where data is scarce.

"Most science isn't possible to do entirely virtually — there is frequently a significant component of the scientific process that is physical, like collecting new data and conducting experiments in the lab," Beery said. "One big limitation of systems like Google's AI co-scientist relative to the actual scientific process, which definitely limits its usability, is context about the lab and researcher using the system and their specific research goals, their past work, their skillset, and the resources they have access to."

AI Risks

AI's technical limitations and risks, such as its tendency to "hallucinate" or generate false information, make scientists cautious about relying on it for serious work.

KhudaBukhsh worries that AI tools could end up flooding the scientific literature with noise rather than advancing progress.

It's already happening. A recent study found that AI-generated "junk science" is flooding Google Scholar, Google's free search engine for scholarly literature.

"AI-generated research, if not carefully monitored, could flood the scientific field with lower-quality or even misleading studies, overwhelming the peer-review process," KhudaBukhsh said. "An overwhelmed peer-review process is already a challenge in fields like computer science, where top conferences have seen an exponential rise in submissions."

Even well-designed studies could be compromised by misbehaving AI, Sinapayen warned. While she appreciates the idea of a tool that could assist with literature review and synthesis, she wouldn't trust today's AI to do that job reliably.

"Those are things that various existing tools are claiming to do, but those are not jobs that I would personally leave up to current AI," Sinapayen said. She also raised concerns about how AI systems are trained and the energy they consume. "Even if all the ethical issues were solved, current AI is just not reliable enough for me to base my work on their output one way or another."

Comments (30)
JamesGreen, April 11, 2025 at 4:37:06 AM GMT

I tried the AI co-scientist tool and honestly, it's not as revolutionary as Google claims. It's more like a fancy suggestion box than a real help in generating hypotheses. It's okay for brainstorming but don't expect it to do the heavy lifting. Maybe it'll get better with updates, but right now, it's just meh.

AnthonyJohnson, April 11, 2025 at 4:37:06 AM GMT

Google's AI co-scientist isn't that impressive. It's useful for generating a few ideas, but it's not the game-changer they claim. Honestly, I expected more. If they improve the tool in the future it could be useful, but for now it's no big deal.

EllaJohnson, April 11, 2025 at 4:37:06 AM GMT

Google's AI co-scientist was a letdown. I thought it would help with generating hypotheses, but in practice, not so much. It's usable for brainstorming, but I was expecting something more innovative. I'd like to hope for future updates, but right now it's underwhelming.

AlbertAllen, April 11, 2025 at 4:37:06 AM GMT

I tested Google's AI co-scientist and wasn't impressed. It's more of a suggestion box than real help with generating hypotheses. It works for brainstorming, but it isn't revolutionary like they say. Maybe it'll improve with updates, but for now, that's all it is.

FrankRodriguez, April 11, 2025 at 4:37:06 AM GMT

I tried Google's AI co-scientist tool and it really isn't the game-changer they claim. It's more like a suggestion box than a real help in generating hypotheses. It's fine for brainstorming, but don't expect it to do the heavy lifting. Maybe it'll get better with updates, but for now it's just average.

FredWhite, April 12, 2025 at 4:32:58 AM GMT

I tried the AI co-scientist tool and honestly, it's not as revolutionary as Google claims. It's more like a fancy hypothesis generator, but it doesn't really help with the actual research. Maybe it'll get better with updates, but right now, it's just okay. Keep your expectations in check!
