AI Startups Criticized for Using Peer Review as PR Tactic

April 10, 2025

A storm is brewing in the academic world over AI-generated studies submitted to this year's ICLR conference, a major artificial intelligence event. Three AI labs, Sakana, Intology, and Autoscience, stirred controversy by submitting AI-generated studies to ICLR workshops. Sakana took a transparent approach, informing ICLR leadership and obtaining consent from peer reviewers before submitting its papers. Intology and Autoscience did not, submitting their studies without prior notification, as an ICLR spokesperson confirmed to TechCrunch.

The academic community has been vocal on social media, with many criticizing Intology and Autoscience for exploiting the peer review process. Prithviraj Ammanabrolu, an assistant professor at UC San Diego, expressed his frustration on X, highlighting the lack of consent from reviewers, who volunteer their time and effort for free. He urged full disclosure to editors whenever AI is used to generate a study.

Peer review is already demanding: a recent Nature survey found that 40% of academics spend two to four hours reviewing a single study. The workload is growing, too; submissions to the NeurIPS conference rose 41% last year, to 17,491 papers.

AI-generated content in academia is not new, with estimates suggesting that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 contained synthetic text. Using peer review to benchmark and promote AI technology, however, is a more recent development. Intology boasted on X about receiving unanimously positive reviews for its AI-generated papers, even quoting workshop reviewers who praised the "clever ideas" in one of its studies. The self-promotion did not sit well with academics.

Ashwinee Panda, a postdoctoral fellow at the University of Maryland, criticized the lack of respect shown to human reviewers when AI-generated papers are submitted without their consent. Panda noted that Sakana had approached her workshop at ICLR, but she declined to participate, citing the importance of respecting reviewers' time and rights. Skepticism about the value of AI-generated papers is widespread among researchers. Sakana itself acknowledged that its AI made "embarrassing" citation errors and that only one of its three submitted papers would have met the conference's standards; in a move toward transparency, the company withdrew the paper from ICLR.

Alexander Doria, co-founder of AI startup Pleias, suggested that a "regulated company/public agency" should conduct paid, high-quality evaluations of AI-generated studies. Researchers, he argued, should be fully compensated for their time, and academia should not serve as a free resource for AI evaluations.
Comments (31)
JuanEvans July 21, 2025 at 9:25:03 PM EDT

It's wild how AI startups are turning peer review into a PR stunt! 😅 Sakana and others submitting AI-generated studies to ICLR is bold, but it feels like they're more focused on headlines than real science. Anyone else think this could backfire big time?

LawrenceMiller April 14, 2025 at 6:29:12 AM EDT

I'm conflicted about these AI-generated studies at ICLR. On one hand, it's cool that AI can generate research, but using it for PR? That feels a bit wrong. It pushes boundaries, but I'm not sure it's the best way to do it.

AlbertAllen April 13, 2025 at 10:45:33 AM EDT

I'm kinda torn about these AI startups using peer review as a PR move. On one hand, it's clever marketing, but on the other, it feels like they're gaming the system. I mean, if the studies are legit, why not just say so? Feels a bit shady to me.

JackMartin April 13, 2025 at 6:48:53 AM EDT

I have mixed feelings about the AI-generated research papers at ICLR. It's interesting that AI can generate research, but using it for PR feels a bit off. It's an attempt to push boundaries, but it may not be the best way to do it.

JonathanKing April 13, 2025 at 3:58:51 AM EDT

I'm torn about these AI startups using peer review as a PR tactic. On one hand, it's clever marketing, but on the other, it seems like they're gaming the system. If the studies are legitimate, why not just say so? It feels a bit shady to me.

BillyThomas April 12, 2025 at 2:30:32 PM EDT

I feel torn about the AI-generated studies at ICLR. On one hand, it's great that AI can produce research, but using it for PR? I don't know, it feels a bit out of place. It's pushing boundaries, but I'm not sure it's doing so in the best way.
