AI Startups Criticized for Using Peer Review as PR Tactic


April 10, 2025


A storm is brewing in the academic world over the use of AI-generated studies at this year's ICLR conference, a major event focused on artificial intelligence. Three AI labs, Sakana, Intology, and Autoscience, have stirred controversy by submitting AI-generated studies to ICLR workshops. Sakana took a transparent approach, informing ICLR leaders and obtaining consent from peer reviewers before submitting their AI-generated papers. However, Intology and Autoscience did not follow suit, submitting their studies without prior notification, as confirmed by an ICLR spokesperson to TechCrunch.

The academic community has been vocal on social media, with many criticizing Intology and Autoscience for exploiting the peer review process. Prithviraj Ammanabrolu, an assistant professor at UC San Diego, expressed his frustration on X, highlighting the lack of consent from reviewers who provide their time and effort for free. He urged full disclosure to editors about the use of AI in generating these studies.

Peer review is already a demanding task, with a recent Nature survey indicating that 40% of academics spend two to four hours reviewing a single study. The workload is increasing, as evidenced by the 41% rise in submissions to the NeurIPS conference last year, totaling 17,491 papers.

The issue of AI-generated content in academia is not new, with estimates suggesting that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 contained synthetic text. However, using peer review as a means to benchmark and promote AI technology is a more recent development. Intology boasted on X about receiving unanimously positive reviews for their AI-generated papers, even quoting workshop reviewers praising the "clever ideas" in one of their studies. This self-promotion did not sit well with academics.
Ashwinee Panda, a postdoctoral fellow at the University of Maryland, criticized the lack of respect shown to human reviewers by submitting AI-generated papers without their consent. Panda noted that Sakana had approached her workshop at ICLR, but she declined to participate, emphasizing the importance of respecting reviewers' time and rights. Skepticism about the value of AI-generated papers is widespread among researchers. Sakana acknowledged that their AI made "embarrassing" citation errors and that only one of their three submitted papers would have met the conference's standards. In a move towards transparency, Sakana withdrew their paper from ICLR. Alexander Doria, co-founder of AI startup Pleias, suggested the need for a "regulated company/public agency" to conduct paid, high-quality evaluations of AI-generated studies. He argued that researchers should be fully compensated for their time and that academia should not be used as a free resource for AI evaluations.
Related articles

- Authentic Focusing System Developed for Affordable Augmented Reality
- How we're using AI to help cities tackle extreme heat
- 'Degraded' Synthetic Faces May Enhance Facial Recognition Technology
Comments (30)
AlbertAllen April 13, 2025 at 12:00:00 AM GMT

I'm kinda torn about these AI startups using peer review as a PR move. On one hand, it's clever marketing, but on the other, it feels like they're gaming the system. I mean, if the studies are legit, why not just say so? Feels a bit shady to me.

GeorgeMartinez April 11, 2025 at 12:00:00 AM GMT

AI startups using peer review for PR is a double-edged thing. It's clever as marketing, but it also feels like they're exploiting the system. If the research is really legitimate, they could just say so. It seems a bit suspicious to me.

JerryMoore April 11, 2025 at 12:00:00 AM GMT

There are arguments on both sides about AI startups using peer review as a PR tool. It's smart marketing, but it also feels like they're taking advantage of the system. If the research is legitimate, they could just say so, which makes it seem a bit fishy.

JonathanKing April 13, 2025 at 12:00:00 AM GMT

I'm divided on these AI startups using peer review as a public relations tactic. On one hand, it's clever marketing, but on the other, it seems like they're gaming the system. If the studies are legitimate, why not just say so? It strikes me as a bit shady.

DouglasAnderson April 11, 2025 at 12:00:00 AM GMT

I'm unsure about these AI startups using peer review as a PR tactic. On one hand it's clever marketing, but on the other it feels like they're tricking the system. If the studies are legitimate, why don't they just say so? Seems a bit dirty to me.

RogerKing April 10, 2025 at 12:00:00 AM GMT

I'm torn about this whole AI-generated studies thing at ICLR. On one hand, it's cool that AI can produce research, but on the other, using it for PR? That feels a bit off. I guess it's pushing boundaries, but maybe not in the best way. What do you think?
