DeepMind's AGI Safety Paper Fails to Sway Skeptics

April 10, 2025

On Wednesday, Google DeepMind published a hefty 145-page paper laying out its approach to AGI safety. AGI, or artificial general intelligence, is roughly defined as AI that can tackle any task a human can, and it's a contentious topic in the AI world. Skeptics dismiss it as a fantasy, while others, including major labs like Anthropic, argue it's just around the corner and could cause serious harm if the right safeguards aren't in place.

DeepMind's paper, co-authored by company co-founder Shane Legg, predicts that AGI could arrive by 2030 and may result in what the authors call "severe harm." The paper doesn't define this concretely, but it invokes alarming phrases like "existential risks" that could "permanently destroy humanity."

"We're betting on seeing an Exceptional AGI before this decade's out," the authors note. "An Exceptional AGI is a system that can match the skills of the top 1% of adults on a bunch of non-physical tasks, including tricky stuff like learning new skills."

Right from the start, the paper contrasts DeepMind's handling of AGI risk with Anthropic's and OpenAI's. It argues that Anthropic places less emphasis on "robust training, monitoring, and security," while OpenAI is overly focused on "automating" a type of AI safety research known as alignment research.

The paper also casts doubt on the near-term viability of superintelligent AI, meaning AI that can perform jobs better than any human. (OpenAI recently said it's shifting its focus from AGI to superintelligence.) Absent major new breakthroughs, DeepMind's authors aren't convinced that superintelligent systems will emerge anytime soon, if ever. They do think it's plausible, though, that current methods could enable "recursive AI improvement," in which AI conducts its own AI research to build ever more capable AI systems. That, they warn, could be extremely dangerous.

At a high level, the paper calls for developing techniques to block bad actors' access to AGI, improving our understanding of what AI systems are actually doing, and hardening the environments in which AI operates. The authors concede that many of these techniques are still in their early stages and carry "open research problems," but they urge the field not to ignore the safety challenges that may be coming.

"AGI could bring amazing benefits or serious harm," the authors point out. "So, to build AGI the right way, it's crucial for the top AI developers to plan ahead and tackle those big risks."

Not everyone is on board with the paper's premises, though. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch she thinks AGI is too fuzzy a concept to be "rigorously evaluated scientifically." Another AI researcher, Matthew Guzdial of the University of Alberta, said he isn't convinced recursive AI improvement is achievable today.

"Recursive improvement is what the intelligence singularity arguments are based on," Guzdial told TechCrunch, "but we've never seen any evidence that it actually works."

Sandra Wachter, who studies technology and regulation at Oxford, points to a more immediate concern: AI reinforcing itself with "inaccurate outputs."

"With more and more AI-generated content online and real data getting replaced, models are learning from their own outputs that are full of inaccuracies or hallucinations," she told TechCrunch. "Since chatbots are mostly used for searching and finding the truth, we're always at risk of being fed false information that's presented in a very convincing way."
As thorough as it is, DeepMind's paper probably won't end the debates about how likely AGI really is — and which AI safety issues need the most attention right now.
Comments (46)
ArthurYoung August 9, 2025 at 7:00:59 AM EDT

DeepMind's 145-page AGI safety paper sounds like a beast! I’m curious if it’s more hype than substance—anyone read it yet? 🤔

GregoryRodriguez April 22, 2025 at 11:58:08 AM EDT

DeepMind's AGI safety paper? Honestly, it didn't convince me at all 🤔. 145 pages and I'm still skeptical. AGI sounds like sci-fi to me, but hey, if they can make it safe, I'm all for it! Maybe next time they'll have something more solid.

GeorgeJones April 20, 2025 at 8:35:58 AM EDT

DeepMind's AGI safety paper? Honestly, it didn't convince me at all 🤔. I read all 145 pages and I'm still skeptical. AGI sounds like sci-fi to me, but if they can make it safe, I'm all for it! I'll be hoping for something more persuasive next time.

CharlesLee April 18, 2025 at 11:24:35 AM EDT

DeepMind's AGI safety paper? Honestly, it didn't convince me at all 🤔. 145 pages and I'm still skeptical. AGI seems like science fiction to me, but hey, if they can make it safe, I'm in favor! Maybe next time they'll have something more solid.

CarlTaylor April 18, 2025 at 2:26:53 AM EDT

I tried to read DeepMind's AGI safety paper, but it's so dense! 😵‍💫 It feels like they're trying to convince us that AGI is real, but I'm still not convinced. Maybe if it were more digestible, I'd come around. Still, kudos for the effort!

LawrenceJones April 16, 2025 at 10:57:12 AM EDT

I tried to read DeepMind's AGI safety paper, but it's so dense! 😵‍💫 It seems like they want to convince us that AGI is real, but I'm still not buying it. Maybe if they made it more digestible, I'd be more convinced. Still, congrats on the effort!
