
DeepMind's AGI Safety Paper Fails to Sway Skeptics

Release date: April 10, 2025
Author: KevinBrown
Views: 33


On Wednesday, Google DeepMind published a hefty 145-page paper laying out its approach to AGI safety. AGI, or artificial general intelligence, is AI that can tackle any task a human can, and it's a divisive topic in the AI world. Some dismiss it as a fantasy, while others, including major labs like Anthropic, believe it's close at hand and could cause serious harm if we don't get safety measures in place.

DeepMind's paper, co-authored by company co-founder Shane Legg, reckons AGI could arrive by 2030 and might lead to what the authors call "severe harm." They don't define this precisely, but they invoke alarming phrases like "existential risks" that could "permanently destroy humanity."

"We're betting on seeing an Exceptional AGI before this decade's out," the authors note. "An Exceptional AGI is a system that can match the skills of the top 1% of adults on a bunch of non-physical tasks, including tricky stuff like learning new skills."

Right from the start, the paper contrasts DeepMind's handling of AGI risk with Anthropic's and OpenAI's. It argues that Anthropic places less emphasis on "robust training, monitoring, and security," while OpenAI is overly bullish on "automating" a type of AI safety research known as alignment research.

The paper also casts doubt on superintelligent AI, meaning AI that can do any job better than any human. (OpenAI recently said it is shifting its focus from AGI to superintelligence.) Without major new breakthroughs, DeepMind's authors aren't convinced that superintelligent systems will arrive anytime soon, if ever. They do, however, think it's plausible that current methods could enable "recursive AI improvement," in which AI conducts its own AI research to build ever smarter AI systems. And that, they warn, could be extremely dangerous.

At a high level, the paper calls for developing techniques to keep bad actors away from AGI, to better understand what AI systems are actually doing, and to harden the environments in which AI operates. The authors concede that many of these techniques are still nascent and face "open research problems," but they urge readers not to ignore the safety challenges that may lie ahead.

"AGI could bring amazing benefits or serious harm," the authors point out. "So, to build AGI the right way, it's crucial for the top AI developers to plan ahead and tackle those big risks."

Not everyone is convinced, though. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch she thinks AGI is too fuzzy a concept to be "rigorously evaluated scientifically." Matthew Guzdial, an AI researcher at the University of Alberta, said he doesn't believe recursive AI improvement is feasible today.

"Recursive improvement is what the intelligence singularity arguments are based on," Guzdial told TechCrunch, "but we've never seen any evidence that it actually works."

Sandra Wachter, who studies tech and regulation at Oxford, points to a more immediate worry: AI reinforcing itself with "inaccurate outputs."

"With more and more AI-generated content online and real data getting replaced, models are learning from their own outputs that are full of inaccuracies or hallucinations," she told TechCrunch. "Since chatbots are mostly used for searching and finding the truth, we're always at risk of being fed false information that's presented in a very convincing way."
As thorough as it is, DeepMind's paper probably won't end the debates about how likely AGI really is — and which AI safety issues need the most attention right now.
Related Articles
OpenAI Strikes Back: Sues Elon Musk for Alleged Efforts to Undermine AI Competitor
OpenAI has launched a fierce legal counterattack against its co-founder, Elon Musk, and his competing AI company, xAI. In a dramatic escalation of their ongoing feud, OpenAI accuses Musk of waging a "relentless" and "malicious" campaign to undermine the company he helped start. According to court d…
Law of Accelerating Returns Explained: Pathway to AGI Development
In a recent interview, Elon Musk shared his optimistic view on the timeline for the advent of Artificial General Intelligence (AGI), stating it could be as soon as "3 to 6 years". Similarly, Demis Hassabis, CEO of Google's DeepMind, expressed at The Wall Street Journal's Future of Everything Festi…
Google Believes AI Can Simplify Electrical Grid's Bureaucracy
The tech world is buzzing with concern over a potential power crisis, fueled by the skyrocketing demand from AI. Yet, amidst this worry, there's a glimmer of hope: a massive amount of new power capacity, measured in terawatts, is just waiting to be connected to the grid. The key? Cutting through the…
Comments (45)
DouglasHarris April 10, 2025 at 8:10:06 AM GMT

DeepMind's 145-page paper on AGI safety? Honestly, it's a bit too much. I skimmed through it and still couldn't grasp the full picture. It's great they're trying, but it feels like they're just throwing jargon at skeptics. Maybe simplify it next time, guys!

WilliamYoung April 10, 2025 at 8:10:06 AM GMT

DeepMind's 145-page paper on AGI safety? Honestly, it's a bit too much. I skimmed it and still couldn't grasp the full picture. It's great that they're trying, but it feels like they're throwing jargon at the skeptics. I hope they make it simpler next time!

SamuelEvans April 10, 2025 at 8:10:06 AM GMT

DeepMind's 145-page paper on AGI safety? Honestly, it's too much. Even after skimming it, I couldn't get the overall picture. The attempt is admirable, but it feels like they're hurling technical jargon at the skeptics. Please make it simpler next time!

NicholasThomas April 10, 2025 at 8:10:06 AM GMT

DeepMind's 145-page paper on AGI safety? Honestly, it's a bit much. I took a quick look and still couldn't understand the full picture. It's great that they're trying, but it seems like they're throwing jargon at the skeptics. Maybe simplify it next time, folks!

KennethJones April 10, 2025 at 8:10:06 AM GMT

DeepMind's 145-page paper on AGI safety? Honestly, it's a bit too much. I leafed through it and still couldn't grasp the complete picture. It's great that they're trying, but it seems like they're just throwing jargon at the skeptics. Maybe simplify it next time, guys!

JoseAdams April 10, 2025 at 11:27:37 AM GMT

DeepMind's AGI safety paper is super detailed, but it didn't convince everyone. I get the whole AGI thing, but it feels like they're still far from making it a reality. Maybe next time they'll have more solid proof!
