DeepMind's AGI Safety Paper Fails to Win Over Skeptics
April 10, 2025
KevinBrown

On Wednesday, Google DeepMind released a 145-page paper laying out its approach to AGI safety. AGI, or artificial general intelligence, is AI capable of any task a human can do, and it is a hot-button topic in the AI world. Skeptics dismiss it as little more than a fantasy, while major labs such as Anthropic argue it is just around the corner and could cause serious trouble if safeguards are not in place.

DeepMind's paper, written with input from co-founder Shane Legg, argues that AGI could arrive by 2030 and could lead to what the authors call "severe harm." They never spell this out precisely, but they circle around alarming phrases such as "existential risks" that could permanently destroy humanity. "We expect to see an exceptional AGI before the end of this decade," the authors note. "An exceptional AGI is a system that can match the skills of the top 1% of adults across a wide range of non-physical tasks, including tricky things like learning new skills."

From the outset, the paper contrasts DeepMind's handling of AGI risk with the approaches of Anthropic and OpenAI. It argues that Anthropic places less emphasis on "robust training, monitoring, and security," while OpenAI is focused on "automating" a type of AI safety research known as alignment research.

The paper also throws some shade at the idea of superintelligent AI, meaning AI that outperforms any human at any job. (OpenAI recently said it is shifting its focus from AGI to superintelligence.) Without some major new breakthroughs, DeepMind's authors don't buy that superintelligent systems are coming anytime soon, or perhaps ever. But they do believe current approaches could enable "recursive AI improvement," in which an AI conducts its own AI research to build even smarter AI systems, and they warn that this could be extremely dangerous.

Overall, the paper recommends developing techniques to keep bad actors away from AGI, to better understand how AI systems work, and to harden the environments in which AI operates. The authors acknowledge that many of these ideas are still at an early stage and face "open research problems," but they urge readers not to ignore the safety problems that may be emerging. "AGI could bring incredible benefits or severe harms," the authors note. "So to build AGI the right way, it is critical for top AI developers to plan ahead and address these major risks."

Not everyone is sold on the paper's premises, though. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she considers AGI too vague a concept to be "rigorously evaluated scientifically." Another AI researcher, Matthew Guzdial of the University of Alberta, said he doesn't believe recursive AI improvement is realistic. "Recursive improvement is the basis of the intelligence singularity arguments," Guzdial told TechCrunch, "but we've never seen any evidence that it actually works."

Sandra Wachter, who researches technology and regulation at the University of Oxford, pointed to a more pressing concern: AI reinforcing itself with "inaccurate outputs." "With more and more AI-generated content replacing authentic data online, models are learning from their own outputs, which are riddled with inaccuracies or hallucinations," she told TechCrunch. "Since chatbots are mainly used for search and truth-seeking, we constantly run the risk of being fed falsehoods presented in a very convincing way."

Still, DeepMind's paper is unlikely to settle the debates over how realistic AGI truly is, or which AI safety problems most urgently need attention right now.
Related Articles
OpenAI Strikes Back: Sues Elon Musk for Alleged Efforts to Undermine AI Competitor
OpenAI has launched a fierce legal counterattack against its co-founder, Elon Musk, and his competing AI company, xAI. In a dramatic escalation of their ongoing feud, OpenAI accuses Musk of waging a "relentless" and "malicious" campaign to undermine the company he helped start.
According to court d
Law of Accelerating Returns Explained: Pathway to AGI Development
In a recent interview, Elon Musk shared his optimistic view on the timeline for the advent of Artificial General Intelligence (AGI), stating it could be as soon as *“3 to 6 years”*. Similarly, Demis Hassabis, CEO of Google's DeepMind, expressed at The Wall Street Journal’s Future of Everything Festi
Google Believes AI Can Simplify Electrical Grid's Bureaucracy
The tech world is buzzing with concern over a potential power crisis, fueled by the skyrocketing demand from AI. Yet, amidst this worry, there's a glimmer of hope: a massive amount of new power capacity, measured in terawatts, is just waiting to be connected to the grid. The key? Cutting through the
Comments (45)
DouglasHarris
April 10, 2025 08:10:06
DeepMind's 145-page paper on AGI safety? Honestly, it's a bit too much. I skimmed through it and still couldn't grasp the full picture. It's great they're trying, but it feels like they're just throwing jargon at skeptics. Maybe simplify it next time, guys!
0
JoseAdams
April 10, 2025 11:27:37
DeepMind's AGI safety paper is super detailed, but it didn't convince everyone. I get the whole AGI thing, but it feels like they're still far from making it a reality. Maybe next time they'll have more solid proof!
0