Eric Schmidt Opposes AGI Manhattan Project

In a policy paper released on Wednesday, former Google CEO Eric Schmidt, along with Scale AI CEO Alexandr Wang and Center for AI Safety Director Dan Hendrycks, advised against the U.S. launching a Manhattan Project-style initiative to develop AI systems with "superhuman" intelligence, commonly referred to as AGI.
Titled "Superintelligence Strategy," the paper warns that an aggressive U.S. bid to monopolize superintelligent AI could provoke a forceful response from China, possibly in the form of a cyberattack, ultimately destabilizing international relations.
The co-authors argue, "A Manhattan Project for AGI assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it. What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
This paper, penned by three key figures in the American AI sector, arrives shortly after a U.S. congressional commission suggested a "Manhattan Project-style" initiative to fund AGI development, drawing parallels to the U.S. atomic bomb project of the 1940s. U.S. Secretary of Energy Chris Wright recently declared the U.S. to be at "the start of a new Manhattan Project" on AI, speaking at a supercomputer site with OpenAI co-founder Greg Brockman by his side.
The Superintelligence Strategy paper challenges the recent push by several American policy and industry leaders who believe a government-backed AGI program is the best way to keep up with China.
Schmidt, Wang, and Hendrycks view the race for AGI as a standoff akin to mutually assured destruction. Just as nations refrain from seeking a monopoly on nuclear weapons for fear of inviting a preemptive strike, the authors argue the U.S. should be wary of rushing to dominate highly advanced AI systems.
While comparing AI to nuclear weapons might seem over the top, global leaders already view AI as a crucial military asset. The Pentagon has noted that AI is accelerating the military's kill chain.
The authors introduce the idea of Mutual Assured AI Malfunction (MAIM), where governments might take preemptive action to disable threatening AI projects rather than waiting for adversaries to weaponize AGI.
Schmidt, Wang, and Hendrycks recommend that the U.S. shift its focus from "winning the race to superintelligence" to developing methods to deter other countries from creating superintelligent AI. They suggest the government should "expand its arsenal of cyberattacks to disable threatening AI projects" controlled by other nations and restrict adversaries' access to advanced AI chips and open-source models.
The paper highlights a split in the AI policy community between the "doomers," who believe catastrophic AI outcomes are inevitable and advocate for slowing AI progress, and the "ostriches," who push for accelerating AI development and hope for the best.
The authors propose a third path: a cautious approach to AGI development that emphasizes defensive strategies.
This stance is particularly noteworthy from Schmidt, who has previously emphasized the need for the U.S. to aggressively compete with China in AI development. Just months ago, Schmidt wrote an op-ed stating that DeepSeek marked a pivotal moment in the U.S.-China AI race.
Despite the Trump administration's determination to advance America's AI development, the co-authors remind us that U.S. decisions on AGI have global implications.
As the world observes America's push into AI, Schmidt and his co-authors suggest a more defensive strategy might be the wiser choice.
Comments (20)
LawrenceLee
April 10, 2025 at 12:00:00 AM GMT
Eric Schmidt's stance on not rushing into a Manhattan Project for AGI makes sense. We need to be careful with superhuman AI. It's a bit scary to think about, but I agree we should take our time. Let's not mess this up!
DouglasRodriguez
April 10, 2025 at 12:00:00 AM GMT
I can understand Eric Schmidt's position opposing a Manhattan Project for AGI. We should be cautious with superhuman AI. It's a little frightening, but I think we should take our time. Let's not ruin this!
HenryJackson
April 10, 2025 at 12:00:00 AM GMT
Eric Schmidt's opposition to a Manhattan Project for AGI makes sense to me. We need to be careful with superhuman AI. It's a bit scary, but I think we should proceed slowly. Let's not ruin this!
NicholasThomas
April 10, 2025 at 12:00:00 AM GMT
Eric Schmidt's position against a Manhattan Project for AGI makes sense. We need to be careful with superhuman AI. It's a bit scary to think about, but I agree we should take our time. Let's not mess this up!
KennethRoberts
April 10, 2025 at 12:00:00 AM GMT
Eric Schmidt's opposition to a Manhattan Project for AGI is understandable. We should be careful with superhuman AI. Thinking about it is a little scary, but I agree we should take our time. Don't ruin this!
BruceWilson
April 16, 2025 at 12:00:00 AM GMT
Eric Schmidt's take on not rushing into an AGI Manhattan Project makes a lot of sense. We don't need superhuman AI right now, do we? But I wish the paper had more concrete suggestions on what to do instead. Still, food for thought! 🤔🚀