Eric Schmidt Opposes AGI Manhattan Project

In a policy paper released on Wednesday, former Google CEO Eric Schmidt, along with Scale AI CEO Alexandr Wang and Center for AI Safety Director Dan Hendrycks, advised against the U.S. launching a Manhattan Project-style initiative to develop AI systems with "superhuman" intelligence, commonly referred to as AGI.
Titled "Superintelligence Strategy," the paper warns that a U.S. effort to monopolize superintelligent AI could provoke a strong response from China, possibly in the form of a cyberattack, which might disrupt global relations.
The co-authors argue, "A Manhattan Project for AGI assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it. What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
This paper, penned by three key figures in the American AI sector, arrives shortly after a U.S. congressional commission suggested a "Manhattan Project-style" initiative to fund AGI development, drawing parallels to the U.S. atomic bomb project of the 1940s. U.S. Secretary of Energy Chris Wright recently declared the U.S. to be at "the start of a new Manhattan Project" on AI, speaking at a supercomputer site with OpenAI co-founder Greg Brockman by his side.
The Superintelligence Strategy paper challenges the recent push by several American policy and industry leaders who believe a government-backed AGI program is the best way to keep up with China.
Schmidt, Wang, and Hendrycks see the U.S. as locked in something close to a standoff over AGI, analogous to mutually assured destruction. Just as global powers do not seek a monopoly on nuclear weapons, since doing so could invite a preemptive strike from a rival, the authors argue the U.S. should be wary of racing to dominate extremely powerful AI systems.
While comparing AI to nuclear weapons might seem over the top, global leaders already view AI as a crucial military asset. The Pentagon has noted that AI is accelerating the military's kill chain.
The authors introduce the idea of Mutual Assured AI Malfunction (MAIM), where governments might take preemptive action to disable threatening AI projects rather than waiting for adversaries to weaponize AGI.
Schmidt, Wang, and Hendrycks recommend that the U.S. shift its focus from "winning the race to superintelligence" to developing methods to deter other countries from creating superintelligent AI. They suggest the government should "expand its arsenal of cyberattacks to disable threatening AI projects" controlled by other nations and restrict adversaries' access to advanced AI chips and open-source models.
The paper highlights a split in the AI policy community between the "doomers," who believe catastrophic AI outcomes are inevitable and advocate for slowing AI progress, and the "ostriches," who push for accelerating AI development and hope for the best.
The authors propose a third path: a cautious approach to AGI development that emphasizes defensive strategies.
This stance is particularly noteworthy from Schmidt, who has previously emphasized the need for the U.S. to aggressively compete with China in AI development. Just months ago, Schmidt wrote an op-ed stating that DeepSeek marked a pivotal moment in the U.S.-China AI race.
Despite the Trump administration's determination to advance America's AI development, the co-authors remind us that U.S. decisions on AGI have global implications.
As the world observes America's push into AI, Schmidt and his co-authors suggest a more defensive strategy might be the wiser choice.
Comments (28)
WillieLee
August 22, 2025 at 9:01:25 PM EDT
Eric Schmidt's take on AGI is refreshing! No need for a rushed, mega-project vibe—slow and steady wins the AI race, right? 🐢
PaulLewis
July 27, 2025 at 9:19:30 PM EDT
Eric Schmidt's take on AGI is refreshing! No need for a rushed, mega-project vibe—slow and steady wins the race, right? Curious how this will play out with AI ethics debates. 🤔
PeterRodriguez
July 23, 2025 at 12:59:47 AM EDT
Eric Schmidt's take on pausing the AGI race is refreshing! It's like saying, 'Hey, let's not sprint toward a sci-fi apocalypse.' Superhuman AI sounds cool, but I’d rather we take our time to avoid any Skynet vibes. 😅 What’s next, a global AI ethics council?
BruceSmith
April 20, 2025 at 1:32:22 PM EDT
Eric Schmidt's stance of not rushing into a Manhattan Project for AGI is pretty smart. We don't need another race to the bottom with superhuman AI. Let's take our time and do it right, or we could end up with more problems than solutions. 🤔
MichaelDavis
April 18, 2025 at 2:29:08 PM EDT
Eric Schmidt's view on AGI is right! We don't need a Manhattan Project to develop superhuman AI. We should be cautious and think through the implications. His paper with Wang and Hendrycks is required reading! 👀📚
MiaDavis
April 18, 2025 at 12:24:42 PM EDT
Eric Schmidt is right about AGI! There's no need to rush toward superhuman AI. We should think this through carefully. His paper is a must-read! 👀📚