Eric Schmidt Opposes AGI Manhattan Project

In a policy paper released on Wednesday, former Google CEO Eric Schmidt, along with Scale AI CEO Alexandr Wang and Center for AI Safety Director Dan Hendrycks, advised against the U.S. launching a Manhattan Project-style initiative to develop AI systems with "superhuman" intelligence, commonly referred to as AGI.
Titled "Superintelligence Strategy," the paper warns that a U.S. effort to monopolize superintelligent AI could provoke a strong response from China, possibly in the form of a cyberattack, which might disrupt global relations.
The co-authors argue, "A Manhattan Project for AGI assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it. What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."
This paper, penned by three key figures in the American AI sector, arrives shortly after a U.S. congressional commission suggested a "Manhattan Project-style" initiative to fund AGI development, drawing parallels to the U.S. atomic bomb project of the 1940s. U.S. Secretary of Energy Chris Wright recently declared the U.S. to be at "the start of a new Manhattan Project" on AI, speaking at a supercomputer site with OpenAI co-founder Greg Brockman by his side.
The Superintelligence Strategy paper challenges the recent push by several American policy and industry leaders who believe a government-backed AGI program is the best way to keep up with China.
Schmidt, Wang, and Hendrycks see the U.S. as approaching something like a standoff over AGI, analogous to mutually assured destruction. Just as nations avoid seeking a monopoly on nuclear weapons, since doing so could invite a preemptive strike, the authors argue the U.S. should be wary of racing to dominate highly advanced AI systems.
While comparing AI to nuclear weapons might seem over the top, global leaders already view AI as a crucial military asset. The Pentagon has noted that AI is accelerating the military's kill chain.
The authors introduce the idea of Mutual Assured AI Malfunction (MAIM), where governments might take preemptive action to disable threatening AI projects rather than waiting for adversaries to weaponize AGI.
Schmidt, Wang, and Hendrycks recommend that the U.S. shift its focus from "winning the race to superintelligence" to developing methods to deter other countries from creating superintelligent AI. They suggest the government should "expand its arsenal of cyberattacks to disable threatening AI projects" controlled by other nations and restrict adversaries' access to advanced AI chips and open-source models.
The paper highlights a split in the AI policy community between the "doomers," who believe catastrophic AI outcomes are inevitable and advocate for slowing AI progress, and the "ostriches," who push for accelerating AI development and hope for the best.
The authors propose a third path: a cautious approach to AGI development that emphasizes defensive strategies.
This stance is particularly noteworthy from Schmidt, who has previously emphasized the need for the U.S. to aggressively compete with China in AI development. Just months ago, Schmidt wrote an op-ed stating that DeepSeek marked a pivotal moment in the U.S.-China AI race.
Despite the Trump administration's determination to advance America's AI development, the co-authors remind us that U.S. decisions on AGI have global implications.
As the world observes America's push into AI, Schmidt and his co-authors suggest a more defensive strategy might be the wiser choice.
Comments (27)
PaulLewis
July 27, 2025 at 9:19:30 PM EDT
Eric Schmidt's take on AGI is refreshing! No need for a rushed, mega-project vibe—slow and steady wins the race, right? Curious how this will play out with AI ethics debates. 🤔
PeterRodriguez
July 23, 2025 at 12:59:47 AM EDT
Eric Schmidt's take on pausing the AGI race is refreshing! It's like saying, 'Hey, let's not sprint toward a sci-fi apocalypse.' Superhuman AI sounds cool, but I’d rather we take our time to avoid any Skynet vibes. 😅 What’s next, a global AI ethics council?
BruceSmith
April 20, 2025 at 1:32:22 PM EDT
Eric Schmidt's stance against rushing into a Manhattan Project for AGI is pretty smart. We don't need another race to the bottom with superhuman AI. Let's take our time and do it right, or we could end up with more problems than solutions. 🤔
MichaelDavis
April 18, 2025 at 2:29:08 PM EDT
Eric Schmidt's view on AGI is right! We don't need a Manhattan Project to develop superhuman AI. We should be cautious and think through the implications. His paper with Wang and Hendrycks is a must-read! 👀📚
MiaDavis
April 18, 2025 at 12:24:42 PM EDT
Eric Schmidt's take on AGI is right! There's no need to rush toward superhuman AI. We should think carefully about it. His paper is a must-read! 👀📚
TerryPerez
April 18, 2025 at 9:20:43 AM EDT
Eric Schmidt's perspective on AGI is right! We don't need a Manhattan Project-style rush toward superhuman AI. We should be careful and think about the implications. His paper with Wang and Hendrycks is a must-read! 👀📚