
Eric Schmidt Opposes AGI Manhattan Project

April 10, 2025
BrianMartinez


In a policy paper released on Wednesday, former Google CEO Eric Schmidt, along with Scale AI CEO Alexandr Wang and Center for AI Safety Director Dan Hendrycks, advised against the U.S. launching a Manhattan Project-style initiative to develop AI systems with "superhuman" intelligence, commonly referred to as AGI.

Titled "Superintelligence Strategy," the paper warns that a U.S. effort to monopolize superintelligent AI could provoke a strong response from China, possibly in the form of a cyberattack, which might disrupt global relations. The co-authors argue, "A Manhattan Project for AGI assumes that rivals will acquiesce to an enduring imbalance or omnicide rather than move to prevent it. What begins as a push for a superweapon and global control risks prompting hostile countermeasures and escalating tensions, thereby undermining the very stability the strategy purports to secure."

This paper, penned by three key figures in the American AI sector, arrives shortly after a U.S. congressional commission suggested a "Manhattan Project-style" initiative to fund AGI development, drawing parallels to the U.S. atomic bomb project of the 1940s. U.S. Secretary of Energy Chris Wright recently declared the U.S. to be at "the start of a new Manhattan Project" on AI, speaking at a supercomputer site with OpenAI co-founder Greg Brockman by his side.

The Superintelligence Strategy paper challenges the recent push by several American policy and industry leaders who believe a government-backed AGI program is the best way to keep up with China. Schmidt, Wang, and Hendrycks instead see the U.S. in a standoff over AGI, akin to the logic of mutually assured destruction. Just as nations avoid attempting to monopolize nuclear weapons for fear of provoking preemptive strikes, the authors suggest the U.S. should be wary of rushing to dominate highly advanced AI systems.

While comparing AI to nuclear weapons might seem over the top, global leaders already view AI as a crucial military asset. The Pentagon has noted that AI is accelerating the military's kill chain. The authors introduce the idea of Mutual Assured AI Malfunction (MAIM), in which governments might take preemptive action to disable threatening AI projects rather than waiting for adversaries to weaponize AGI.

Schmidt, Wang, and Hendrycks recommend that the U.S. shift its focus from "winning the race to superintelligence" to developing methods to deter other countries from creating superintelligent AI. They suggest the government should "expand its arsenal of cyberattacks to disable threatening AI projects" controlled by other nations and restrict adversaries' access to advanced AI chips and open-source models.

The paper highlights a split in the AI policy community between the "doomers," who believe catastrophic AI outcomes are inevitable and advocate for slowing AI progress, and the "ostriches," who push for accelerating AI development and hope for the best. The authors propose a third path: a cautious approach to AGI development that emphasizes defensive strategies.

This stance is particularly noteworthy from Schmidt, who has previously emphasized the need for the U.S. to aggressively compete with China in AI development. Just months ago, Schmidt wrote an op-ed stating that DeepSeek marked a pivotal moment in the U.S.-China AI race. Despite the Trump administration's determination to advance America's AI development, the co-authors remind us that U.S. decisions on AGI have global implications.
As the world observes America's push into AI, Schmidt and his co-authors suggest a more defensive strategy might be the wiser choice.
Comments
LawrenceLee April 10, 2025 at 3:01:10 PM GMT

Eric Schmidt's stance on not rushing into a Manhattan Project for AGI makes sense. We need to be careful with superhuman AI. It's a bit scary to think about, but I agree we should take our time. Let's not mess this up!

DouglasRodriguez April 10, 2025 at 3:01:10 PM GMT

I can understand Eric Schmidt's stance against a Manhattan Project for AGI. We should be cautious with superhuman AI. It's a bit frightening, but I think we should take our time with this. Let's not mess this up!

HenryJackson April 10, 2025 at 3:01:10 PM GMT

I understand Eric Schmidt's position against a Manhattan Project for AGI. We need to be careful with superhuman AI. It's a little scary, but I think we should take our time. Let's not ruin this!

NicholasThomas April 10, 2025 at 3:01:10 PM GMT

Eric Schmidt's position against a Manhattan Project for AGI makes sense. We need to be careful with superhuman AI. It's a bit scary to think about, but I agree we should take our time. Let's not mess this up!

KennethRoberts April 10, 2025 at 3:01:10 PM GMT

Eric Schmidt's opposition to a Manhattan Project for AGI makes sense. We should be careful with superhuman AI. It's a bit scary to think about, but I agree we should take our time. Don't mess this up!

BruceWilson April 16, 2025 at 1:34:13 AM GMT

Eric Schmidt's take on not rushing into an AGI Manhattan Project makes a lot of sense. We don't need superhuman AI right now, do we? But I wish the paper had more concrete suggestions on what to do instead. Still, food for thought! 🤔🚀
