AI Leaders Ground the AGI Debate in Reality

At a recent dinner with business leaders in San Francisco, I threw out a question that seemed to freeze the room: could today's AI ever reach human-like intelligence or beyond? It's a topic that stirs more debate than you might expect.
In 2025, tech CEOs are buzzing with optimism about large language models (LLMs) like those behind ChatGPT and Gemini. They're convinced these models could soon reach human-level or even superhuman intelligence. Take Dario Amodei of Anthropic, for instance. He's penned essays suggesting that by 2026, we might see AI smarter than Nobel laureates in various fields. Meanwhile, OpenAI's Sam Altman has been vocal about knowing how to build "superintelligent" AI, predicting it could turbocharge scientific discovery.
But not everyone's buying into this rosy picture. Some AI leaders are skeptical about LLMs reaching AGI, let alone superintelligence, without significant breakthroughs. These skeptics, once quiet, are now speaking up more.
Skepticism in the AI Community
Take Thomas Wolf, co-founder and chief science officer at Hugging Face. In a recent article, he called parts of Amodei’s vision "wishful thinking at best." Drawing on his PhD in statistical and quantum physics, Wolf argues that Nobel-level breakthroughs come from asking new questions, not just answering known ones. Answering is something today's AI does well; posing genuinely new questions is where it falls short.
"I would love to see this ‘Einstein model’ out there, but we need to dive into the details of how to get there," Wolf shared in an interview with TechCrunch. He wrote his piece because he felt the hype around AGI was overshadowing the need for a serious discussion on how to achieve it. Wolf sees a future where AI transforms the world, but not necessarily one where it reaches human-level intelligence or superintelligence.
The AI community is often split between those who believe AGI is within reach and those who don't, with the latter sometimes labeled "anti-technology" or simply pessimistic. Wolf, however, considers himself an "informed optimist," pushing for AI advancement while staying grounded in reality.
Other Voices in the AI Debate
Google DeepMind’s CEO, Demis Hassabis, reportedly told his team that AGI might still be a decade away, pointing out the many tasks AI can't handle yet. Meta's Chief AI Scientist, Yann LeCun, has also expressed doubts about LLMs achieving AGI, calling the idea "nonsense" at Nvidia GTC and pushing for new architectures to underpin superintelligence.
Kenneth Stanley, a former lead researcher at OpenAI and now an executive at Lila Sciences, is working on the nitty-gritty of building advanced AI. His startup, which recently raised $200 million, is focused on automating scientific innovation. Stanley's work delves into AI's ability to generate original, creative ideas—a field known as open-endedness.
"I kind of wish I had written [Wolf’s] essay, because it really reflects my feelings," Stanley told TechCrunch. He agrees with Wolf that being knowledgeable doesn't automatically lead to original ideas.
The Role of Creativity in AI
Stanley believes creativity is crucial for AGI, but admits it's a tough nut to crack. While optimists like Amodei highlight AI "reasoning" models as a step toward AGI, Stanley argues that creativity requires a different kind of intelligence. "Reasoning is almost antithetical to [creativity]," he explained. "Reasoning models focus on reaching a specific goal, which can limit the kind of opportunistic thinking needed for creativity."
Stanley suggests that to build truly intelligent AI, we need to replicate human taste for new ideas algorithmically. While AI excels in areas like math and programming, where answers are clear, it struggles with more subjective, creative tasks that don't have a "correct" answer.
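For a concrete sense of what objective-free, "opportunistic" search can look like, Stanley's earlier academic work with Joel Lehman on novelty search is the classic example: candidates are rewarded for behaving unlike anything seen before rather than for progress toward a fixed goal. The sketch below is a minimal toy illustration of that idea, not code from Lila Sciences or any lab mentioned here; the 2D "behavior" space and all parameter values are invented for the example.

```python
import random

# Toy novelty-search sketch (hypothetical illustration). Candidates are
# ranked by how far their "behavior" lies from everything in an archive
# of past behaviors -- there is no task objective anywhere in the loop.

def behavior(candidate):
    # In real systems this would describe what the candidate *does*
    # (e.g., a robot's trajectory), not its raw parameters. Here a
    # candidate is just a 2D point and its behavior is the point itself.
    return candidate

def novelty(candidate, archive, k=5):
    # Novelty = mean distance to the k nearest behaviors in the archive.
    if not archive:
        return float("inf")
    dists = sorted(
        ((b[0] - candidate[0]) ** 2 + (b[1] - candidate[1]) ** 2) ** 0.5
        for b in archive
    )
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

def mutate(candidate, scale=0.3):
    return (candidate[0] + random.gauss(0, scale),
            candidate[1] + random.gauss(0, scale))

random.seed(0)
population = [(0.0, 0.0)] * 20
archive = []

for generation in range(50):
    # Rank purely by novelty; keep the most novel behaviors around.
    scored = sorted(population, key=lambda c: novelty(behavior(c), archive),
                    reverse=True)
    archive.extend(behavior(c) for c in scored[:3])
    parents = scored[: len(population) // 2]
    population = [mutate(random.choice(parents)) for _ in population]

print(f"archive size: {len(archive)}, sample behavior: {archive[-1]}")
```

The point of the toy is the contrast Stanley draws: a reasoning model optimizes toward a specified answer, while this loop has no answer to optimize toward, only a pressure to keep finding behaviors unlike those already seen.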
"People shy away from [subjectivity] in science—the word is almost toxic," Stanley noted. "But there's nothing to prevent us from dealing with subjectivity [algorithmically]. It's just part of the data stream."
He's encouraged by the growing focus on open-endedness, with research labs at Lila Sciences, Google DeepMind, and AI startup Sakana tackling the issue. Stanley sees more people talking about creativity in AI but believes there's still much work ahead.
The Realists of AI
Wolf and LeCun might be considered the "AI realists": leaders who approach AGI and superintelligence with grounded questions about their feasibility. Their aim isn't to dismiss AI advancements but to spark a broader conversation about what's holding AI back from reaching AGI and superintelligence—and to tackle those challenges head-on.
Comments (7)
RichardHarris
August 2, 2025 at 11:07:14 AM EDT
That dinner convo sounds intense! 😮 Asking if AI can hit human-level smarts is like tossing a grenade into a tech nerd party. I bet those CEOs were all over the place with their takes.
FrankJackson
July 27, 2025 at 9:19:30 PM EDT
This article really got me thinking—AGI sounds like sci-fi, but are we actually close? I’m kinda skeptical it’ll match human smarts anytime soon. 😅 Still, cool to see CEOs so hyped!
MarkRoberts
May 10, 2025 at 6:50:03 PM EDT
The AI leaders' discussion of AGI was very illuminating. The grounding in reality was refreshing. Some points were interesting, though I'd have liked more depth in certain areas. Overall, it was a solid talk with valuable ideas.
CharlesRoberts
May 10, 2025 at 2:06:40 PM EDT
The discussion of AGI by the AI leaders was eye-opening. It was good to see the subject anchored in reality. Some points were interesting, but I'd like more depth in certain areas. Overall, it was a solid talk with good ideas.
StevenNelson
May 10, 2025 at 9:16:38 AM EDT
The AI leaders' discussion of AGI was fascinating. The grounding in reality felt fresh. Some points were thought-provoking, but I wanted deeper discussion in a few areas. Overall, it was a good talk packed with valuable insights.
RalphSanchez
May 10, 2025 at 6:08:18 AM EDT
The AI leaders' discussion of AGI felt fresh. The realistic approach was refreshing. Some of the arguments were interesting, but I wish certain areas had been explored in more depth. Overall, it was an informative talk.