AI Leaders Ground the AGI Debate in Reality

At a recent dinner with business leaders in San Francisco, I threw out a question that seemed to freeze the room: could today's AI ever reach human-like intelligence or beyond? It's a topic that stirs more debate than you might expect.
In 2025, tech CEOs are buzzing with optimism about large language models (LLMs) like those behind ChatGPT and Gemini. They're convinced these models could soon hit human-level or even super-human intelligence. Take Dario Amodei from Anthropic, for instance. He's penned essays suggesting that by 2026, we might see AI smarter than Nobel laureates in various fields. Meanwhile, OpenAI's Sam Altman has been vocal about knowing how to build "superintelligent" AI, predicting it could turbocharge scientific discovery.
But not everyone's buying into this rosy picture. Some AI leaders are skeptical about LLMs reaching AGI, let alone superintelligence, without significant breakthroughs. These skeptics, once quiet, are now speaking up more.
Skepticism in the AI Community
Take Thomas Wolf, co-founder and chief science officer at Hugging Face. In a recent article, he called parts of Amodei’s vision "wishful thinking at best." Drawing on his PhD in statistical and quantum physics, Wolf argues that Nobel-level breakthroughs come from asking new questions, not just answering known ones. Answering known questions is something AI does well; asking genuinely new ones is where it still falls short.
"I would love to see this ‘Einstein model’ out there, but we need to dive into the details of how to get there," Wolf shared in an interview with TechCrunch. He wrote his piece because he felt the hype around AGI was overshadowing the need for a serious discussion on how to achieve it. Wolf sees a future where AI transforms the world, but not necessarily one where it reaches human-level intelligence or superintelligence.
The AI community is often divided between those who believe in AGI and those who don't, with the latter sometimes labeled "anti-technology" or simply pessimistic. Wolf, however, considers himself an "informed optimist," pushing for AI advancement while staying grounded in reality.
Other Voices in the AI Debate
Google DeepMind’s CEO, Demis Hassabis, reportedly told his team that AGI might still be a decade away, pointing out the many tasks AI can't handle yet. Meta's Chief AI Scientist, Yann LeCun, has also expressed doubts about LLMs achieving AGI, calling the idea "nonsense" at Nvidia GTC and pushing for new architectures to underpin superintelligence.
Kenneth Stanley, a former lead researcher at OpenAI and now an executive at Lila Sciences, is working on the nitty-gritty of building advanced AI. His startup, which recently raised $200 million, is focused on automating scientific innovation. Stanley's work delves into AI's ability to generate original, creative ideas—a field known as open-endedness.
"I kind of wish I had written [Wolf’s] essay, because it really reflects my feelings," Stanley told TechCrunch. He agrees with Wolf that being knowledgeable doesn't automatically lead to original ideas.
The Role of Creativity in AI
Stanley believes creativity is crucial for AGI, but admits it's a tough nut to crack. While optimists like Amodei highlight AI "reasoning" models as a step toward AGI, Stanley argues that creativity requires a different kind of intelligence. "Reasoning is almost antithetical to [creativity]," he explained. "Reasoning models focus on reaching a specific goal, which can limit the kind of opportunistic thinking needed for creativity."
Stanley suggests that to build truly intelligent AI, we need to replicate human taste for new ideas algorithmically. While AI excels in areas like math and programming, where answers are clear, it struggles with more subjective, creative tasks that don't have a "correct" answer.
"People shy away from [subjectivity] in science—the word is almost toxic," Stanley noted. "But there's nothing to prevent us from dealing with subjectivity [algorithmically]. It's just part of the data stream."
He's encouraged by the growing focus on open-endedness, with research labs at Lila Sciences, Google DeepMind, and AI startup Sakana tackling the issue. Stanley sees more people talking about creativity in AI but believes there's still much work ahead.
The Realists of AI
Wolf and LeCun might be considered the "AI realists": leaders who approach AGI and superintelligence with grounded questions about their feasibility. Their aim isn't to dismiss AI advancements but to spark a broader conversation about what's holding AI back from reaching AGI and superintelligence—and to tackle those challenges head-on.
Comments (6)
StevenGonzalez
September 12, 2025 at 2:30:43 PM EDT
Reading the article, the debate over whether AI can reach human-level intelligence is really heating up 🧐 But honestly, we're still looking at AI that can't even do basic emotion recognition properly, so isn't AGI a pretty distant story? lol
RichardHarris
August 2, 2025 at 11:07:14 AM EDT
That dinner convo sounds intense! 😮 Asking if AI can hit human-level smarts is like tossing a grenade into a tech nerd party. I bet those CEOs were all over the place with their takes.
FrankJackson
July 27, 2025 at 9:19:30 PM EDT
This article really got me thinking—AGI sounds like sci-fi, but are we actually close? I’m kinda skeptical it’ll match human smarts anytime soon. 😅 Still, cool to see CEOs so hyped!
MarkRoberts
May 10, 2025 at 6:50:03 PM EDT
The AI leaders' discussion of AGI was very illuminating. Anchoring it in reality was refreshing. Some points were interesting, though I would have liked more depth in certain aspects. Overall, it was a solid talk with valuable ideas.
CharlesRoberts
May 10, 2025 at 2:06:40 PM EDT
The discussion of AGI by the AI leaders was revealing. It was good to see the subject anchored in reality. Some points were interesting, but I would have liked more depth in certain areas. Overall, it was a solid talk with good ideas.
StevenNelson
May 10, 2025 at 9:16:38 AM EDT
The discussion of AGI by AI leaders was very interesting. Hearing it grounded in reality was refreshing. Some points were stimulating, but I would have liked deeper discussion in some areas. Overall, it was a good talk packed with valuable insights.