AI Leaders Ground the AGI Debate in Reality
May 9, 2025
Daniel Thomas

At a recent dinner with business leaders in San Francisco, I threw out a question that seemed to freeze the room: could today's AI ever reach human-like intelligence or beyond? It's a topic that stirs more debate than you might expect.
In 2025, tech CEOs are buzzing with optimism about large language models (LLMs) like those behind ChatGPT and Gemini. They're convinced these models could soon hit human-level or even super-human intelligence. Take Dario Amodei from Anthropic, for instance. He's penned essays suggesting that by 2026, we might see AI smarter than Nobel laureates in various fields. Meanwhile, OpenAI's Sam Altman has been vocal about knowing how to build "superintelligent" AI, predicting it could turbocharge scientific discovery.
But not everyone's buying into this rosy picture. Some AI leaders are skeptical about LLMs reaching AGI, let alone superintelligence, without significant breakthroughs. These skeptics, once quiet, are now speaking up more.
Skepticism in the AI Community
Take Thomas Wolf, co-founder and chief science officer at Hugging Face. In a recent article, he called parts of Amodei’s vision "wishful thinking at best." Drawing on his PhD in statistical and quantum physics, Wolf argues that Nobel-level breakthroughs come from asking new questions, not just answering known ones. Today's AI, he says, is good at answering well-posed questions but not at pioneering genuinely new ones.
"I would love to see this ‘Einstein model’ out there, but we need to dive into the details of how to get there," Wolf shared in an interview with TechCrunch. He wrote his piece because he felt the hype around AGI was overshadowing the need for a serious discussion on how to achieve it. Wolf sees a future where AI transforms the world, but not necessarily one where it reaches human-level intelligence or superintelligence.
The AI community is often divided between those who believe AGI is achievable and those who don't, with the latter sometimes labeled "anti-technology" or simply pessimistic. Wolf, however, considers himself an "informed optimist," pushing for AI advancement while staying grounded in reality.
Other Voices in the AI Debate
Google DeepMind’s CEO, Demis Hassabis, reportedly told his team that AGI might still be a decade away, pointing out the many tasks AI can't handle yet. Meta's Chief AI Scientist, Yann LeCun, has also expressed doubts about LLMs achieving AGI, calling the idea "nonsense" at Nvidia GTC and pushing for new architectures to underpin superintelligence.
Kenneth Stanley, a former lead researcher at OpenAI and now an executive at Lila Sciences, is working on the nitty-gritty of building advanced AI. His startup, which recently raised $200 million, is focused on automating scientific innovation. Stanley's work delves into AI's ability to generate original, creative ideas—a field known as open-endedness.
"I kind of wish I had written [Wolf’s] essay, because it really reflects my feelings," Stanley told TechCrunch. He agrees with Wolf that being knowledgeable doesn't automatically lead to original ideas.
The Role of Creativity in AI
Stanley believes creativity is crucial for AGI, but admits it's a tough nut to crack. While optimists like Amodei highlight AI "reasoning" models as a step toward AGI, Stanley argues that creativity requires a different kind of intelligence. "Reasoning is almost antithetical to [creativity]," he explained. "Reasoning models focus on reaching a specific goal, which can limit the kind of opportunistic thinking needed for creativity."
Stanley suggests that to build truly intelligent AI, we need to replicate human taste for new ideas algorithmically. While AI excels in areas like math and programming, where answers are clear, it struggles with more subjective, creative tasks that don't have a "correct" answer.
"People shy away from [subjectivity] in science—the word is almost toxic," Stanley noted. "But there's nothing to prevent us from dealing with subjectivity [algorithmically]. It's just part of the data stream."
He's encouraged by the growing focus on open-endedness, with research labs at Lila Sciences, Google DeepMind, and the startup Sakana AI tackling the issue. Stanley sees more people talking about creativity in AI but believes there's still much work ahead.
The Realists of AI
Wolf and LeCun might be considered the "AI realists": leaders who approach AGI and superintelligence with grounded questions about their feasibility. Their aim isn't to dismiss AI advancements but to spark a broader conversation about what's holding AI back from reaching AGI and superintelligence—and to tackle those challenges head-on.