Sam Altman's TED 2025 Interview: The Year's Most Uncomfortable Yet Crucial AI Discussion
April 20, 2025
JackMitchell

During the TED 2025 conference in Vancouver, OpenAI CEO Sam Altman shared some staggering stats about his company's growth. He revealed that OpenAI now boasts 800 million weekly active users, a figure that's growing at what Altman described as "unbelievable" rates. In an interview that was as intense as it was informative, Altman expressed his awe at the company's trajectory, saying, "I have never seen growth in any company, one that I’ve been involved with or not, like this." He admitted that while the rapid expansion of ChatGPT is exhilarating, it's also taking a toll on the team: "The growth of ChatGPT — it is really fun. I feel deeply honored. But it is crazy to live through, and our teams are exhausted and stressed."
The interview, which wrapped up the last day of TED 2025: Humanity Reimagined, not only highlighted OpenAI's meteoric rise but also shed light on the increasing scrutiny the company is facing. As its technology continues to reshape society at a breakneck pace, even some of its supporters are expressing concern.
‘Our GPUs are melting’: OpenAI struggles to scale amid unprecedented demand
Altman painted a vivid picture of a company grappling with its own success. He mentioned that OpenAI's GPUs are "melting" under the strain of the company's popular new image generation features. "All day long, I call people and beg them to give us their GPUs. We are so incredibly constrained," he confessed. This rapid growth comes at a time when rumors are swirling that OpenAI might launch its own social network to rival Elon Musk's X, though Altman neither confirmed nor denied these reports during the TED interview.
Recently, OpenAI closed a monumental $40 billion funding round, catapulting its valuation to $300 billion, the largest private tech funding round in history. That influx of cash should help alleviate some of the infrastructure bottlenecks the company faces.
From non-profit to $300 billion giant: Altman responds to ‘Ring of Power’ accusations
Throughout the 47-minute conversation, TED head Chris Anderson grilled Altman on OpenAI's evolution from a non-profit research lab to a for-profit behemoth valued at $300 billion. Anderson echoed concerns from critics, including Elon Musk, who suggested Altman has been "corrupted by the Ring of Power," a reference to "The Lord of the Rings."
Altman defended the company's journey, emphasizing their mission to develop and distribute AGI safely for the benefit of humanity. "Our goal is to make AGI and distribute it, make it safe for the broad benefit of humanity. I think by all accounts, we have done a lot in that direction. Clearly, our tactics have shifted over time… We didn’t think we would have to build a company around this. We learned a lot about how it goes and the realities of what these systems were going to take from capital," he explained.
When asked how he copes with the immense power he now holds, Altman responded with a touch of humor and humility: "Shockingly, the same as before. I think you can get used to anything step by step… You’re the same person. I’m sure I’m not in all sorts of ways, but I don’t feel any different."
‘Divvying up revenue’: OpenAI plans to pay artists whose styles are used by AI
One of the more concrete policy announcements from the interview was Altman's revelation that OpenAI is working on a compensation system for artists whose styles are mimicked by AI. "I think there are incredible new business models that we and others are excited to explore," Altman said when questioned about potential IP theft in AI-generated images. He posed the question, "If you say, ‘I want to generate art in the style of these seven people, all of whom have consented to that,’ how do you divvy up how much money goes to each one?"
Currently, OpenAI's image generator will not mimic the style of living artists without consent, but it can generate art in the style of movements, genres, or studios. Altman hinted at a potential revenue-sharing model, though specifics are still under wraps.
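Altman's "divvy up" question is, at its core, an allocation problem: if several consenting artists contribute to one generation, how is a payment split among them fairly and exactly? As a purely illustrative sketch (the artist names, weights, and pro-rata approach below are invented for this example, not anything OpenAI has described), one naive answer is a weighted split that distributes leftover cents deterministically so the shares always sum to the full amount:

```python
def split_revenue(amount_cents, weights):
    """Divide amount_cents among artists in proportion to weights.

    Integer division leaves a few cents unassigned; those go to the
    highest-weighted artists so the shares always sum to amount_cents.
    """
    total = sum(weights.values())
    shares = {artist: amount_cents * w // total for artist, w in weights.items()}
    remainder = amount_cents - sum(shares.values())
    # Hand out leftover cents one at a time, largest weight first.
    for artist in sorted(weights, key=weights.get, reverse=True)[:remainder]:
        shares[artist] += 1
    return shares

# Hypothetical payout: $10.00 split across three consenting artists.
payout = split_revenue(1000, {"artist_a": 3, "artist_b": 2, "artist_c": 1})
print(payout)  # {'artist_a': 501, 'artist_b': 333, 'artist_c': 166}
```

Even this toy version hints at why Altman framed it as an open question: real systems would also have to decide how to measure each artist's "weight" in a given generation in the first place.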
Autonomous AI agents: The ‘most consequential safety challenge’ OpenAI has faced
The conversation took a serious turn when discussing "agentic AI": autonomous systems capable of taking actions on the internet on a user's behalf. OpenAI's new "Operator" tool, which can perform tasks such as booking restaurant reservations, has raised concerns about safety and accountability.
Anderson pressed Altman on the potential dangers: "A single person could let that agent out there, and the agent could decide, ‘Well, in order to execute on that function, I got to copy myself everywhere.’ Are there red lines that you have clearly drawn internally, where you know what the danger moments are?"
Altman referenced OpenAI's "preparedness framework" but was vague about how the company would prevent misuse of autonomous agents. "AI that you give access to your systems, your information, the ability to click around on your computer… when they make a mistake, it’s much higher stakes," he admitted. "You will not use our agents if you do not trust that they’re not going to empty your bank account or delete your data."
‘14 definitions from 10 researchers’: Inside OpenAI’s struggle to define AGI
In a candid moment, Altman revealed that even within OpenAI, there's no clear consensus on what artificial general intelligence (AGI) actually is—the company's ultimate goal. "It’s like the joke, if you’ve got 10 OpenAI researchers in a room and asked to define AGI, you’d get 14 definitions," he quipped.
He suggested that instead of fixating on a specific moment when AGI arrives, we should recognize that "the models are just going to get smarter and more capable and smarter and more capable on this long exponential… We’re going to have to contend and get wonderful benefits from this incredible system."
Loosening the guardrails: OpenAI’s new approach to content moderation
Altman also announced a significant policy shift regarding content moderation, revealing that OpenAI has relaxed restrictions on its image generation models. "We’ve given the users much more freedom on what we would traditionally think about as speech harms," he explained. "I think part of model alignment is following what the user of a model wants it to do within the very broad bounds of what society decides."
This change could indicate a broader trend toward empowering users to control AI outputs, aligning with Altman's view that the hundreds of millions of users, rather than "small elite summits," should set the guardrails. "One of the cool new things about AI is our AI can talk to everybody on Earth, and we can learn the collective value preference of what everybody wants, rather than have a bunch of people who are blessed by society to sit in a room and make these decisions," he said.
‘My kid will never be smarter than AI’: Altman’s vision of an AI-powered future
The interview concluded with Altman contemplating the world his newborn son will grow up in—one where AI will surpass human intelligence. "My kid will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable," he said. "It’ll be a world of incredible material abundance… where the rate of change is incredibly fast and amazing new things are happening."
Anderson offered a sobering final thought: "Over the next few years, you’re going to have some of the biggest opportunities, the biggest moral challenges, the biggest decisions to make of perhaps any human in history."
The billion-user balancing act: How OpenAI navigates power, profit, and purpose
Altman's appearance at TED comes at a pivotal moment for OpenAI and the wider AI industry. The company is facing increasing legal challenges, including copyright lawsuits from authors and publishers, while simultaneously pushing the limits of what AI can achieve.
Recent developments like ChatGPT's viral image generation feature and video generation tool Sora have showcased capabilities that were unimaginable just months ago. Yet, these tools have also ignited debates about copyright, authenticity, and the future of creative work.
Altman's willingness to tackle tough questions about safety, ethics, and the societal impact of AI demonstrates his awareness of the stakes. However, critics might point out that specific details on safeguards and policies were somewhat elusive during the conversation.
The interview also laid bare the core tensions at the heart of OpenAI's mission: the need to advance AI technology quickly while ensuring safety, balancing profit with societal benefit, respecting creative rights while democratizing creative tools, and navigating between elite expertise and public preference.
As Anderson noted in his closing remarks, the decisions Altman and his peers make in the coming years could have profound effects on humanity's future. Whether OpenAI can fulfill its mission of ensuring "all of humanity benefits from artificial general intelligence" remains an open question.
Comments
AndrewHernández
April 20, 2025 at 10:09:38 AM GMT
Sam Altman's TED talk was eye-opening! The stats about OpenAI's growth are mind-blowing. It's crazy to think they have 800 million weekly users now! The interview got a bit uncomfortable at times, but it was definitely a must-watch for anyone interested in AI's future. Keep up the good work, Sam! 🚀