AI Power Surge: Anthropic CEO Warns of Race to Understand
Right after the AI Action Summit in Paris wrapped up, Anthropic's co-founder and CEO, Dario Amodei, didn't hold back. He called the event a "missed opportunity" and stressed that we need to ramp up the focus and urgency on several key issues, considering how fast AI tech is moving. He shared these thoughts in a statement released on Tuesday.
Anthropic teamed up with the French startup Dust for a developer-focused event in Paris, where TechCrunch got to chat with Amodei on stage. He shared his perspective and pushed for a balanced approach to AI innovation and governance, steering clear of both extreme optimism and harsh criticism.
Amodei, who used to be a neuroscientist, said, "I basically looked inside real brains for a living. And now we're looking inside artificial brains for a living. So we will, over the next few months, have some exciting advances in the area of interpretability—where we're really starting to understand how the models operate." But he also pointed out that it's a race. "It's a race between making the models more powerful, which is incredibly fast for us and incredibly fast for others—you can't really slow down, right? ... Our understanding has to keep up with our ability to build things. I think that's the only way," he added.
Since the first AI Safety Summit at Bletchley Park in the UK, the conversation around AI governance has shifted a lot, influenced by the current geopolitical climate. U.S. Vice President JD Vance, speaking at the AI Action Summit, said, "I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity."
Amodei, however, is trying to bridge the gap between safety and opportunity. He believes that focusing more on safety can actually be an opportunity. "At the original summit, the UK Bletchley Summit, there were a lot of discussions on testing and measurement for various risks. And I don't think these things slowed down the technology very much at all," he said at the Anthropic event. "If anything, doing this kind of measurement has helped us better understand our models, which in the end, helps us produce better models."
Even while emphasizing safety, Amodei made it clear that Anthropic is still all in on building frontier AI models. "I don't want to do anything to reduce the promise. We're providing models every day that people can build on and that are used to do amazing things. And we definitely should not stop doing that," he said. Later, he added, "When people are talking a lot about the risks, I kind of get annoyed, and I say: 'oh, man, no one's really done a good job of really laying out how great this technology could be.'"
When the topic turned to Chinese LLM-maker DeepSeek's recent models, Amodei downplayed their achievements, calling the public reaction "inorganic." He said, "Honestly, my reaction was very little. We had seen V3, which is the base model for DeepSeek R1, back in December. And that was an impressive model. The model that was released in December was on this kind of very normal cost reduction curve that we've seen in our models and other models." What caught his attention was that the model wasn't coming from the usual "three or four frontier labs" in the U.S., like Google, OpenAI, and Anthropic. He expressed concern about authoritarian governments dominating the technology. As for DeepSeek's claimed training costs, he dismissed them as "just not accurate and not based on facts."
While Amodei didn't announce any new models at the event, he hinted at upcoming releases with enhanced reasoning capabilities. "We're generally focused on trying to make our own take on reasoning models that are better differentiated. We worry about making sure we have enough capacity, that the models get smarter, and we worry about safety things," he said.
Anthropic is also tackling the model selection challenge. If you're a ChatGPT Plus user, for instance, it can be tough to decide which model to use for your next message.
Image Credits: Screenshot of ChatGPT
The same goes for developers using large language model (LLM) APIs in their apps, who need to balance accuracy, response speed, and cost.
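That trade-off is often handled with a routing layer that scores each candidate model before dispatching a request. The sketch below is purely illustrative, not any provider's actual API: the model names, numbers, and weights are made-up placeholders, and a real router would measure quality and latency empirically.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    quality: float      # relative answer quality, 0-1 (hypothetical)
    latency_s: float    # typical response time in seconds
    cost_per_1k: float  # dollars per 1K output tokens

def pick_model(options, quality_weight=1.0, latency_weight=0.2, cost_weight=0.5):
    """Return the model with the best weighted quality/latency/cost trade-off."""
    def score(m):
        return (quality_weight * m.quality
                - latency_weight * m.latency_s
                - cost_weight * m.cost_per_1k)
    return max(options, key=score)

options = [
    ModelOption("small-fast", quality=0.70, latency_s=0.5, cost_per_1k=0.001),
    ModelOption("large-accurate", quality=0.95, latency_s=3.0, cost_per_1k=0.015),
]

# A latency-sensitive request favors the smaller model...
print(pick_model(options).name)                    # → small-fast
# ...while ignoring latency favors the stronger one.
print(pick_model(options, latency_weight=0.0).name)  # → large-accurate
```

Shifting the weights is the whole game: the same pair of models wins or loses depending on whether the caller cares more about speed, cost, or answer quality, which is exactly the choice Amodei argues users shouldn't have to make by hand.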
Amodei questioned the distinction between normal and reasoning models. "We've been a little bit puzzled by the idea that there are normal models and there are reasoning models and that they're sort of different from each other," he said. "If I'm talking to you, you don't have two brains and one of them responds right away and like, the other waits a longer time."
He believes there should be a smoother transition between pre-trained models like Claude 3.5 Sonnet or GPT-4o and models trained with reinforcement learning that can produce chains of thought (CoT), like OpenAI's o1 or DeepSeek's R1. "We think that these should exist as part of one single continuous entity. And we may not be there yet, but Anthropic really wants to move things in that direction," Amodei said. "We should have a smoother transition from that to pre-trained models—rather than 'here's thing A and here's thing B.'"
As AI companies like Anthropic keep pushing out better models, Amodei sees huge potential for disruption across industries. "We're working with some pharma companies to use Claude to write clinical studies, and they've been able to reduce the time it takes to write the clinical study report from 12 weeks to three days," he said.
He envisions a "renaissance of disruptive innovation" in AI applications across sectors like legal, financial, insurance, productivity, software, and energy. "I think there's going to be—basically—a renaissance of disruptive innovation in the AI application space. And we want to help it, we want to support it all," he concluded.
Read our full coverage of the Artificial Intelligence Action Summit in Paris.
TechCrunch has an AI-focused newsletter! Sign up here to get it in your inbox every Wednesday.
Comments (50)
GeorgeMiller
April 10, 2025 at 12:00:00 AM GMT
Dario Amodei's comments at the AI Action Summit were a wake-up call. His urgency about understanding AI's rapid development is something we should all take seriously. It's a bit scary but also motivating. More events like this, please!
EricYoung
April 10, 2025 at 12:00:00 AM GMT
Dario Amodei's remarks at the AI Action Summit were like a wake-up call. What he said about the urgency of understanding AI's rapid development is something we should all take seriously. It's a little scary, but it's motivating too. I'd love to see more events like this!
MarkWilson
April 10, 2025 at 12:00:00 AM GMT
Dario Amodei's remarks at the AI Action Summit felt like a wake-up call. His point about the urgency of understanding AI's rapid progress is something we all need to take seriously. It's a bit frightening, but motivating as well. Please hold more events like this!
BenWalker
April 10, 2025 at 12:00:00 AM GMT
Dario Amodei's comments at the AI Action Summit were a wake-up call. His urgency about understanding AI's rapid development is something we should all take seriously. It's a bit scary, but also motivating. More events like this, please!
AvaHill
April 10, 2025 at 12:00:00 AM GMT
Dario Amodei's comments at the AI Action Summit were a wake-up call. His urgency about understanding AI's rapid development is something we should all take seriously. It's a bit scary but also motivating. Please, more events like this!
JustinWilson
April 10, 2025 at 12:00:00 AM GMT
Just heard about AI Power Surge from Anthropic's CEO, and it's eye-opening. The race to understand AI is real and we're missing out big time. This tool really makes you think about the urgency of AI development. Maybe it's time to get more involved? Definitely worth checking out!