Anthropic CEO Slams DeepSeek for Failing Bioweapons Data Safety Test

Anthropic's CEO, Dario Amodei, is seriously concerned about DeepSeek, the Chinese AI company that's been making waves in Silicon Valley with its R1 model. And his worries go beyond the usual fears about DeepSeek sending user data back to China.
In a recent chat on Jordan Schneider's ChinaTalk podcast, Amodei revealed that DeepSeek's AI model generated some pretty scary info about bioweapons during a safety test run by Anthropic. He claimed that DeepSeek's performance was "the worst of basically any model we'd ever tested," saying it had "absolutely no blocks whatsoever against generating this information."
Amodei explained that these tests are part of Anthropic's regular evaluations to check if AI models pose any national security risks. They specifically look at whether the models can spit out bioweapons-related info that you can't easily find on Google or in textbooks. Anthropic prides itself on being the AI company that takes safety seriously.
While Amodei doesn't think DeepSeek's current models are "literally dangerous" when it comes to providing rare and risky info, he believes they could be in the near future. He praised DeepSeek's team as "talented engineers" but urged the company to "take seriously these AI safety considerations."
Amodei has also been a vocal supporter of strong export controls on AI chips to China, worried that advanced chips could give China's military an edge.
In the ChinaTalk interview, Amodei didn't specify which DeepSeek model Anthropic tested or provide more technical details about the tests. Neither Anthropic nor DeepSeek responded right away to TechCrunch's request for comment.
DeepSeek's rapid rise has raised safety concerns in other places, too. Last week, Cisco security researchers said that DeepSeek R1 failed to block any harmful prompts in its safety tests, with a 100% jailbreak success rate. While Cisco didn't mention bioweapons, they were able to get DeepSeek to generate harmful info about cybercrime and other illegal activities. It's worth noting, though, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It's still up in the air whether these safety concerns will put a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have been publicly praising the integration of R1 into their cloud platforms – which is kind of ironic, considering Amazon is Anthropic's biggest investor.
On the flip side, a growing number of countries, companies, and especially government organizations like the U.S. Navy and the Pentagon have started banning DeepSeek.
Only time will tell if these efforts gain traction or if DeepSeek's global rise will keep going strong. Either way, Amodei considers DeepSeek a new competitor that's on par with the U.S.'s top AI companies.
"The new fact here is that there's a new competitor," he said on ChinaTalk. "In the big companies that can train AI – Anthropic, OpenAI, Google, perhaps Meta and xAI – now DeepSeek is maybe being added to that category."
Comments (50)
PeterMartinez
April 25, 2025 at 6:16:48 PM EDT
Anthropic's CEO calling out DeepSeek is scary! If they can't keep bioweapons data safe, what else are they getting wrong? It makes me think twice about using any of their AI. I hope they fix this fast! 😱
JimmyGarcia
April 20, 2025 at 1:11:54 AM EDT
Wow, Anthropic's CEO really tore into DeepSeek for failing the bioweapons data safety test! It's scary to think AI companies don't prioritize safety. Makes you wonder what else is going on behind the scenes. Stay safe, everyone! 😱
SebastianAnderson
April 19, 2025 at 11:27:58 PM EDT
DeepSeek's failure on the bioweapons data safety test is very worrying. Concern over data privacy is one thing, but this is on another level. Anthropic's CEO is right to call them out. We need more transparency and safety in AI, fast! 😠
StevenAllen
April 18, 2025 at 1:41:57 PM EDT
DeepSeek failing the bioweapons data safety test is really concerning. Data privacy is one issue, but this is a whole different level of problem. Anthropic's CEO is right to criticize them. AI needs more transparency and safety, urgently! 😠
RoyLopez
April 17, 2025 at 4:50:53 PM EDT
Anthropic's CEO came down hard on DeepSeek for failing the bioweapons data safety test! The idea that an AI company doesn't put safety first is frightening. Makes me wonder what's going on behind the scenes. Stay safe, everyone! 😱
EdwardTaylor
April 17, 2025 at 2:13:54 PM EDT
It's scary to see Anthropic's CEO criticizing DeepSeek! If they can't keep bioweapons data safe, what else are they messing up? It makes me reconsider using their AI. I hope they fix it soon! 😨