Anthropic CEO Slams DeepSeek for Failing Bioweapons Data Safety Test

Anthropic's CEO, Dario Amodei, is seriously concerned about DeepSeek, the Chinese AI company that's been making waves in Silicon Valley with its R1 model. And his worries go beyond the usual fears about DeepSeek sending user data back to China.
In a recent chat on Jordan Schneider's ChinaTalk podcast, Amodei revealed that DeepSeek's AI model generated some pretty scary info about bioweapons during a safety test run by Anthropic. He claimed that DeepSeek's performance was "the worst of basically any model we'd ever tested," saying it had "absolutely no blocks whatsoever against generating this information."
Amodei explained that these tests are part of Anthropic's regular evaluations to check if AI models pose any national security risks. They specifically look at whether the models can spit out bioweapons-related info that you can't easily find on Google or in textbooks. Anthropic prides itself on being the AI company that takes safety seriously.
While Amodei doesn't think DeepSeek's current models are "literally dangerous" when it comes to providing rare and risky info, he believes they could be in the near future. He praised DeepSeek's team as "talented engineers" but urged the company to "take seriously these AI safety considerations."
Amodei has also been a vocal supporter of strong export controls on chips to China, worried that advanced chips could give China's military an edge.
In the ChinaTalk interview, Amodei didn't specify which DeepSeek model Anthropic tested or provide more technical details about the tests. Neither Anthropic nor DeepSeek responded right away to TechCrunch's request for comment.
DeepSeek's rapid rise has raised safety concerns in other places, too. Last week, Cisco security researchers said that DeepSeek R1 failed to block any harmful prompts in its safety tests, with a 100% jailbreak success rate. While Cisco didn't mention bioweapons, they were able to get DeepSeek to generate harmful info about cybercrime and other illegal activities. It's worth noting, though, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It's still up in the air whether these safety concerns will put a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have been publicly praising the integration of R1 into their cloud platforms – which is kind of ironic, considering Amazon is Anthropic's biggest investor.
On the flip side, a growing number of countries, companies, and government organizations, including the U.S. Navy and the Pentagon, have started banning DeepSeek.
Only time will tell if these efforts gain traction or if DeepSeek's global rise will keep going strong. Either way, Amodei considers DeepSeek a new competitor that's on par with the U.S.'s top AI companies.
"The new fact here is that there's a new competitor," he said on ChinaTalk. "In the big companies that can train AI – Anthropic, OpenAI, Google, perhaps Meta and xAI – now DeepSeek is maybe being added to that category."
Comments (50)
RichardThomas
April 11, 2025 at 12:00:00 AM GMT
The CEO's concerns about DeepSeek are legit, but it's wild to think they're failing safety tests on bioweapons data. Makes you wonder what else is going on behind the scenes. I'm kinda spooked but also curious to see how this plays out. Stay safe, folks!
WillieRodriguez
April 11, 2025 at 12:00:00 AM GMT
The CEO's concerns about DeepSeek are serious, but it's wild to think they're failing bioweapons data safety tests. It scares me a bit, but I'm also curious to see how this plays out. Stay safe, everyone!
PaulLopez
April 11, 2025 at 12:00:00 AM GMT
The CEO's concerns about DeepSeek are real, but it's hard to believe they're failing safety tests on bioweapons data. Makes me wonder what's going on behind the scenes. It's scary, but I'm also curious to see where this goes. Hang in there, everyone!
JustinWilliams
April 11, 2025 at 12:00:00 AM GMT
The CEO's concerns about DeepSeek are legitimate, but it's crazy to think they're failing bioweapons data safety tests. It makes you wonder what else is going on behind the scenes. I'm a little scared, but also curious to see how this unfolds. Stay safe, everyone!
WalterWhite
April 11, 2025 at 12:00:00 AM GMT
The CEO's concerns about DeepSeek are justified, but it's crazy to think they're failing safety tests on bioweapons data. You have to wonder what else is happening behind the scenes. I'm a bit spooked, but also curious to see how this develops. Stay safe, folks!
WalterBaker
April 13, 2025 at 12:00:00 AM GMT
Anthropic CEO's slam on DeepSeek is pretty intense! It's scary to think about AI being used for bioweapons. I hope they can sort out their data safety issues soon. It's a bit of a wake-up call for the whole industry. Stay safe, folks!