Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test
DeepSeek failed Anthropic's bioweapons safety test, sparking major concerns.

Dario Amodei, the CEO of Anthropic, expressed serious concerns over the safety of DeepSeek's AI models, specifically their failure to block the generation of rare, hard-to-find bioweapons information during a safety test. The tests, which Anthropic routinely runs to assess national security risks, found that DeepSeek performed worse than any model the company had previously evaluated.
Amodei acknowledged the technical expertise of DeepSeek's team but urged the company to take AI safety seriously. While he stopped short of calling the current models literally dangerous, he pointed to potential future risks and advocated for strong export controls on chips to China to prevent military advantages.
DeepSeek's rise has been accompanied by safety concerns from other quarters as well: Cisco, for instance, reported a 100% jailbreak success rate against the model in its own safety tests. None of this has halted DeepSeek's expansion, as seen in its integration into AWS and Microsoft platforms, although some government bodies, including the U.S. Navy and the Pentagon, have banned the AI.