Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test

DeepSeek failed Anthropic's bioweapons safety test, sparking major concerns.

Anthropic CEO Dario Amodei criticized DeepSeek for performing poorly on a crucial bioweapons safety test, noting that the Chinese AI company's model had no safeguards against generating rare and potentially dangerous information. Cisco reported similar failures for DeepSeek, though it found comparable issues with some Meta and OpenAI models. Despite these concerns, DeepSeek's rapid rise continues, including integrations into platforms such as AWS and Microsoft.

Dario Amodei, CEO of Anthropic, expressed serious concerns about the safety of DeepSeek's AI models, specifically their failure to block the generation of rare bioweapons-related information during a safety evaluation. In these tests, which Anthropic routinely runs to assess national security risks, DeepSeek performed worse than any model the company had previously evaluated.

Amodei acknowledged the technical expertise of DeepSeek's team but urged the company to take AI safety seriously. While he stopped short of calling the current models literally dangerous, he pointed to potential future risks and advocated strong export controls on chips to China to prevent the technology from conferring military advantages.

DeepSeek's rise has drawn safety concerns from other quarters as well: Cisco reported a 100% success rate for jailbreak attempts against the model in its own safety tests. None of this has halted DeepSeek's expansion, with the model now integrated into AWS and Microsoft platforms, although some government organizations, including the U.S. Navy and the Pentagon, have banned its use.