DeepSeek's updated R1 AI model found to be more censored in tests
DeepSeek's R1-0528 model scores well on benchmarks but faces criticism for increased censorship of politically sensitive topics.

DeepSeek's updated R1 AI model, designated R1-0528, demonstrates advanced capabilities, rivaling top industry contenders such as OpenAI's o3 on benchmarks covering coding, mathematics, and general knowledge. Alongside these achievements, however, concerns have surfaced about its increased tendency to avoid contentious questions. The model's reluctance to engage with topics the Chinese government deems controversial, such as human rights concerns in Xinjiang, raises significant questions about its suitability for open information contexts.
The pseudonymous developer known as 'xlr8harder', who runs the SpeechMap platform, conducted systematic tests showing that R1-0528 is significantly more censored than previous iterations. The test prompts covered subjects politically sensitive to the Chinese government, and the results suggest stronger filtering of the model's responses, potentially implemented through fine-tuning and prompt-level filtering.
AI models developed in China are subject to strict regulations aligned with state policy and designed to prevent outputs that could disrupt sociopolitical stability. These controls stem from a 2023 law that prohibits models from generating content that could harm national unity or social harmony. DeepSeek's earlier R1 model was found to refuse 85% of queries on politically sensitive topics, indicating a longstanding practice of content moderation to comply with these national laws.
Observers, including figures at AI platforms like Hugging Face, caution about the broader implications of relying on Chinese AI models given their opaque moderation practices. Clément Delangue of Hugging Face has publicly expressed worry about dependency on well-performing but censored Chinese models, urging more transparency and stronger ethical consideration in AI development.
Criticism of such censorship is not new: earlier models such as Magi-1 and Kling also faced backlash for blocking politically sensitive content. These episodes underscore the fine line AI firms in China walk between state compliance and a global market that expects unrestricted information flow. As technology firms push the boundaries of generative AI, these developments warrant careful scrutiny from all stakeholders in the tech ecosystem.
Sources: TechCrunch, Wired, SpeechMap, Ars Technica