Meta says it may stop development of AI systems it deems too risky

Meta may halt development of risky AI systems under its new Frontier AI Framework.

Meta's Frontier AI Framework outlines the conditions under which the company would not release AI systems it deems 'high-risk' or 'critical-risk.' Such systems could aid in cybersecurity and biological attacks, with critical-risk systems potentially causing catastrophic outcomes. Risk assessments rely on expert input rather than empirical tests. Meta aims to balance openness with safety, addressing criticism of its approach and contrasting itself with companies like DeepSeek.

Meta released the Frontier AI Framework, specifying the conditions under which it may withhold AI systems deemed 'high-risk' or 'critical-risk.' High-risk systems could make such attacks easier to carry out, while critical-risk systems could produce catastrophic outcomes that cannot be mitigated.

A system's risk level isn't determined through empirical tests but by internal and external expert reviews, with senior decision-makers guiding the final call. If a system is deemed high-risk, Meta will hold back its release until risks are mitigated; if it is deemed critical-risk, Meta will halt development and apply enhanced security measures.

The framework is partly a response to criticism of Meta's open approach to AI, and it sets the company apart from firms like DeepSeek, whose openly available models ship with minimal safeguards. By weighing benefits against risks, Meta aims to offer AI technology responsibly, preserving its public benefits while keeping risks to an acceptable level.