Meta says it may stop development of AI systems it deems too risky
Meta says it may halt development of risky AI systems under its new Frontier AI Framework.

Meta has released the Frontier AI Framework, which specifies the conditions under which it may withhold AI systems it deems 'high-risk' or 'critical-risk.' High-risk systems could make attacks easier to carry out, while critical-risk systems could lead to catastrophic outcomes that cannot be mitigated.
A system's risk level is determined not by empirical tests but by reviews from internal and external experts, with final decisions made by senior leadership. If a system is classified as high-risk, Meta will hold back its release until the risks are mitigated; if it is classified as critical-risk, Meta will halt development and apply enhanced security measures.
The framework appears to be partly a response to criticism of Meta's open approach to AI, and it distinguishes the company from firms such as DeepSeek, whose openly available models carry fewer safeguards. By weighing benefits against risks, Meta aims to offer AI technology responsibly, preserving its public benefits while keeping risks in check.