Meta declines to sign the EU's AI code of practice

Meta has refused to sign the EU's AI code of practice, citing legal uncertainties and measures that go beyond the AI Act's scope.

Meta has declined to endorse the European Union's AI code of practice, a voluntary framework meant to help companies comply with the bloc's AI legislation. Joel Kaplan, Meta's chief global affairs officer, argues the code introduces legal uncertainties and extends beyond the AI Act's intended scope. Among other commitments, the code asks signatories not to train AI on pirated content and to keep documentation up to date, requirements Meta criticizes. Several tech giants, Meta among them, have urged the EU to delay the AI Act's rollout, but the EU is holding to its schedule.

Meta has chosen not to sign the European Union's newly introduced code of practice for AI, which is designed to help companies comply with the bloc's upcoming AI regulations. The refusal comes just weeks before those rules become enforceable. Joel Kaplan, the company's chief global affairs officer, said the code introduces legal uncertainties for model developers and imposes requirements that go beyond the scope of the AI Act.

Kaplan elaborated on LinkedIn, arguing that Europe is heading down the wrong path on AI. He warned that the code's requirements could slow the development and deployment of advanced AI models in European markets. The code is voluntary; its stipulations include keeping documentation on AI tools up to date and refraining from training models on pirated content.

Several tech companies, including Alphabet, Microsoft, and Mistral AI, have urged the EU to delay the rollout of its AI legislation, which bans certain "unacceptable risk" uses of AI, such as social scoring, and tightly regulates "high-risk" applications, such as biometric identification. Despite these objections, the EU has held to its timeline, stressing that enforceable standards are needed to ensure responsible use of AI.

Additionally, the EU has released guidelines for providers of general-purpose AI models, giving those with models already on the market a two-year window to comply. The guidelines will affect major AI developers such as OpenAI and Google, which must meet additional obligations for models deemed to pose systemic risk. The rules form part of the EU's broader, risk-based effort to regulate AI comprehensively.

Sources: TechCrunch, Bloomberg