Google plans to sign the EU's AI code of practice

Google will sign the EU AI code; Meta refuses, questioning the AI Act's impact.

Google has decided to sign the European Union's code of practice for artificial intelligence, a framework intended to help developers comply with the AI Act. Meta, in contrast, has declined to sign, criticizing the EU's approach as regulatory overreach that could hinder AI progress. Google voiced its own reservations, particularly about the code's implications for copyright law and competitiveness in Europe. By signing, AI companies commit to transparency about their tools, to not training on pirated content, and to honoring content owners' requests to exclude their works from training datasets.

Google has announced its intention to sign the European Union's general-purpose AI code of practice, a voluntary framework designed to help AI developers comply with the newly adopted AI Act. The Act's rules for providers of general-purpose AI models with systemic risk take effect on August 2, and companies such as Google, Anthropic, Meta, and OpenAI fall within their scope, with a two-year period to reach full compliance. Google made its announcement ahead of enforcement, signaling its commitment despite some lingering concerns.

In a statement from Kent Walker, Google's president of global affairs, the company said that although the finalized code of practice had improved over initial proposals, it still has concerns about the Act, particularly its impact on AI development and deployment in Europe. Walker cautioned that departures from established EU copyright law, slow approval processes, and requirements that expose trade secrets could hamper innovation and damage Europe's competitiveness. That position contrasts with Meta's: the company earlier refused to sign the code, arguing that the EU's overreach would lead Europe astray in the AI sector.

The EU's AI Act introduces risk-based regulation. It outright prohibits 'unacceptable risk' uses, such as cognitive behavioral manipulation and social scoring, and designates 'high-risk' applications, including biometrics, facial recognition, education, and employment, whose developers must meet quality and risk-management obligations. Under the code of practice, companies must keep documentation about their AI tools up to date, refrain from training on pirated content, and comply with content owners' requests not to use their works in datasets.

Walker emphasized that AI models should support innovation without undermining existing law or intellectual property rights. The EU's compliance framework underscores the importance of responsibly managing AI tools, which could disrupt multiple sectors if left ungoverned. To that end, the AI Act requires registration of AI systems and rigorous adherence to its guidelines, which are intended to prevent misuse and encourage safer technological practices.

Meta's contrasting decision may fuel further debate over AI regulation and influence how other tech companies respond. The split is especially notable given the industry's varied reactions to the regulatory landscape around generative AI systems, which remain a growing area of focus and development.

Sources: Google Blog, TechCrunch, European Union