Experts say that Google's latest report on its AI model lacks essential safety details

Google's AI safety report for Gemini 2.5 Pro lacks detail, raising transparency concerns.

Google's recent technical report on its AI model, Gemini 2.5 Pro, has been criticized for not providing detailed safety information. Experts including Peter Wildeford and Thomas Woodside expressed concern over the lack of transparency and detail, specifically the absence of findings from Google's Frontier Safety Framework. Kevin Bankston noted that this trend mirrors a broader industry pattern of declining transparency. Google says it performs safety testing before release, but those tests are not reflected in its current documentation.

Google’s recent release of the technical report for its AI model, Gemini 2.5 Pro, has drawn criticism from experts for lacking detailed safety information. The report, published weeks after the model’s launch, omits findings from Google’s Frontier Safety Framework (FSF), a protocol designed to identify and mitigate potential harms from advanced AI systems. This omission has raised concerns about the transparency and thoroughness of Google’s safety evaluations.

Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, expressed disappointment in the report’s brevity and delayed release. He emphasized that the lack of detailed safety data makes it challenging to assess whether Google is upholding its public commitments to AI safety. Similarly, Thomas Woodside, co-founder of the Secure AI Project, noted that Google’s last publication of dangerous capability test results was in June 2024, highlighting a pattern of inconsistent safety disclosures.

Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, commented on the broader industry trend of declining transparency. He described the situation as a “race to the bottom” in AI safety, where companies prioritize rapid deployment over comprehensive safety reporting. Bankston’s remarks underscore the growing concern that major AI developers, including Google, may be compromising on safety standards in the pursuit of market leadership.

Google has stated that it conducts internal safety testing and adversarial evaluations before releasing its models. However, the lack of detailed documentation in the Gemini 2.5 Pro report has led to skepticism among researchers and policymakers. The absence of comprehensive safety data hampers independent verification and raises questions about the robustness of Google’s safety protocols.

The situation is further compounded by the absence of a safety report for Gemini 2.5 Flash, a smaller and more efficient model recently announced by Google. While the company has indicated that a report is forthcoming, the delay reinforces concerns about the timeliness and transparency of Google’s safety disclosures. As AI systems become increasingly integrated into various aspects of society, the need for thorough and prompt safety evaluations becomes ever more critical.

Sources: TechCrunch, Google DeepMind