Hugging Face says it detected ‘unauthorized access’ to its AI model hosting platform

Hugging Face detected unauthorized access to its AI model hosting platform, Spaces.

AI startup Hugging Face announced that it detected unauthorized access to its Spaces platform. The company has revoked tokens that may have been compromised and advised affected users to refresh their credentials. Investigations with cybersecurity experts and law enforcement are ongoing as the company works to harden the platform.

Hugging Face, a prominent AI startup, announced that it detected unauthorized access to Spaces, its platform for creating, sharing, and hosting AI models. The intrusion may have exposed Spaces secrets, the private credentials used to access protected resources. Following the detection, the company revoked a number of tokens to prevent further unauthorized access and has been notifying affected users, urging them to refresh their keys or tokens and recommending a switch to fine-grained access tokens as a precaution.

The exact scope of the impact remains unclear, though Hugging Face is collaborating with external cybersecurity forensic experts to thoroughly investigate the incident and review its security protocols. This includes reporting the breach to appropriate law enforcement agencies and data protection authorities, underscoring the severity of the intrusion. The company has expressed regret for the inconvenience caused to its users and is committed to using this incident as an opportunity to strengthen the security across its entire infrastructure.

This incident highlights broader security challenges facing Hugging Face, whose practices have drawn scrutiny before. Security researchers have previously disclosed vulnerabilities and potential exploits that could allow attackers to inject malicious code or publish sabotaged AI models on the platform. In response, Hugging Face has partnered with cloud security firm Wiz to expand vulnerability scanning and tighten its defenses, part of an ongoing effort to secure an increasingly targeted AI and machine learning sector.