Anthropic blocks OpenAI from using its Claude models
Anthropic ends OpenAI's access to Claude due to terms violation, yet allows benchmarking.

Anthropic has revoked OpenAI's access to its Claude family of AI models. The decision follows reports that OpenAI connected Claude to internal tools and used them to evaluate Claude's performance against OpenAI's own models in categories such as coding, writing, and safety. Anthropic considered this a violation of its terms of service, which prohibit using Claude to build competing AI services.
In a statement to TechCrunch, an Anthropic representative said that OpenAI's technical staff had used Anthropic's coding tools ahead of the anticipated launch of GPT-5, a use of Claude explicitly forbidden under Anthropic's commercial terms. Despite withdrawing OpenAI's access, Anthropic said it would continue to allow access solely for benchmarking and safety evaluations. The move signals Anthropic's emphasis on maintaining competitive boundaries while still supporting standard industry practices in AI development.
OpenAI, for its part, expressed disappointment with the decision. A spokesperson remarked that using such tools and approaches is an industry standard, and noted that OpenAI continues to allow Anthropic access to its own APIs even as Anthropic restricts OpenAI's. The exchange underscores the complex and sometimes contentious relationships between AI companies competing for advancement and market share.
The move is not entirely unprecedented. Jared Kaplan, Anthropic's Chief Science Officer, had previously expressed caution about granting competitors access, citing concerns over selling Claude models to OpenAI. That protective stance was also on display when Anthropic cut off access for Windsurf, a company once rumored to be an OpenAI acquisition target that was later acquired by Cognition.
The situation between Anthropic and OpenAI highlights the tensions in the AI industry as companies attempt to balance collaboration with competition. These dynamics become particularly pronounced in areas involving transformative AI technologies where intellectual property and innovation are closely guarded. This case not only emphasizes the importance of adhering to terms of service agreements but also sheds light on how companies navigate complex inter-company relationships and evolving market landscapes.
Sources: Wired, TechCrunch