Google is using Anthropic’s Claude to improve its Gemini AI

Google uses Anthropic's Claude to test its Gemini AI for accuracy and safety.

Google is using Anthropic's Claude model to improve its Gemini AI by comparing the two models' outputs. Contractors have been tasked with evaluating responses against criteria such as truthfulness. TechCrunch reports that Google may not have Anthropic's permission to do so, and the comparisons have raised concerns about Gemini's accuracy, especially on sensitive topics.

According to a TechCrunch report, Google is using Anthropic's Claude to improve its Gemini AI model by comparing outputs from the two models. Contractors are given up to 30 minutes per prompt to evaluate each answer against criteria such as truthfulness and verbosity.

In some instances, Claude's responses appeared in Google's internal evaluation tools with explicit identifiers. Claude also showed a notably stronger emphasis on safety, refusing to answer prompts it deemed unsafe, whereas Gemini's responses were in some cases flagged for safety violations.

Google, a major investor in Anthropic, has not confirmed whether it has Anthropic's permission to use Claude in this way. Google DeepMind maintains that Gemini is not being trained on Anthropic's models, and says that comparing model outputs for evaluation is standard industry practice.