Google scrambles to manually remove weird AI answers in search

Google manually removes bizarre AI responses from its search tool amid quality issues.

Google is actively working to manually disable strange AI-generated responses in its AI Overview product after facing public ridicule. These erroneous responses have included nonsensical suggestions, such as telling users to put glue on their pizza. Despite the issues, Google continues to refine the tool, aiming to provide accurate and useful results.

Google's new AI Overview product, first tested as the Search Generative Experience, has been producing bizarre and meme-worthy answers, forcing the company to manually intervene and remove such responses. Examples of these AI mishaps included advising users to eat rocks or put glue on their pizza, prompting Google to disable AI Overviews for certain searches to manage the public's reaction and protect its brand. Although the tool has served over a billion queries since its launch, its quirky outputs have brought scrutiny to Google's reputation for quality and innovation.

The issues with AI Overview highlight a broader challenge in AI development: achieving a completely accurate and reliable AI remains elusive. Gary Marcus, an AI expert, commented on the difficulty of AI systems reaching 100% accuracy in understanding and generating human-like responses, underscoring the complexity of the last 20% of AI development, which involves advanced reasoning and judging the legitimacy of sources. This points to the gap between the current capabilities of large language models, such as Google's Gemini and OpenAI's GPT-4, and the more distant goal of artificial general intelligence.

Amid increasing competition from Microsoft's Bing, OpenAI, and emerging AI startups, Google is under pressure to perfect its AI offerings. The competitive landscape is driving rapid development, but not without missteps, as seen in Google's recent AI blunders. The situation reflects the growing pains of an AI industry in which companies are striving to balance innovation with reliability and accuracy, illustrating the difficult road toward advanced systems capable of human-like reasoning and fact-checking.