Google still limits how Gemini answers political questions

Google limits Gemini's political responses; rivals update chatbots.

Google's AI chatbot, Gemini, remains cautious, restricting responses to political queries, including those about elections. In testing, Gemini refused to answer questions about current U.S. leaders or misidentified them, reflecting its struggles with factual accuracy. While rivals like OpenAI embrace broader political discourse, Google maintains its conservative stance, prompting allegations that Gemini's restrictions amount to AI censorship.

Google's AI chatbot, Gemini, is under scrutiny for its cautious approach toward political questions. While competitors like OpenAI have encouraged open discussion even of controversial topics, Google has remained conservative. Recent tests by TechCrunch found Gemini unable to answer basic political queries, such as identifying the current U.S. President and Vice President.

Google announced in March 2024 that Gemini would limit election-related queries, citing potential backlash from incorrect information. This caution persists even after recent major elections, with no plans publicly announced to shift this approach. The errors in answering political questions highlight Gemini's struggles with factual information.

In one highlighted test, Gemini referred to Donald J. Trump as the "former president" and faltered on follow-up queries, apparently confused by Trump's nonconsecutive terms. A Google spokesperson later confirmed efforts to correct such discrepancies, acknowledging the difficulty large language models face in keeping information current, particularly for officeholders serving multiple terms.

TechCrunch's probing prompted partial corrections, with Gemini eventually acknowledging Donald Trump and JD Vance as the sitting President and Vice President. Inconsistency remains, however, as Gemini's responses continue to vacillate between accuracy and refusal.

These limitations have fueled a broader debate over AI censorship. Figures like Marc Andreessen and Elon Musk argue that restrictions such as Google's stifle free expression in chatbots. OpenAI, by contrast, has been vocal about pursuing "intellectual freedom" and objectivity in its models' responses. Anthropic has voiced similar sentiments, aiming for its AI to strike a nuanced balance between refusing harmful requests and answering benign ones.

Sources: The Verge, TechCrunch, OpenAI Blog