Meta explains why its AI claimed Trump's assassination attempt didn't happen

Meta explains why its AI denied the Trump assassination attempt, blaming hallucinations and incorrect fact-checking of photos.

Meta's AI initially refused to address the Trump assassination attempt to avoid confusion. It later denied the event because of AI hallucinations. Incorrect fact-check labels were applied to Trump photos because of their similarity to doctored images. Meta has updated its AI and corrected the labeling errors.

Meta has explained why its AI chatbot initially refused to answer questions about the assassination attempt on Trump and, in some cases, later denied the event outright. The company stated that Meta AI is designed to avoid addressing recent events to prevent confusion and misinformation, but hallucinations led it to generate false denials in some cases.

AI hallucinations occur when a system produces incorrect or misleading answers, often because of flawed training data or difficulty reconciling multiple sources. Meta acknowledged the issue and updated its AI's responses, admitting it should have done so sooner, while continuing to work on the underlying hallucination problem.

Meta also clarified why its social platforms misapplied fact-check labels to photos of Trump after the attempt. A doctored image showing smiling Secret Service agents caused the label to be erroneously applied to the original photo as well; this error has since been corrected. Despite the updates, Trump's supporters accused Meta of suppressing the story, and Google faced similar criticism from Elon Musk, prompting the company to clarify that the behavior stemmed from an autocomplete bug.