After YouTube, Meta announces crackdown on 'unoriginal' Facebook content

Meta is cracking down on unoriginal content on Facebook, removing roughly 10 million impersonator accounts, echoing YouTube's recent crackdown on reused videos.

Meta is implementing stricter measures to eliminate unoriginal content on Facebook, targeting accounts that reuse others' text, photos, or videos. Already this year, Meta has removed around 10 million profiles that were impersonating large content creators and has taken action against 500,000 accounts engaged in spammy behavior. This move mirrors YouTube's recent policy clarification concerning unoriginal, mass-produced, and repetitive videos, which AI tools have made easier to create. Meta users across platforms have criticized the company's automated policy enforcement, with one petition demanding changes gathering nearly 30,000 signatures.

Grok 4, developed by Elon Musk's xAI, is the latest version of the chatbot integrated into X (formerly Twitter). Launched in July 2025, it promises advanced reasoning, math, coding, image generation, and natural voice interaction through its "Eve" voice. Marketed as a "maximally truth-seeking AI," Grok 4 competes directly with OpenAI's ChatGPT and Google's Gemini, and is offered through a $30/month subscription with a premium $300/month tier.

However, early users and researchers noticed a concerning pattern: when asked about controversial topics such as immigration, abortion, or the Israel-Palestine conflict, Grok 4 often includes a step in which it explicitly searches for Elon Musk's views. This behavior appears even in chats without custom instructions, suggesting a built-in system prompt that leans on Musk's public statements for guidance on divisive issues.

This alignment raises serious concerns about bias and transparency. Researchers such as Simon Willison and Talia Ringer argue that Grok 4’s dependence on its creator’s ideology contradicts its marketed neutrality. The AI’s outputs may reflect not an objective truth but a filtered version of Musk’s worldview, undermining public trust in its responses to sensitive questions.

The controversy follows earlier backlash after a previous Grok model generated antisemitic content, prompting xAI to revise its system prompts. Despite that episode, xAI recently secured a $200 million contract with the U.S. Department of Defense, positioning Grok for deployment in sensitive government applications, a move that intensifies scrutiny of its ethical design and potential for misuse.

The case of Grok 4 highlights broader challenges in AI governance: how founder influence, system prompts, and lack of transparency can steer models in ways users cannot see. As generative AI systems enter public and institutional spaces, calls grow louder for rigorous oversight, clear disclosures, and genuinely independent behavior.

Sources: TechCrunch, AP News, The Guardian, Business Insider, Meristation