OpenAI bans Chinese accounts using ChatGPT to edit code for social media surveillance

OpenAI has banned Chinese accounts that used ChatGPT to develop code for a social media surveillance tool.

OpenAI banned several Chinese accounts that used ChatGPT to edit code for an AI social media surveillance tool. The campaign, named Peer Review, aimed to monitor anti-China sentiment on platforms such as X, Facebook, and YouTube. The tool was reportedly based on Meta's Llama model and targeted protests and dissent. The company also highlighted misuse of its models to generate phishing emails and posts critical of dissidents.

OpenAI recently banned the accounts of Chinese users who attempted to use ChatGPT to develop code for an AI social media surveillance tool. The users ran a campaign, named Peer Review, aimed at monitoring anti-China sentiment on social media platforms including X, Facebook, YouTube, and Instagram.

The group operated its ChatGPT accounts during mainland Chinese business hours and relied heavily on manual prompting rather than automation. It used the AI to proofread reports, intended for Chinese embassies and intelligence agencies, on protests in countries including the US, Germany, and the UK.

Part of the surveillance tool's code was allegedly based on Meta's open-source Llama model, and ChatGPT was used to produce documents such as client phishing emails and performance reviews. OpenAI also banned accounts that used the chatbot to create posts critical of dissidents such as Cai Xia and to generate disparaging articles about the US, which were disseminated in Spanish through Latin American media.