OpenAI bans some Chinese, Russian accounts using AI for evil

10/07/2025


OpenAI has banned ChatGPT accounts believed to be linked to Chinese government entities attempting to use AI models to surveil individuals and social media accounts.

In its most recent threat report [PDF] published today, the GenAI giant said that these users usually asked ChatGPT to help design tools for large-scale monitoring and analysis – but stopped short of asking the model to perform the surveillance activities.

“What we saw and banned in those cases was typically threat actors asking ChatGPT to help put together plans or documentation for AI-powered tools, but not then to implement them,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told reporters.

One now-banned user, suspected of using a VPN to access the AI service from China, asked ChatGPT to design promotional materials and project plans for a social media listening tool, described as a “probe,” that could scan X, Facebook, Instagram, Reddit, TikTok, and YouTube for what the user called extremist speech and ethnic, religious, and political content.

This user claimed a government client wanted this scanning tool, but stopped short of using the model to monitor social media. OpenAI said it’s unable to verify if the Chinese government ended up using any such tool.

In two other cases, the company banned one user who asked ChatGPT to identify funding sources for an X account that criticized the Chinese government, and another who asked ChatGPT to identify petition organizers in Mongolia.

In both, we’re told, OpenAI’s models only provided publicly available information – not identities, funding sources, or other sensitive details.

“Cases like these are limited snapshots, but they do give us important insights into how authoritarian regimes might abuse future AI capabilities,” Nimmo said. “They point to something about the direction of travel, even if they also suggest that maybe the destination is some way away.”

Since the company started producing threat reports in February 2024, OpenAI said it has banned more than 40 networks that violated its usage policies. 

Also since that time, the threat groups and individuals attempting to use AI for evil have been employing the models to improve their existing tradecraft, not to develop entirely new cyberattacks or workflows. That still seems to be the case, according to OpenAI execs.

More recently, however, some of the disrupted accounts appear to be using multiple AI models to achieve their nefarious goals.

“One China-linked cluster that we investigated, for example, used ChatGPT to draft phishing lures and then explored another model, DeepSeek, to automate mass targeting,” said Michael Flossman, who leads OpenAI’s threat intelligence team.

Similarly, a set of now-banned, suspected Russian accounts used ChatGPT to generate video prompts for an influence operation dubbed Stop News, but then attempted to use other companies’ AI tools to produce the videos that were later posted on YouTube and TikTok.

OpenAI could not independently confirm which other models this group used.

“We’re seeing adversaries routinely use multiple AI tools, hopping between models for small gains in speed or automation,” Flossman said.

In another example of attempted model abuse originating from Russia, the company banned accounts asking ChatGPT to develop and refine malware, including a remote-access trojan, credential stealers, and features to help malware evade detection.

These accounts appear to be linked with Russian-speaking criminal groups, as the threat intel team saw them posting about their activities in a Telegram channel connected to a specific criminal gang. OpenAI execs declined to attribute the malware-making endeavors to a particular cybercrime crew, but said they have “medium to high confidence on who is behind it.” ®
