OpenAI has blocked a number of ChatGPT accounts linked to state-sponsored and cybercriminal groups from Russia, China, Iran, and other countries that were using the AI for malicious purposes. The company announced the move in a report published on June 3.
One example involved a Russian-speaking group that used ChatGPT to develop Windows malware, refine its code, and disguise the program as the legitimate application Crosshair X in order to infect computers and steal credentials, cookies, and browser tokens. The stolen data was then exfiltrated via Telegram.
These actors operated with a high level of operational security (OPSEC): each temporary account was used for a single prompt that made one incremental improvement to the malware before the account was abandoned. Their tooling included Go and PowerShell code, the Windows ShellExecuteW API call, DLL side-loading, Base64 string obfuscation, and SOCKS5 proxies to mask their IP addresses; the last two techniques are sketched below.
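To make those last two techniques concrete, here is a minimal, benign Go sketch of what Base64 string obfuscation and SOCKS5 proxying typically look like in practice. The proxy address, the encoded hostname, and the overall program structure are illustrative assumptions for this article, not code recovered from the campaign.

```go
// Illustrative sketch only: decodes a Base64-obfuscated hostname and sends an
// HTTP request through a SOCKS5 proxy. All addresses are placeholders.
package main

import (
	"encoding/base64"
	"fmt"
	"net/http"

	"golang.org/x/net/proxy"
)

func main() {
	// Base64 obfuscation: the destination never appears as a plaintext
	// string in the compiled binary, which defeats naive string scanning.
	encoded := "ZXhhbXBsZS5jb20=" // decodes to "example.com" (placeholder)
	host, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		panic(err)
	}

	// SOCKS5 proxying: outbound traffic exits through the proxy, so the
	// destination server never sees the client's real IP address.
	dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
	if err != nil {
		panic(err)
	}
	client := &http.Client{
		Transport: &http.Transport{Dial: dialer.Dial},
	}

	resp, err := client.Get("https://" + string(host))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Both patterns are also common in legitimate software, which is precisely what makes them useful for evasion: on their own they rarely trigger alarms.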
OpenAI also detected activity from two Chinese APT groups, APT5 and APT15, which used ChatGPT to research satellite technologies, write scripts, administer Linux systems, develop Android apps, and build bots for social media platforms, for example to automate likes on TikTok, X (Twitter), and Instagram.
In another case, the Iranian operation Storm-2035 used ChatGPT to generate political comments in English and Spanish in support of Palestinian, Scottish, and Irish causes. The Chinese influence campaign Sneer Review generated Facebook and Reddit posts on global politics in English, Chinese, and Urdu.
In total, OpenAI identified nine threat actor groups misusing its models. These actors created fake content, automated social media attacks, and even assisted in fraudulent recruitment schemes that charged victims “entry fees.”
The company emphasized that ChatGPT and other large language models can be used both ethically and maliciously. In response, OpenAI is ramping up monitoring of user activity and working with partners to detect and block malicious campaigns, efforts that target not only hacking attacks but also manipulative information operations on geopolitical topics.