
Russia uses artificial intelligence to undermine the unity of Ukraine’s allies


Pro-Russian actors are using artificial intelligence to undermine the unity of Ukraine’s allies, conducting disinformation campaigns on social media. Users on X, TikTok, Facebook, Instagram, and Telegram spread AI-generated videos falsely accusing Ukraine’s closest allies of intending to divide and occupy the country. Although this campaign is unlikely to end European support for Ukraine, Russian hackers are trying to weaken the resolve of Ukraine’s backers at difficult moments, according to the Foundation for Defense of Democracies (FDD).

After the “Coalition of the Willing” meeting on September 4, where participants confirmed support for sending peacekeeping forces to Ukraine, pro-Kremlin sources began spreading lies about plans to divide the country. The Russian hacker group Killnet claimed to have obtained confidential French military documents detailing a plan by France, the UK, Poland, and Romania to divide Ukraine into zones of interest. Although Killnet has no confirmed direct connection to Russian intelligence, its actions align with Russia’s strategic goals.

Pro-Russian social media accounts quickly circulated fake maps and videos depicting French leaders, styled to imitate major news agencies. However, a prompt refutation by the French government effectively stopped the spread of the disinformation. France’s Ministry for Europe and Foreign Affairs pointed out the lack of official markings on the map and numerous spelling mistakes, confirming it was a forgery.

This case demonstrates how artificial intelligence makes propaganda easier and faster to create and spread, allowing Russia to conduct coordinated campaigns to influence the attitudes and behavior of Ukraine’s allies. With AI, Moscow can tailor disinformation to individual countries while maintaining a consistent overall narrative.

Experts note that governments can effectively neutralize such threats through rapid response and coordination. Taiwan provides an example: upon detecting a foreign influence operation, the government publishes corrective materials within two hours. While deepfakes cannot be prevented entirely, prompt counter-messaging helps minimize the harm and maintain allies’ trust.
