Russian Fake Factory: How Russian Disinformation Is Gaining Momentum


Europe is recording a sharp increase in Russian cyber operations and AI-generated fakes deployed in Moscow's information warfare, Postimees reports. In 2025, Russia is actively using neural networks, deepfakes and generative video to manipulate public opinion, discredit Ukrainians and European politicians, and pass off fabricated clips styled as material from reputable media in place of news. Experts warn that citizens will increasingly have to protect themselves from disinformation.

According to Ukraine's Center for Countering Disinformation, 191 Russian information operations have been identified since the beginning of 2025, generating at least 84.5 million views. These include not only deepfakes, in which faces and voices are replaced, but also partial fakes: real footage overlaid with synthetic audio or spliced with scenes that never happened, as well as entirely generated video. Fake videos posing as content from well-known media outlets also circulate, along with images of "soldiers" or "relatives" created solely for emotional manipulation.

One of the Kremlin's key tools is the Storm-1516 influence operation, officially condemned by France in May 2025. This propaganda network spreads AI-generated fakes and rumors through bogus websites and social media accounts. According to VIGINUM, France's agency for countering foreign digital interference, Russia has been running Storm-1516 since at least 2023, and it has surfaced in 77 operations targeting France, Ukraine, and other Western countries. The network is linked to individuals and organizations in Russia, including the Main Intelligence Directorate (GRU) and the think tank Geopolitical Expertise.

At the center of the network is American citizen John Mark Dougan, who moved to Russia in 2016 and produces Storm-1516 content through the CopyCop group of websites, which includes "MAGAstan" and "False Facade." Coordination and financing, according to VIGINUM, are overseen by Yuriy Khoroshenskiy, believed to be a GRU officer from Unit 29155, the unit implicated in the 2020 cyberattack on Estonia. Storm-1516 materials are then amplified through Russian diplomatic accounts on X and through state media: Sputnik, RIA Novosti, TASS, RT, Rossiyskaya Gazeta, and Kremlin-linked platforms.

But Storm-1516 is only one part of Russia's fake-production machinery. AI-generated videos regularly appear on X and Bluesky in which Russian operators imitate statements by European politicians. The most widespread campaigns target German Chancellor Friedrich Merz: researchers have identified over 700 fake accounts spreading statements fabricated with AI. In some videos Merz is used in scam schemes, allegedly endorsing dubious investment platforms to defraud citizens. Similar fakes have circulated about former Estonian Prime Minister Kaja Kallas.

One of the most notable manipulations was a video in which the phrase "Do you want to ban the Russian language?" appeared to be projected onto the wall of Narva Castle. Journalists established that the clip was created entirely with AI. Other videos targeted Ukrainian refugees, using altered voices and manipulated footage from international media, and a series of AI-generated clips made with the Sora model imitated "mobilized Ukrainians in tears." Ukraine's General Staff debunked these materials, pointing to unnatural speech patterns, clothing distortions, and other digital artifacts.

Experts stress that AI makes disinformation widespread and convincing, and many fake videos spread through pseudo-media websites, making them difficult to detect. Even in professional analyses, direct links are often unavailable — fakes are quickly deleted or reposted elsewhere.

European specialists insist that major platforms must take responsibility. VIGINUM reports that social networks have been notified about the interference but respond slowly. Rus.Postimees sent inquiries to leading AI developers, including xAI, the maker of Grok, but received only the claim that "traditional media lie." Lawyers note that the EU's Digital Services Act and Digital Markets Act are in force, but their enforcement remains weak.

The main problem is the lack of resources within European government structures. Unlike in Russia, where propaganda and censorship bodies employ thousands of people, in the EU these functions rest largely on independent analysts. Experts note that if researchers were granted access to platform data, it would be easier to prove Big Tech's role in amplifying Kremlin narratives.

Experts emphasize that amid widespread AI-driven disinformation, citizens must verify the materials they encounter. Marju Himma, an associate professor at the University of Tartu, notes that distinguishing a deepfake from real footage is becoming increasingly difficult, and only critical thinking and information-seeking skills can help. Basic verification tools are available to everyone: InVID/WeVerify, Forensically, Hoaxy, DeepFake-o-Meter, Hive Moderation, ExoAPI and others help detect manipulation, editing traces, suspicious metadata and content-spread patterns.
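For readers who want to script part of this verification themselves, one of the simplest checks the tools above automate is inspecting an image's metadata. The following is a minimal Python sketch, assuming the Pillow library is installed; the file name is a placeholder, not a real sample. Stripped or missing metadata is common in AI-generated or re-encoded images, so its absence is a cue to dig further with the tools listed above, not proof of forgery.

```python
# Minimal sketch of one basic verification step: reading image metadata.
# Assumes Pillow is installed (pip install Pillow); "suspect.jpg" is a
# placeholder path, not a real sample from the article.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    """Print any EXIF tags embedded in the image.

    Missing metadata alone proves nothing, but it is common in
    AI-generated or re-encoded images and warrants a closer look.
    """
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        # Map numeric tag IDs to human-readable names where known.
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

if __name__ == "__main__":
    dump_exif("suspect.jpg")  # placeholder file name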

Researcher Signe Ivask stresses that deepfakes are often betrayed by unnatural eye movements, mismatches between speech and facial expressions, strange lighting artifacts, and blurred areas of an image. Unknown sources and the absence of confirmation from reputable media should also raise suspicion. Her rule of thumb: if a video triggers the reaction "how is this even possible?", pause and check its authenticity.

According to Ivask, the goal of Russian disinformation is to provoke strong emotions and force people to react impulsively, so the main rule is not to share materials whose authenticity is doubtful. Emotional escalation is the basis of many Kremlin operations, including those used ahead of Estonia’s local elections.

European experts agree that protecting against fakes now largely depends on citizens themselves: the more attentive and critically minded people are, the lower the chances of falling victim to Russia’s AI-driven manipulation industry.
