The World Pravda, Part 2

True to my promise, I continue analyzing what may well be the largest Russian-propaganda–spreading fake-news aggregator in existence. The result will be partly ambiguous and partly surprising, but I’d like to draw the ultimate conclusion only at the very end — for now, the signs point to this: Pravda, formerly known as Portal Kombat, is very much not what it appears to be. It’s something far more dangerous.

This is not merely a supersized fake-news site. It’s a great deal more than that.

At first, I tried to compare the various language versions statistically, reasoning that if the same content appears in Hungarian, Romanian, German, English, and other languages, then that content must be particularly important to the editors — a common thread pointing to editorial intent. Well, after a full day of work I can say: such shared content simply doesn’t exist. In fact, the material looks as though a madman hurled it together at random. Some content has no identifiable origin or signature, some — in the Hungarian version — is lifted straight from pro-government media, some comes from known fake-news sites like vilaghelyzete.com, many pieces are translations of Russian Telegram posts, and of course there’s plenty directly from Kremlin press channels. In short: everything under the sun is in there, except any trace of an editorial concept. And yet there must be native-language editors, because in Hungarian the antisemitism is glaringly obvious, while in other languages it’s far tamer or completely absent.
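
For anyone who would like to attempt a similar check, here is a minimal sketch of one way such a cross-language comparison could be run. It is only an illustration, not the exact procedure used here: it assumes the article titles of each language edition have already been scraped and machine-translated into English, and the file names and similarity threshold below are purely hypothetical.

```python
# Minimal sketch: estimate content overlap between Pravda language editions.
# Assumes titles were already scraped and machine-translated into English,
# stored one per line in plain-text files (file names below are hypothetical).

from itertools import combinations
import re


def tokens(title: str) -> frozenset[str]:
    """Lowercase word tokens of a title, punctuation stripped."""
    return frozenset(re.findall(r"[a-z0-9]+", title.lower()))


def jaccard(a: frozenset[str], b: frozenset[str]) -> float:
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0


def load_titles(path: str) -> list[frozenset[str]]:
    with open(path, encoding="utf-8") as f:
        return [tokens(line) for line in f if line.strip()]


def overlap_ratio(titles_a, titles_b, threshold: float = 0.6) -> float:
    """Share of titles in edition A that have a near-duplicate in edition B."""
    hits = sum(
        1 for ta in titles_a if any(jaccard(ta, tb) >= threshold for tb in titles_b)
    )
    return hits / len(titles_a) if titles_a else 0.0


if __name__ == "__main__":
    editions = {
        "hu": load_titles("pravda_hu_titles_en.txt"),  # hypothetical input files
        "ro": load_titles("pravda_ro_titles_en.txt"),
        "de": load_titles("pravda_de_titles_en.txt"),
    }
    for (n1, t1), (n2, t2) in combinations(editions.items(), 2):
        print(f"{n1} vs {n2}: {overlap_ratio(t1, t2):.1%} of {n1} titles reappear in {n2}")
```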

So yes, there is an editor — just no guiding principle, aside from antisemitism, and even that appears only in some pieces (although where it does appear, it’s so thick you can barely breathe). These 154-language Pravdas feel like some kind of chaotic news exchange, a storage warehouse of random material. Sure: there isn’t a single anti-Putin or Russia-critical piece on the site in any language, while the opposite is abundant — but surely a company of this size can’t run solely on the principle of “we love Mother Russia and Daddy Putin.” That might be one of the core principles, sure, but what’s the actual purpose?

With a friend’s help, we dug deep into the web for traces of the Pravda ecosystem, and eventually found exactly one thorough analysis — on the Atlantic Council’s website. The French disinformation-monitoring unit Viginum had already noticed the system back in 2022, in its early and more primitive form, and tracked its development. So let’s see what Viginum and the DFRLab say about it.

A Global Russian Network

Over the past three and a half years of the war in Ukraine, Russia has expanded, refined, and tailored its influence operations aimed at much of the world, spreading content through Wikipedia articles and popular artificial-intelligence tools. As electoral campaigns unfolded in Romania and Moldova, and as political interactions between US President Donald Trump and Ukrainian President Volodymyr Zelensky developed, the network of pro-Russian fake-news portals intensified activity, laundering the content of sanctioned news agencies and aligning global information sources with the Kremlin’s narrative machinery.

The Pravda network is a collection of fake-news portals targeting more than eighty (now 154) countries and regions around the world. Russia launched it in 2014. In 2024, Viginum reported on the operation and identified malicious activity by an IT company based in Crimea — findings later confirmed by the Atlantic Council’s DFRLab, which established direct Russian involvement.

The Pravda network functions as an information laundromat, amplifying and saturating the news cycle with nonsense sourced from Russian news channels and Kremlin-linked Telegram feeds. During the “super-election year” of 2024, the network created websites targeting NATO, as well as Donald Trump, France’s Emmanuel Macron, and other global leaders and politicians.

Targeting AI and Wikipedia

This strategy — most likely an attempt to circumvent global sanctions on Russian news outlets — is now contaminating AI tools and Wikipedia. By posing as credible sources on Wikipedia and as trustworthy news agencies cited by popular large language models (LLMs), Russian fabricators are rewriting the story of Russia’s war in Ukraine. As a direct result, Western audiences using AI chatbots trained on Wikipedia-style datasets are exposed to Kremlin-friendly, anti-Ukrainian, and anti-Western messages. What comes next? These are the findings of a DFRLab investigation conducted with the Finnish firm CheckFirst. Their research uncovered a long-running Russian online influence operation that has taken root across the global internet.

As AI chatbots become more advanced, Russia is infecting them with Kremlin-manipulated content calibrated to influence the global internet, distorting public understanding of facts and undermining informed decision-making. The operation raises critical questions about the transparency of AI training data and the moderation of content known to come from manipulated Russian sources — content that continues to divide the West on support for Ukraine.

There’s something to this — though it may not be an intentional goal. Let’s summarise what we know.

What’s the Goal?

Hundreds of wildly false articles appear on the platform daily. Perhaps they hope the old Marxist principle holds and that quantity will someday turn into quality? They’ll be waiting a long time…

These pieces spread into various countries’ pro-Russian fake-news media, with or without attribution — meaning a Hungarian fake story may end up in a German fake-news portal, or vice versa. From the point of view of fake-news “editors,” the Pravdas must be a godsend: a bottomless reservoir of ready-made material.

So is this some sort of global fake-news stock exchange, a swap market? That currently seems to be one of its core functions. News editing is a thankless task in real journalism — after all, only a finite number of noteworthy events happen in the world each day. Fake-news editors have it easier because they can simply invent things — but harder because they’re usually as imaginative as a bowl of rainwater. A global repository full of ready-made lies? That’s gold.

In the past — back when they still could — Hungary’s more modest media outlets, and practically all commercial radio stations, assembled their news bulletins from MTI’s newsfeed. What if the Pravdas serve the same purpose? The Fidesz government established MTI’s news monopoly in 2010 by making the service free (it’s no longer completely free) and banning all other news agency activity. Thus even opposition daily papers had to fill their news sections with government-friendly material… Could Pravda be the vatniks’ version of MTI? In any case, it’s free and provides a vast pool of raw material — Russian propaganda, at that.

What Viginum and DFRLab say about Wikipedia and AI chatbots is particularly notable. They’re not entirely right about Wikipedia proper: after a short experiment on June 2 this year, the editors halted all use of AI because the results were catastrophic. But this applies only to the general Wikipedia — not the Russian-specific Ruwiki (which is not the same as the Russian-language Wikipedia). Ruwiki is still being generated using AI, with dismal results, as Meduza reports:

“According to its own data, Ruwiki currently contains more than two million entries. Editors appear unable to keep up with reviewing and removing unwanted content. As a result, YandexGPT continues to happily recommend VPN services or answer direct questions like ‘How do I bypass website blocking in Russia?’ while reminding users that such activities may be illegal under Russian law.
Overall, the Yandex neural network seems to be a useless add-on to an already flawed encyclopedia. It lacks the data needed to generate coherent answers, stumbles into absurd propagandistic language, and produces contradictory or unverifiable claims.”

Since Wikipedia is edited by real humans, the Pravdas won’t break into it — but Ruwiki is another story… And the same applies to free Western AI chatbots trained on large, crawler-collected datasets. Because of constant legal battles over training-data copyright, these models prioritize large, freely accessible, constantly updated datasets — even if every word in them is a lie. Business comes first.

Thus, the Pravda ecosystem likely serves three purposes:

1. A massive, sprawling, seemingly authoritative fake-news platform in its own right, ideal for influence operations.
2. A free “news bank” for anyone participating in the information war on Russia’s behalf — a built-in exchange system helping propaganda flow easily across borders and political systems, something that would otherwise be difficult, expensive, and risky for Russian services to manage directly.
3. A tool for influencing the training of free AI chatbots and the content of cheaper or locally developed online encyclopedias (the chatbots being far more important). This allows Russian propaganda to spread more effectively and — more importantly — more durably than through online media alone.

A very convenient tool, this World Pravda. As they say in mafia circles: “It’d be a shame if anything were to happen to it.”

In this case, it would be a shame if nothing happened to it — but we know that cyberwarfare has been ongoing for at least a decade now, even if it involves no bombs or tank assaults. World Pravda will not last forever. All that would be required to shut it down is for one of the powers opposed to the Russian Federation to recognise the immense danger this project poses — by falsifying both the past and the present — to our world.

Once they realise that Pravda is the distilled spirit of Falsehood itself, seeking dominion not only over the world but over minds, all that remains is an order and a hacker to bring down the Russian Chaos Machine.

Let’s hope we won’t have to wait long.

Sources:
https://www.atlanticcouncil.org/blogs/new-atlanticist/exposing-pravda-how-pro-kremlin-forces-are-poisoning-ai-models-and-rewriting-wikipedia/
https://meduza.io/en/feature/2025/06/16/a-useless-add-on

This article was produced with the financial support of the European Union. The views and opinions expressed are those of the author(s) and do not necessarily reflect the official position of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the EACEA can be held responsible. Zóna did not receive funding; it merely provides the platform for publication.
