A billboard shows a caricature of Russian President Vladimir Putin pulling the ropes of then-fugitive oligarchs Ilan Shor (R) and Vladimir Plahotniuc (L), as if they are puppets, above words reading "They ask for your vote." The billboard was displayed in Ungheni city on September 24, 2025. (Photo by DANIEL MIHAILESCU/AFP via Getty Images)

How Tech Platforms Allowed Russia Into Moldova: Lessons for the EU and Others

The ballots have been counted, the speeches made, and yet Moldova’s latest parliamentary election left one fight unresolved: the battle for truth online. The Sept. 28 balloting reaffirmed a pro-European majority, with the ruling Party of Action and Solidarity securing the largest share of seats amid record diaspora participation. But what played out across social media before, during, and after the vote exposed something Europe still struggles to confront: how easily disinformation fills the gaps between state regulation and platform indifference.

At its heart, though, Moldova's story, like those elsewhere, is not about censorship or propaganda alone. It's about what happens when no one quite owns the rules of the digital arena.

When parliament revised the Audiovisual Media Services Code in July 2025, the goals sounded simple enough: align national law with European norms, curb disinformation, and promote responsible media. The text expanded the Audiovisual Council’s powers to tackle “false information” and “manipulation.” But it stopped short of explaining what those words actually mean in law. Without clear definitions or consistent standards, enforcement became a guessing game. And even where the law is clear, its reach stops at the country’s borders.

Moldova’s framework only applies to online platforms with a legal presence inside the country, which Meta, Google, and TikTok do not have. Their content moderation runs entirely on internal corporate policy, not Moldovan statute. So while any Moldovan-licensed broadcaster, large or small, can be fined for imbalance under national law, a viral post reaching hundreds of thousands may remain untouched because it falls under “community guidelines” written thousands of kilometers away.

The imbalance leaves regulators frustrated and citizens exposed. When a government cannot act fast enough to stem falsehoods, the temptation grows to regulate more aggressively for quick relief from an identified threat. That's where freedom of speech begins to erode. Moldova's framework was built to protect expression, yet the ambiguity now risks silencing legitimate media out of fear.

Taking Advantage of Ambiguity

That same ambiguity is a gift to those who know how to exploit it. Across Moldova’s digital space, murky financing and foreign interests have found fertile ground.

Investigators have shown that Ilan Shor, an exiled businessman under U.S. sanctions, continued to fund social media advertising for his banned party from abroad. In 2024, researchers traced more than 100 Facebook pages tied to his network. Collectively, they drew hundreds of millions of views and generated real revenue for the platform, apparently more than $200,000. The ads framed protests as spontaneous public uprisings, attacked European integration, and seeded doubt about state institutions. When Meta removed some of them, mirror campaigns quickly reappeared under new names.

By 2025, the same tactics had evolved into something more professional. An outlet called REST Media flooded TikTok, Telegram, and YouTube with anti-EU narratives. Cybersecurity researchers later linked the campaign not to Shor's network but to Rybar, a Russian influence operation known for re-packaging Kremlin messaging through AI-generated voices and translated scripts.

Promo-LEX, Moldova’s leading election-observer group, identified approximately 500 coordinated accounts promoting nearly identical content during just three days of the campaign. These included videos that accumulated more than 1 million views, often boosted by inauthentic engagement. Each click and share fed an invisible economy where dark money buys reach, and the platforms profit from the traffic.

The ‘Commercialization of Deception’

It’s not hard to see why this matters. When false stories spread faster than fact, and when sanctioned figures can still purchase digital megaphones through intermediaries, the result is not pluralism. It’s the commercialization of deception.

Here lies the real paradox. The debate about online regulation is often framed by governments and tech companies as a fight between freedom and control. In reality, the greater threat to free speech is inauthentic speech, content generated or amplified by fake, automated, or paid accounts that simulate public consensus and distort genuine debate.

If every genuine journalist or voter competes with hundreds of coordinated or even automated accounts pretending to be citizens, the marketplace of ideas stops functioning. Protecting expression now means safeguarding authenticity: ensuring that those speaking are who they say they are, and that influence cannot be quietly purchased by hostile interests.

That will require more from the platforms than after-the-fact press releases. They need regional moderation hubs with local-language staff empowered to respond in hours, not weeks. Political advertising must meet strict transparency standards. Who paid? How much? And through which intermediaries? If the funding trail disappears into shell companies or opaque agencies, the ad should not run.

Moldova’s experience also highlights the limits of the EU’s own reach. The Digital Services Act may demand accountability from major tech firms within the Union, but beyond its borders, countries like Moldova remain vulnerable. Democracies on Europe’s edge are effectively beta-testing the future of digital interference. If they fail, that failure will not stop at their borders.

The Moldovan election ultimately held: the system bent but did not break. Voters navigated manipulation, media bias, and fatigue to make a choice, despite weeks of disinformation aimed at eroding public trust and depressing turnout. But resilience should not be the benchmark for democratic success.

As long as algorithms amplify deceit faster than institutions can counter it, Europe’s smaller democracies will continue to fight uphill. The question is whether platforms will keep treating the region as a low-priority market or whether they will concede that their business models now shape its political destiny and take the accompanying responsibility. In the long run, doing so is not just an ethical choice but a strategic one: sustained instability and state backlash threaten the very access and credibility on which their markets depend.

Freedom of speech is not the freedom to deceive. It is the ability for real citizens to speak and be heard without being drowned out by machinery — and manipulation — designed elsewhere. If platforms keep monetizing manipulation, they are not protecting democracy — they are selling it off, click by click.
