The DFRLab reflects on 2025

The DFRLab’s 2025 investigations expose global attempts to undermine democracies, erode public trust, and weaponize emerging technologies.

BANNER: A woman takes a picture with her smartphone as she casts her ballot at a polling station during presidential elections in Warsaw, Poland, on May 18, 2025. (Source: Aleksander Kalka/NurPhoto via Reuters)

It has been a year of challenges and turbulence, but the work continues. As we close out 2025, it’s important to take a moment to recognize the achievements of the DFRLab team, whose rigorous investigations shed light on various attempts around the globe to undermine democracies, erode public trust, and weaponize emerging technologies. This year has underscored the necessity and impact of our community’s work.

In 2025, the DFRLab, strengthened by our partnerships, helped uncover Russian foreign influence operations aimed at weakening democracies, exemplified by the attempts to undermine Moldova’s elections, Ukraine’s resistance, and Georgia’s opposition.

In countering foreign influence, sanctions have become a tool of choice. Yet the DFRLab repeatedly documented how sanctioned actors evade these restrictions by turning to digital proxies to launder their narratives, exemplifying the challenges of enforcing digital policy.

Developments in artificial intelligence (AI) have accelerated attempts to both persuade and confuse the public, from AI slop targeting the Canadian election to Grok’s contradictory and false fact-checks of the Israel-Iran conflict.

These highlights reflect key themes that defined much of our 2025 work, though they represent only a portion of it.

Elections

From Canada to Moldova and Romania, the DFRLab documented the myriad ways in which threat actors sought to undermine election integrity.

In Moldova, hybrid tactics combining online manipulation with offline financial incentives showcased how electoral influence operations now converge across the digital and physical domains. The DFRLab participated in the FIMI Defenders for Election Integrity project developed by the FIMI-ISAC, which mapped the landscape of hybrid attacks Moldova faced. As one of Europe’s most contested information environments, Moldova offers critical insights into how democracies absorb and deflect foreign manipulation.

This year, we saw a spate of newly created media outlets targeting the Moldovan electorate. In several instances, we attributed these faux media websites to Russian actors. For example, using website forensics, the DFRLab and GLOBSEC linked REST Media to the sanctioned Russian actor Rybar, illustrating how threat actors regenerate assets to evade sanctions.

Poland’s elections also saw sanctioned actors bypass legal restrictions to shape narratives during the campaign cycle, this time relying on Belarusian state media assets.

Canada’s federal election faced similar challenges, as an avalanche of AI-slop YouTube channels impersonated credible media outlets to deceive the public. The campaign also contended with the unchecked, pervasive use of AI more broadly: from deepfake videos of Prime Minister Mark Carney to bot accounts promoting conspiracy narratives about him, Canada’s election demonstrated how AI can be used to artificially amplify messaging during critical electoral periods. As a member of the Canadian Digital Media Research Network, the DFRLab worked alongside local partners to promote information integrity and expose attempts to mislead the electorate.

With Armenia’s 2026 election approaching, at a time when the country is pursuing a careful diversification of its foreign relations, we documented how pro-Kremlin actors, including the Storm-1516 operation, amplified multilingual propaganda and fabricated narratives in an attempt to undermine confidence in the Armenian government.

The Pravda Network and LLM Grooming

The DFRLab worked with Check First on groundbreaking investigations into Russia’s Pravda Network, visualized in a dashboard and interactive map. We revealed how this website ecosystem operates as a key vector for influence operations across more than 110 countries and regions. The network has published more than 3.7 million articles that repurpose content from Russian news outlets and amplify information from questionable Telegram channels.

The Pravda Network also exemplifies one of the newest tactics in the information manipulation arsenal: large language model (LLM) grooming. Pravda domains infiltrated trusted platforms, appearing as sources on Wikipedia and being cited by AI chatbots, insidiously polluting the information ecosystem.

Pushing Policy Forward

In pursuit of information integrity, the DFRLab and the Democracy + Tech Initiative worked with G7 nations throughout 2025 to inform approaches to AI, foreign information manipulation and interference (FIMI), and digital transnational repression (DTNR).

This year, the DFRLab, in partnership with Global Affairs Canada’s Rapid Response Mechanism, launched the DTNR Detection Academy—a first-of-its-kind initiative to strengthen democratic resilience and counter the growing threat of DTNR. We also published a landmark report, Authoritarian reach and democratic response: A tactical framework to counter and prevent transnational repression, which provides actionable countermeasures to disrupt, deter, and prevent future DTNR operations.

Additionally, our analysis on China’s AI engagement with emerging markets and developing countries contributed to G7 discussions on AI cooperation, which were reflected in the G7 Leaders’ Statement on AI for Prosperity.

Emerging Technology

A throughline in much of our work this year was AI’s emergence as a catalyst for online deception: the cheap accessibility and rapid evolution of AI tools enable malign actors to generate tailored content that often outpaces detection frameworks. To better understand the threat, we examined the evolution of synthetic profile pictures, tracing how these tools progress and what they imply for future influence operations.

We also published a report, Biometrics and digital identity in Africa, which reviews the continent’s expansion of biometric and digital identity systems, now operational in 49 countries and used in electoral processes across 35. While the technology is positioned as a modernization tool, the report examines its risks, documenting how weak legal frameworks, limited oversight, and a growing reliance on foreign vendors have created an ecosystem vulnerable to privacy breaches, state surveillance, and systemic exclusion.

Looking Forward

As we look ahead to 2026, the threats documented this year show no sign of abating, and they will undoubtedly continue to evolve. Yet our work in 2025 demonstrates that rigorous investigation, partnership, and sustained commitment to transparency can expose these operations, inform policymakers, and strengthen democratic resilience.