FIMI 101: Foreign information manipulation and interference targeting the 2024 US general election

Foreign state and non-state actors are targeting Americans with false and misleading narratives to sow divisions ahead of the US election


Banner: Chairman Mark Warner (D-VA) questions Google parent Alphabet’s global affairs president Kent Walker, Meta’s global affairs president Nick Clegg and Microsoft President Brad Smith on the day they testify before a Senate Intelligence Committee hearing on election threats, on Capitol Hill in Washington DC, September 18, 2024. (Source: Reuters/Anna Rose Layden)

In the months leading up to the US general election on November 5, 2024, multiple US government defense and intelligence agencies have warned that foreign state and non-state actors are actively attempting to interfere with the election. Malign interference efforts are by no means unique to this election: with approximately half the world’s population voting at some point over the course of 2024, concerns about foreign influence operations targeting democratic elections have once again climbed atop the global policy agenda.

This primer is an overview of the techniques used in foreign malign influence campaigns and actions being taken by democratic governments to mitigate their influence. It also describes documented incidents of foreign malign influence and information campaigns targeting US audiences.

Understanding FIMI: Foreign Information Manipulation and Interference

Concerns about foreign malign influence in democratic elections have pushed governments and lawmakers around the world to establish mechanisms to address this threat. One of the first steps was settling on a term to describe the phenomenon, one broad enough to encompass the ever-evolving tactics utilized by malign actors. To this end, the European External Action Service (EEAS) began using the term Foreign Information Manipulation and Interference (FIMI), which has since been adopted by parts of the US government.

According to EEAS, FIMI refers to “a pattern of behavior that threatens or has the potential to negatively impact values, procedures and political processes. Such activity is manipulative in character, conducted in an intentional and coordinated manner. Actors of such activity can be state or non-state actors, including their proxies inside and outside of their own territory.”

Within the context of elections, FIMI tactics include disseminating false or misleading voting guidance, producing content that exploits sociopolitical divisions, and targeting voters and lawmakers to influence their opinions and votes.

Recognizing the significant threats of FIMI around democratic elections, the European Union ramped up its preparations ahead of the June 2024 parliamentary elections to counter disinformation and other forms of malign influence. The EU’s concerns were well placed: coordinated inauthentic campaigns spread disinformation on social media in multiple EU countries, amplifying misleading political content, climate change conspiracy theories, and pro-Russian narratives.

Learning from the experience of EU allies, the US government’s Foreign Malign Influence Center (FMIC), established under the Office of the Director of National Intelligence (ODNI), is leading US efforts to mitigate FIMI and coordinate federal responses. The FMIC adheres to a notification framework to help ensure that disclosures of FIMI across government, as well as with the private sector and the public, are “consistent, well-informed, unbiased, and appropriately coordinated across the Executive Branch.”

The FMIC is not the only federal body coordinating US intelligence activity around FIMI. In its 2024 National Counterintelligence Strategy, the National Counterintelligence and Security Center outlined coordination efforts across the counterintelligence community to “outmaneuver and constrain foreign intelligence entities… protect America’s strategic advantages, and invest in the future to meet tomorrow’s threats.”

The strategy includes nine goals to mitigate the impact of FIMI:

  • Detect, understand, and anticipate foreign intelligence threats, identify opportunities for action, and provide decision advantage.
  • Counter, degrade, and deter foreign intelligence activities and capabilities through coordinated offensive and defensive measures.
  • Combat foreign intelligence cyber activities through proactive, integrated operations.
  • Protect individuals against foreign intelligence targeting and collection, including Americans and others affiliated with the US government at home and abroad and other protected individuals in the United States who may be of high interest to foreign intelligence entities (FIE).
  • Protect democracy from FIE malign influence efforts to safeguard the integrity of and public trust in our democratic institutions and processes.
  • Protect critical technology and US economic security to safeguard our national security and competitive advantage.
  • Protect the nation’s critical infrastructure by increasing understanding and awareness of FIE capabilities and threats, enhancing resilience, denying adversary access, and deterring FIE threats.
  • Reduce risks to key US supply chains from FIE exploitation and compromise.
  • Build counterintelligence (CI) capabilities, partnerships, and resilience to achieve enduring superiority over our FIE adversaries.

Examples of FIMI targeting the US

In joint guidance released in April 2024, the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation, and the ODNI warned about the “latest tactics employed in foreign malign influence operations to shape US policies, decisions, and discourse,” tactics that “could be used to target America’s election infrastructure.” The agencies warned specifically about threats of foreign malign influence in the elections from Russia, Iran, and China.

Russian bots, doppelgangers, and influencers push pro-Kremlin propaganda

In April 2024, the Microsoft Threat Analysis Center (MTAC) documented Russian influence operations laundering anti-Ukraine content targeting US audiences through fake news websites amplifying pro-Kremlin narratives. The operation, involving seventy Russian actors, used traditional and social media and a mix of covert and overt campaigns to undermine US support for the war in Ukraine, in line with the Kremlin’s international propaganda strategy.

Russia has also targeted US audiences with fake information sources as part of the operation known as Doppelganger. As we noted in our analysis published on September 17, 2024:

“Multiple research organizations and news outlets have exposed the Russian doppelgänger effort since 2022, including Sueddeutsche Zeitung, EU DisinfoLab, Qurium, and the DFRLab. At the time, we uncovered more than 2,300 Doppelganger assets on Facebook and Instagram that targeted Germany, France, Italy, Ukraine, Latvia, and the UK with pro-Kremlin narratives. Meta estimated that the influence operation spent $105,000 on advertising to promote the network. Since 2022, Russia significantly increased its foreign influence operations, particularly through the tactics utilized in Doppelganger campaigns, including a recent operation that targeted the Paris 2024 Olympics with attempts to discredit France and President Emmanuel Macron.”

In September 2024, the US Department of Justice issued an affidavit explaining the seizure of 32 internet domains associated with Russian state actors involved in Doppelganger. The affidavit coincided with the beginning of the trial of three members of Black Hammer, a US fringe political organization, who are charged with acting as unregistered agents of the Russian government.

Meanwhile, in July 2024, the DOJ seized two domain names used to create fake accounts and issued a search warrant for information related to 968 X accounts that Russian actors used to build a social media bot farm, allegedly enhanced with generative artificial intelligence (GAI), that spread disinformation in the United States and abroad. According to the DOJ, an individual who worked as a deputy editor-in-chief at Russia’s state broadcaster RT (formerly Russia Today) created the bot farm in coordination with the Russian Federal Security Service (FSB), the successor agency to the Soviet-era Committee for State Security (KGB). The DOJ stated in the announcement that the FSB’s use of US-based domain names to register the bots “violates the International Emergency Economic Powers Act,” in addition to related money laundering violations.

Following the July 13 assassination attempt against Donald Trump, the DFRLab exposed Russian propagandists’ efforts to amplify conspiracy theories related to the incident. A second round of Russian conspiracy theories took place in the hours and days following the arrest of another would-be assassin, Ryan Routh, in mid-September 2024.

Increasingly aggressive – and creative – Iranian malign influence operations

Previous instances of state-sanctioned FIMI targeting US elections are informing how officials respond to it in the 2024 election cycle. For instance, in 2021, the US charged two Iranian hackers for their role in a “cyber-enabled campaign to intimidate and influence American voters, and otherwise undermine voter confidence and sow discord, in connection with the 2020 US presidential election.” The pair attempted to compromise voter registration and information websites, engaged in voter intimidation campaigns, and impersonated members of the Proud Boys group in messages sent to political actors. The incident highlighted how foreign actors take advantage of political polarization to deceive and exacerbate existing divisions among Americans.

Government-led Iranian operations have increased substantially in the lead-up to the 2024 elections. In April 2024, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) sanctioned two companies and four individuals involved in malicious cyber activity on behalf of the Iranian Islamic Revolutionary Guard Corps Cyber Electronic Command (IRGC-CEC). According to Google’s Threat Analysis Group, the hacking group APT 42, which is associated with the IRGC-CEC, has gradually expanded its cyber operations over the years from espionage operations to cyberattacks and information operations. APT 42 has previously targeted accounts associated with the Biden and Trump presidential campaigns in 2020 and is reviving its efforts targeting the US 2024 presidential campaigns.

In July 2024, Director of National Intelligence Avril Haines released a statement about malign actors associated with the Iranian government posing as influencers and activists to influence elections and US attitudes about the war in Gaza. In her statement, Haines reiterated a point she made in testimony to Congress the preceding May. “Iran is becoming increasingly aggressive in their foreign influence efforts, seeking to stoke discord and undermine confidence in our democratic institutions,” she noted.

In August, Donald Trump’s presidential campaign was hacked in an operation that US intelligence agencies confirmed was perpetrated by Iran. The Trump campaign blamed “foreign sources hostile to the United States” for hacking internal communications, including files on potential running mates, which were subsequently leaked to reporters at POLITICO, the Washington Post, and the New York Times. The incident was reminiscent of the Russia-led hack and leak of the Hillary Clinton campaign’s private emails in 2016, which then-presidential candidate Donald Trump publicly encouraged, and raises alarms about direct interference in US political campaigns to influence election results.

Following the hack, former CISA director Chris Krebs warned that officials should “expect continued efforts to stoke fires in society and go after election systems.” Later that same month, MTAC released a threat intelligence report exposing efforts by Iranian actors to create fake news sites and impersonate activists in order to influence American voters ahead of the elections. The report also anticipated that Iran will continue its efforts to target institutions and candidates and amplify divisive content. The US Department of Justice is preparing criminal charges related to the case.

China and the return of “Spamouflage”

In April 2024, MTAC reported that actors associated with the Chinese Communist Party (CCP) are using fake social media accounts that post AI-generated content to sow divisions between Americans and possibly influence electoral outcomes. By August, Meta had removed more than 8,600 accounts, pages, and groups across its platforms as part of the largest influence operation disrupted by the company; it attributed the network to Chinese law enforcement.

The operation appears to have been part of Spamouflage, a CCP FIMI campaign documented by the DFRLab and other research organizations over the last several years. As we reported in July 2024, Spamouflage amplified content related to pro-Palestinian protests at US universities as well as content mocking US President Joe Biden and questioning his suitability to run for office. Similar to Russian propaganda following the attempted assassination of former president Trump in July, the DFRLab identified anti-US narratives and conspiracy theories promoted by the CCP after the incident.

Israel targets its US ally

Earlier this year, the DFRLab and other research organizations documented an inauthentic information operation on X that impersonated US individuals and targeted lawmakers to influence opinions about the war in Gaza. The operation utilized at least 130 fake accounts to amplify allegations against the United Nations Relief and Works Agency (UNRWA). Many of the accounts in this network presented as US citizens and persons of color.

Following this initial research, Meta conducted a review of its own platforms and discovered the presence of the same campaign. Meta attributed it to STOIC, an Israeli political marketing firm, deplatformed the accounts, and sent a cease-and-desist notice to the company. OpenAI came to a similar conclusion. Eventually, the New York Times confirmed that Israel’s Ministry of Diaspora Affairs had hired STOIC to conduct the operation, making it the first known instance of Israel engaging in FIMI against the United States.

While the operation itself did not focus on the election, it did clearly and directly seek to alter US policy on a sensitive topic during an election year. This raises difficult questions about how the United States would respond if Israel or another ally were to engage further in such operations, though there is no public evidence yet to suggest this is taking place.  

AI enters the fray

AI is playing a growing role in influence operations, allowing bad actors to impersonate individuals and speed up content creation and dissemination. AI-generated audio and imagery is already reaching US audiences and going viral on social media, albeit with little success in convincing audiences the content is real, thus limiting its potential impact. In the same announcement in which OpenAI disclosed the Israel-based operation targeting US audiences, it also said it had disrupted attempts by Russian, Chinese, and Iranian actors to abuse its services.

Examples of fake content targeting US audiences this year include AI-generated footage of former President Donald Trump on Jeffrey Epstein’s plane; an AI-generated robocall using President Biden’s voice urging New Hampshire residents to save their votes for the general election; a parody video using manipulated audio of Vice President Harris appearing to call President Biden “senile”; and AI-generated images of Trump posing with Taylor Swift.

A recently documented manipulation campaign used AI to appropriate the likenesses of attractive European female influencers in support of specific campaigns and candidates. Recognizing the danger AI poses during contentious times, an increasing number of US states have approved legislation regulating AI in politics, though many of these laws have yet to be tested in court.

What’s ahead

Predictions of foreign malign actors engaging in FIMI to target the 2024 US general election are proving to be well-founded. The instances outlined above represent a snapshot of known FIMI efforts regarding the election; there are almost certainly additional operations that have yet to be identified and exposed.

Malign actors have proven to be agile, adaptable, and even innovative in their efforts to manipulate the information environment and exploit US divisions. As we approach the general election, it is imperative to remain vigilant against the escalating threat of FIMI influencing the vote or hampering election integrity. The next step is further understanding the scope and intention of those threats and their perpetrators.


Cite this case study:

Dina Sadek, “FIMI 101: Foreign information manipulation and interference targeting the 2024 US general election,” Digital Forensic Research Lab (DFRLab), September 26, 2024, https://dfrlab.org/2024/09/26/fimi-101/.