Regulating the use of AI for Brazilian elections: what’s at stake

Brazil’s implementation of rules regarding AI in upcoming local elections will serve as a major test case 

Banner: Supporters of Brazil’s former President Jair Bolsonaro attend a rally in Rio de Janeiro, Brazil April 21, 2024. (Source: Reuters/Pilar Olivares)

On October 6, 2024, more than 150 million Brazilians will go to the polls to elect mayors, councilors, and other local officials across 5,568 municipalities. In an attempt to mitigate the potentially harmful use of generative artificial intelligence (GAI) tools in such a large and decentralized election, Brazil's Superior Electoral Court, the body responsible for organizing and monitoring elections in Brazil, approved a resolution on February 27, 2024, regulating the use of artificial intelligence during the campaign.

Brazil is among more than eighty countries holding elections in 2024, each of which faces unprecedented questions regarding the electoral weaponization of GAI. Video deepfakes, falsified audio clips, and generative imagery all have enormous potential to undermine elections, whether through individual cases of falsified information circulating during a campaign or through the technology's mere existence sowing public doubt, allowing candidates to dismiss authentic footage as fake.

The new regulations are among the first efforts worldwide to regulate GAI in an electoral context. They are also the latest in a decade-long trend of Brazil passing new measures on domestic internet governance, including regulating social media platforms to counter disinformation. In 2014, the country approved the Civil Rights Framework for the Internet, a law that establishes principles, guarantees, rights, and responsibilities regarding internet use in the country. Moreover, the Brazilian Congress is debating Bill 2630, also known as PL das Fake News or the "fake news bill," which would establish content moderation guidelines for social networks, institute transparency reports, and enact measures against inauthentic accounts and profiles on social platforms. The proposed legislation has generated a strong reaction from digital platforms, which have worked to defeat the bill.

Brazil's new electoral regulations require transparency regarding the dissemination of materials manipulated with artificial intelligence tools. According to these rules, campaigns must explicitly inform audiences when using AI, including naming the tool employed, except when retouching content to improve its quality. Failure to comply will result in the removal of the content or access restrictions on the channels through which it was transmitted.

The electoral court, known by the Brazilian acronym TSE, also established stricter punishments when AI is used to produce false content and deepfakes with the intention of harming or favoring a candidacy. In these cases, TSE reserves the right to revoke a candidate’s electoral registration, officially removing them from the ballot. 

"The use, in electoral propaganda, whatever its form or modality, of manufactured or manipulated content to disseminate notoriously untrue or decontextualized facts with the potential to cause damage to the balance of the election or the integrity of the electoral process is prohibited," the rules state. "The use, to harm or favor a candidacy, of synthetic content in audio format, video format, or a combination of both that has been digitally generated or manipulated, even with authorization, to create, replace, or alter the image or voice of a living, deceased, or fictitious person (deepfake) is prohibited."

Notably, the electoral rules focus on generative media created by the campaigns themselves. They do not stipulate any policies regarding GAI content produced by people not linked to parties, candidates, or campaigns, nor do they establish whether such cases will be monitored or adjudicated. 

Prior to the passage of the regulations in February 2024, there were at least three cases of audio deepfakes falsifying comments of potential mayoral candidates; these clips gave the impression that the candidates had criticized or insulted public servants and political opponents. The audio circulated in WhatsApp groups, which are enormously popular in Brazil as a source of news.

The campaign for the Brazilian municipal elections officially begins on August 16, 2024, and ends on October 1, five days before the first round of voting. During the campaign, advertising and promotional content are allowed on television, radio, and the internet, including social media platforms. Any advertising or explicit request to vote for a candidate outside this period is subject to a fine.

Prior to the official campaign period, political parties and pre-candidates were allowed to begin raising campaign funds on May 15. This period, popularly known as the pre-campaign, is subject to the same AI rules established for the official campaign period.

Although Brazil pioneered the use of electronic voting machines for increased security and faster vote counting, intense political polarization has, in recent years, posed challenges to the electoral system. Jair Bolsonaro, the former president, has been a tireless purveyor of conspiracy theories regarding the vulnerability of electronic voting machines, which have been used in Brazil since 2000 with no recorded irregularities. The 2022 electoral campaign, which Bolsonaro lost to the current president, Lula da Silva, was marked by accusations that the electoral process was dishonest and by threats against the rule of law.

Amplified by Bolsonaro's allies with large social media audiences, far-right influencers spread fraud accusations and encouraged a military coup, culminating in the anti-democratic demonstrations that led to the attack on the Brazilian Congress on January 8, 2023.

These recent challenges to Brazilian democracy form the backdrop for the 2024 elections. Polarization in the country shows no signs of decreasing, and there is no reason to believe that disinformation, widely observed in recent elections, will be less pervasive. Indeed, disinformation researchers strongly expect that it could have even more impact in the form of AI-generated deepfakes.

The regulations in detail

Regulations for the use of AI tools in Brazilian elections are codified in Resolution 23.732. The document lists twelve measures approved on February 27 by TSE judges to update Resolution 23.610, enacted in 2019 to regulate electoral propaganda in Brazil, including TV, radio, and print ads.

The guidelines for the use of AI are described in Article 9º-B of 23.732: 

“The use in electoral propaganda, in any modality, of synthetic multimedia content generated using artificial intelligence to create, replace, omit, merge or change the speed or superimpose images or sounds imposes on the person responsible for the propaganda the duty to inform, in an explicit, prominent and accessible way, that the content was manufactured or manipulated and the technology used.”

The resolution also details how this information should appear in campaign-related communications, including “at the beginning of the pieces or communication made by audio” or “on each page or side of printed material.”

Campaigns may employ AI tools on election-related content in three situations:

  1. For adjustments intended to improve image or sound quality;
  2. For the production of graphic elements of visual identity, vignettes, and logos; or
  3. In marketing resources commonly used in campaigns, such as photo montages in which candidates and supporters appear together in a single photographic image used to produce printed and digital advertising material.

The new resolution also restricts the use of “robots” (i.e., chatbots) that simulate dialogue with the candidate or any other person, and addresses several other major topics, including fake content, political ads, and collaboration with the Justice Ministry.

Fake content

The resolution holds big tech companies and social media platforms responsible for immediately removing content that contains disinformation, hate speech, neo-Nazi and fascist ideology, as well as anti-democratic, racist, and homophobic narratives. 

Called "solidarity liability," the measure requires big tech companies to act against "disinformative content" without the need for user reports or a court order to remove the posts. This is one of the most significant points of the TSE resolution, as it demands a greater effort from the companies to moderate content, and specifically disinformation, on their platforms.

In response, platforms like Google and Meta expressed discomfort with the measure and asked the TSE to change the text so that they would not be held responsible for content produced by users. The request was denied, and the resolution was approved as written.

Political ads

Platforms that allow the promotion of electoral advertisements must adopt specific measures to prevent the dissemination of untrue content. Among the measures, the resolution suggests:

  • The application of terms of use and content policies compatible with the electoral context;
  • The creation of reporting channels accessible to users and public and private institutions;
  • The adoption of preventive and corrective actions on the platform, with transparent reporting of the effects and results of these actions; and
  • The preparation of a specific report for the election year on the impact of the platform on the integrity of the elections.

Only positive political advertisements are allowed to be broadcast; that is, ads that do not speak negatively about opponents or spread defamatory, untrue, or out-of-context information.

Furthermore, platforms that allow the paid promotion of political content will have to provide a repository of all published advertisements. The resolution states that the repository must be easily accessible and offer advanced search of ad data. The regulations also state that the repository must (see the illustrative sketch following this list):

  1. include an ability to search for ads based on keywords, terms of interest, or advertiser names;
  2. show the spending on boosts, the period in which the ad was aired, the number of people reached, and the segmentation criteria defined by the advertiser when the ad was broadcast; and
  3. allow systematic collection, through a dedicated interface (application programming interface – API), of advertising data, including content, expenditure, reach, audience reached, and those responsible for payment.
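
To make these repository requirements concrete, the sketch below models, in TypeScript, the kind of ad record and search interface a platform might expose. This is a hypothetical illustration only: the field names, endpoint URL, and function signatures are assumptions and are not drawn from the TSE resolution or any platform's actual API.

```typescript
// Hypothetical data model for a political-ad repository entry, covering the
// fields the resolution requires: content, spending, airing period, reach,
// targeting criteria, and who paid for the promotion.
interface PoliticalAdRecord {
  id: string;
  advertiserName: string;      // party, candidate, or payer responsible for the boost
  content: string;             // ad text or a link to the creative
  spendBRL: number;            // amount spent on promotion, in reais
  startDate: string;           // ISO date the ad began airing
  endDate: string;             // ISO date the ad stopped airing
  peopleReached: number;       // audience reached by the ad
  targetingCriteria: string[]; // segmentation defined by the advertiser
}

// Illustrative search parameters matching the keyword, term-of-interest,
// and advertiser-name search the resolution calls for.
interface AdSearchQuery {
  keywords?: string[];
  advertiserName?: string;
  airedBetween?: { from: string; to: string };
}

// A platform-agnostic sketch of "systematic collection through a dedicated
// interface (API)". The endpoint is a placeholder; a real platform would
// document its own URL, authentication, and pagination.
async function fetchAds(query: AdSearchQuery): Promise<PoliticalAdRecord[]> {
  const params = new URLSearchParams({ q: JSON.stringify(query) });
  const response = await fetch("https://example.com/api/political-ads?" + params);
  return response.json();
}
```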

Collaboration with the Justice Ministry

Lastly, the resolution requires platforms to collaborate with the electoral court and comply with court orders to remove content and to suspend or ban profiles. Blocked materials and accounts must remain unavailable on the platform even after the election period unless another court decision allows their reactivation.

Potential impacts

The TSE's regulation of AI tools in the electoral campaign is unprecedented in Brazil. As this will be the first election conducted under AI-specific rules, it remains to be seen how the Brazilian electoral process and the regulation's measures will play out. The lessons learned from this election could guide future policymaking.

For example, the TSE text leaves gaps on a series of topics. There are no specific definitions of what should be considered “disinformative content” or a “decontextualized fact.” While there is no global consensus on how to define these terms, the lack of definitions could cause confusion or failures in content moderation and law enforcement.

The resolution's Article 9º refers to "preventive corrective actions, including the improvement of its content recommendation systems," without establishing parameters to measure platform compliance with the resolution. Considering that the resolution itself requires that platforms proactively remove cases of deepfakes, disinformation, or decontextualized facts, the lack of compliance parameters could lead to platforms adopting different criteria regarding when content moderation is warranted.

The resolution also does not provide details on how compliance with the measures will be monitored. The text suggests that there is a shared responsibility across law enforcement, civil society, political parties, candidates, and technology companies acting together, but it does not specify any procedures or tools to carry this out. 

There is also concern about the production of false materials by voters and supporters, a frequent occurrence in recent Brazilian elections that the proliferation of GAI could make even more ubiquitous. Digital platforms were extensively used to disseminate false content boosting candidacies without official links to the campaigns or parties. The resolution prohibits individuals from promoting electoral content, but in past elections, platforms did not adopt all the measures required to protect the electoral process, allowing countless advertisements of this type. The new resolution does not specify how this practice will be detected nor what the punishments, if any, would be for those involved.

The regulations have already prompted changes by some platforms. In May 2024, Google announced that it would not allow election ads on its platforms, including its search engine and YouTube. Google cited technical difficulties in complying with TSE requirements, such as maintaining a repository for real-time tracking of ads and an advanced search tool; the company already has a tool for this purpose, but its functionality is limited. According to Google, the definition of "political content" presented by the TSE was also too broad, making it impossible to monitor developments in this category.

X, formerly Twitter, also stopped allowing users in the country to promote political ads on its platform. The change was noticed by Brazilian press outlets in the first week of May, when the TSE’s deadline for platforms to adapt to the resolution ended. At the time of publishing, X had not officially commented on the topic.

This piece was published as part of a collaboration between the DFRLab and NetLab UFRJ, which published a version of this article in Portuguese. Both organizations are monitoring the use of AI tools during the 2024 Brazilian municipal elections to better understand their impact on democratic processes. 


Cite this case study:

Beatriz Farrugia, “Regulating the use of AI for Brazilian elections: what’s at stake,” Digital Forensic Research Lab (DFRLab), May 29, 2024, https://dfrlab.org/2024/05/29/regulating-the-use-of-ai-for-brazilian-elections-whats-at-stake.