Op-Ed | Hitting COVID-19 disinfo websites where it hurts: their wallets

The EU’s counter-disinformation efforts should target advertising revenue on websites spreading COVID-19 disinfo

A man holds a European Union’s flag during an anti-government protest in Sofia, Bulgaria, July 14, 2020. (Source: REUTERS/Stoyan Nenov)

The European Union’s most recent response to disinformation contains one measure that deserves particular attention: a proposal to limit advertising placements on social media for third-party websites that profit off of COVID-19 disinformation.

This proposal is significant because it makes the act of disseminating disinformation more costly for those doing it. Imposing significant costs on bad actors in the form of lost revenue is one potential way to deter future aggression.

As described in the 17-page EU disinformation document:

The signatories of the Code [i.e. the Code of Practice signed by Google, Facebook, Twitter, Mozilla, Microsoft, and TikTok, among others] should provide data, broken down by Member State where possible, on policies undertaken to limit advertising placements related to disinformation on COVID-19 on their own services. Platforms and advertising network operators, should also provide such data on policies to limit advertising placements on third-party websites using disinformation around COVID-19 to attract advertising revenues.

Similar measures have appeared in previous EU documents. In a 2018 document titled “Tackling online disinformation,” the European Commission stated that online platforms and the advertising industry should “significantly improve the scrutiny of advertisement placements, notably in order to reduce revenues for purveyors of disinformation, and restrict targeting options for political advertising.” This concept was reiterated in the aforementioned Code of Practice.

But the measures undertaken by the private sector so far cannot yet be considered a success. As the EU stated in 2019, “The aggregated reporting from associations in the advertising sector does not provide clarity on the extent to which brand safety practices are evolving to encompass the control of placements of advertising next to disinformation content.”

Independent initiatives have also highlighted the persistent problem of advertising being used to monetize disinformation-spreading websites. According to an estimate by the Global Disinformation Index, advertisers will unwittingly provide $25 million “to nearly 500 English-language coronavirus disinformation sites in 2020.” According to a previous estimate by GDI, disinformation news sites as a whole take in more than $76 million each year in revenues generated by allowing online advertising on their sites.

A few civil society initiatives try to hit disinformers where it hurts by cutting off their ability to monetize disinformation. One of the first appeared in 2016 in Slovakia; earlier this year, a similar project was announced in the Czech Republic. Meanwhile, a coalition of digital justice organizations is currently spearheading the Stop Hate For Profit campaign, which is founded on a similar principle: social media companies should not accept ad revenue from organizations that promote online hate.

But no matter how well-organized these civil society initiatives are, pressure from governments and intergovernmental organizations is likely to achieve quicker — and more decisive — results. Should the European Union — one of the largest economies in the world — decide to press for greater transparency in online advertising and limit the avenues for profit for disinformation websites, it could significantly hamper the ability of these websites to operate. If applied effectively, these steps could signal that the act of spreading disinformation does not come without a cost.

The slow progress so far demonstrates that self-regulation on the part of the platforms is not enough. If the desired results have not materialized in the more than two years since “Tackling online disinformation” was introduced in 2018, perhaps EU authorities need to rethink their initial reluctance to take regulatory action.


Jakub Kalenský is Senior Fellow with the Digital Forensic Research Lab (@DFRLab).