Op-Ed: The next big wave of disinformation will be heard, not seen


Audio chat may be all the rage, but if we are not careful, it could become a primary vector for the spread of disinformation

(Source: Reuters / Thiago Prudêncio / SOPA Images / Sipa USA)

As part of our effort to broaden expertise and understanding of information ecosystems around the world, the DFRLab is publishing this external contribution. The views and assessments in this open-source analysis do not necessarily represent those of the DFRLab.

The next big threat in the information space will not be seen, it will be heard―and it will be nearly impossible to trace, attribute, and counter. As far back as 2018, we saw disinformation spreading through voice and audio messages on encrypted messaging platforms. Now, with the rise of audio-forward applications such as Clubhouse and Twitch, disinformation and misinformation are likely to travel via audio at much higher rates than we have seen before. The challenges of verifying audio that surfaced just a few years ago will be tenfold greater in the years to come. Fact-checkers, technology companies, international organizations, the media, and law enforcement have made a yeoman's effort to counter disinformation to date. These coalitions need to start preparing and coordinating now to find viable and efficient ways of countering false messages disseminated specifically through audio, while preserving freedom of expression.

The first week of 2021 saw a mob of avid Donald Trump supporters storming the US Capitol, breaking windows, kicking in doors, and chanting threats, egged on by conspiracy theories and false narratives of election fraud spurred in part by the then-president.

One week later, right-wing supporters of the outgoing president flocked away from public-facing social media platforms that were clamping down on conspiracy theory accounts and toward more private chats where they could find like-minded people. Around the same time, the increasingly popular app Clubhouse began making headlines for a different reason―its audio-only nature was intimate and could make users feel right at home.

Launched in April 2020, the invite-only app is poised to open to the wider world, per a blog post its founders penned at the end of January 2021. While its chat rooms undoubtedly provide an innovative way to engage with those who share your interests, Clubhouse, like the more established technology companies before it, is bound to experience its fair share of disinformation-related challenges.

We saw the challenges audio messages can pose during the Brazilian elections in 2018, and in Argentina, where audio has featured prominently on WhatsApp since 2015. For those countering disinformation, audio attribution and verification have always been especially complex―everyday citizens simply do not have the tools to reliably identify the voice on the other end. For messages forwarded many times on encrypted platforms, verifying that the person in the audio is who they claim to be becomes an even greater challenge.

Luckily, though audio messages are available on many social media and messaging platforms, for most users they are not yet the dominant format through which content circulates. That may not last forever.

With guidelines for content moderation still a very difficult area for platforms and regulators alike, and laws lagging behind the rapid evolution of technology, are we as a society ready to face the challenge of countering disinformation and hate speech spread through live audio or voice messaging?

If we’ve learned anything from the rapid rise of coordinated inauthentic operations and disinformation in the past few years, it is that we can often get caught just a tad behind the curve, chasing the ball as it rolls down the hill, just out of reach.

If counter-disinformation experts and multi-stakeholder groups don’t begin taking a harder, more purposeful look at strengthening the tools and skills necessary to counter disinformation spread through audio (while also accounting for data privacy and freedom of expression), we will inevitably be caught behind the curve once again, with far more problematic consequences in store. The time is now to start having these difficult conversations.


Roberta Braga is former Deputy Director at the Atlantic Council’s Latin America Center, where she helped spearhead the think tank’s work on countering disinformation in the region in partnership with the Digital Forensic Research Lab. She currently serves as Communications Manager for North America at Baker McKenzie.

Follow along for more in-depth analysis from our #DigitalSherlocks.