360/OS 2019: Deepfakes — It’s Not What It Looks Like!

Part of a series of posts highlighting key themes at the DFRLab’s 360/OS 2019 summit

THE FOCUS

BANNER: (Source: @sarahphotovideo/SarahHalls.net)

On May 2, 1945, Yevgeny Khaldei, a Red Army naval officer and photographer, took a photograph of a Soviet soldier triumphantly raising a Soviet flag over the German Reichstag following Nazi Germany’s historic defeat in World War II. The photo became an iconic symbol of Soviet propaganda.

Yevgeny Khaldei’s iconic 1945 photograph, “Raising a Flag Over the Reichstag.” (Source: mil.ru via Wikimedia Commons)

Khaldei’s photograph was staged and doctored. The flag had been sewn for him by his uncle, a tailor. When Khaldei arrived in Berlin with it in tow, he recruited three Soviet soldiers to help him stage the scene for maximum dramatic effect. Prior to publication, Khaldei heavily edited the photo, adding dark smoke to the sky and scratching a wristwatch off one of the soldiers’ arms on the negative in order to destroy evidence that the Red Army had been looting the city.

Manipulated photographs and videos are not novel: state and non-state actors alike have long used them to disseminate and preserve a sanctioned version of the historical record. The rise of deepfakes, manipulated video content produced using artificial intelligence, has nonetheless provoked a wave of speculative hysteria. At 360/OS, Sam Gregory, Program Director at WITNESS, a nongovernmental organization that uses technology and video evidence to document human rights abuses, presented deepfake technology as an opportunity rather than an existential threat to public discourse and the basis of truth.

Living in the Present with an Eye Toward the Future

Gregory began by tempering our understanding of the present-day capabilities of synthetic media. Using commercially available as well as open-source tools, we can now alter video, create realistic voice audio from existing samples, and generate realistic faces of people who do not exist. In recent years, the amount of training data required to produce manipulated media has dropped drastically, which has in turn lowered the barriers to entry for creating manipulated content.
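The face-generation example illustrates the underlying idea: a generative network maps random noise to an image, so each sample depicts a “person” who never existed. The sketch below is a toy, untrained generator written in PyTorch, included only to show the shape of the computation; production systems such as StyleGAN are far larger and are trained adversarially against a discriminator on large face datasets.

```python
# Toy sketch of GAN-style face generation: an untrained generator maps a
# random noise vector to an image tensor. Illustrative only; real systems are
# trained adversarially on large datasets and produce photorealistic output.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(128, 1024),
    nn.ReLU(),
    nn.Linear(1024, 3 * 64 * 64),
    nn.Tanh(),  # pixel values in [-1, 1]
)

noise = torch.randn(1, 128)                      # a random "identity" vector
fake_face = generator(noise).view(1, 3, 64, 64)  # one synthetic 64x64 RGB image
print(fake_face.shape)                           # torch.Size([1, 3, 64, 64])
```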

In contrast to these lower-tech examples of synthetic media, Gregory noted, the barriers to entry for creating convincing deepfakes remain relatively high. Nonetheless, we see the same general trends with deepfakes as with lower-tech tools: simulation quality is improving while barriers to entry are falling.

Sam Gregory, Program Director of WITNESS, discussed advances in synthetic media creation in recent years at 360/OS 2019. (Source: @sarahphotovideo/SarahHalls.net)

Gregory’s preferred approach to the problem of deepfakes is one of de-escalation and preparation. He argued that paranoia over deepfake technology often ends up exacerbating the very danger it attempts to inoculate against: if people think that no video footage can be trusted, they may well begin to doubt everything they see and hear, blurring the boundary between truth and fiction. Furthermore, he warned against the “risks of pursuing any algorithm’s Achilles’ heel,” emphasizing that the technology is advancing too rapidly to rely on knowledge of any particular deepfake algorithm’s vulnerabilities to credibly identify a fake.

Gregory’s sobering assessment of the present capabilities and limitations of deepfake technology suggests that we should focus on the lower-tech threats facing us today, such as subtly edited “shallowfakes,” without discounting the potential of more advanced techniques like deepfakes to disrupt our public discourse and political processes tomorrow.

How #DigitalSherlocks Can Help

Luckily, many of the techniques the open-source community has developed to combat lower-tech manipulation can also be applied to deepfakes. On the detection side, one of the primary challenges is accessibility: the tools the academic community has developed to detect AI-manipulated media, many of which rely on media forensics and deep learning, are not accessible to the public.
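To make the deep-learning framing concrete, the sketch below shows the skeleton of a frame-level “real vs. manipulated” classifier in PyTorch. It is a hypothetical, untrained toy rather than any specific academic detector; real systems use much larger models, curated training data, and artifact-specific features such as blending boundaries or facial inconsistencies.

```python
# Minimal sketch of a frame-level "real vs. fake" classifier.
# Hypothetical: assumes frames have already been extracted and normalized;
# actual detectors are far larger and trained on labeled forensic datasets.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # logits: [real, manipulated]

    def forward(self, x):                   # x: (batch, 3, H, W) video frames
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = FrameClassifier()
frames = torch.randn(4, 3, 224, 224)        # stand-in for four extracted frames
probs = torch.softmax(model(frames), dim=1) # per-frame probability of manipulation
print(probs)
```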

At the DFRLab, we have one advantage in this respect: because we rely on open-source research, we see manipulated photo and video content as the intended audience sees it, within the context of ongoing disinformation campaigns. Our open-source research on lower-tech media manipulation has relied on these contextual clues, in addition to the technical ones that betray manipulated content, to trace the digital provenance of recycled videos during the European Parliament elections, to discover YouTube videos automatically generated by bots from our own articles, and to identify a doctored video spreading online in advance of local elections in Moldova.
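One simple, widely available technique for tracing recycled footage is perceptual hashing, which scores how visually similar two frames are even after re-encoding or resizing. The sketch below uses the open-source Pillow and imagehash libraries; the file paths and distance threshold are hypothetical, and in practice such checks complement, rather than replace, the contextual verification described above.

```python
# Illustrative sketch: comparing perceptual hashes of two video frames to spot
# recycled footage. File names and threshold are hypothetical placeholders.
from PIL import Image
import imagehash

def frames_match(path_a: str, path_b: str, max_distance: int = 8) -> bool:
    """Return True if two frames are perceptually near-identical."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # Hamming distance between hashes

# Example usage with hypothetical frame grabs from two videos:
# print(frames_match("viral_clip_frame.png", "archive_frame.png"))
```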

An effective approach to combating the dangers posed by deepfakes will demand a coordinated and cross-disciplinary effort by our community of #DigitalSherlocks. By combining novel machine learning tools for detection, traditional online verification techniques, and an understanding of how disinformation spreads through a network, we can identify, expose, and explain these threats in real time.


Follow along for more in-depth analysis from our #DigitalSherlocks.