
Thirty-Six Hours of Cheapfakes

Quinta Jurecic, Jacob Schulz
Thursday, September 3, 2020, 2:28 PM

Over the course of two short days, figures affiliated with the GOP published three different deceptively edited videos on social media. Platforms can’t handle the challenge alone.

House Minority Whip Steve Scalise addresses a crowd at the 2013 Conservative Political Action Conference (CPAC) in National Harbor, Maryland. (Gage Skidmore, https://flic.kr/p/e4cqRP; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/)

In the last days of August, with the clock ticking down to Election Day, senior Republican officials pulled off a disinformation hat trick: Over the course of two short days, figures affiliated with the GOP published three different deceptively edited videos on social media.

First, on the morning of Aug. 30, House Minority Whip Rep. Steve Scalise tweeted out a video of a conversation between Democratic presidential nominee Joe Biden and disabled activist Ady Barkan, manipulating Barkan’s words in the service of the false claim that Biden had agreed to defund the police.

That same day, White House Deputy Chief of Staff for Communications Dan Scavino shared a tweet with a deceptively edited clip that appeared to show Biden falling asleep during a TV interview.

The final offering came the next day from the Trump campaign account @TrumpWarRoom, which posted a three-second video that purported to show Biden saying, “You won’t be safe in Joe Biden’s America.”

These were not deepfakes—hyperrealistic, synthetic audio or video that shows real people doing or saying things they never did or said. They were, rather, what are sometimes called “cheapfakes” or “shallowfakes”—manipulated media that doesn’t require any sophisticated technology to cobble together and is sometimes less convincing, and more easily detectable by experts, as a result. Cheapfakes aren’t really a new problem. In fact, the trio of fakes weren’t even the first cheapfakes the country has seen amplified by those close to President Trump: Back in the spring of 2019, Trump and his friend and lawyer Rudy Giuliani tweeted a video of Speaker of the House Nancy Pelosi that had been edited to make it appear that Pelosi was slurring her words.

One year after the Pelosi cheapfake, the information ecosystem seems better equipped to handle manipulated media: Reporters quickly identified the trio of GOP cheapfakes as misleading, and Twitter eventually tagged all three videos as having been manipulated. That’s genuine progress.

But progress doesn’t mean that all is now well in the information ecosystem. The recent 36-hour bonanza of cheapfakes shows that they are still capable of causing mayhem—especially in the midst of a highly partisan electoral contest, where one side has shown itself willing to use video and audio editing to bend the truth to its advantage.

Anxieties about election misinformation still tend to focus on doomsday narratives about Russian trolls. A day after this flurry of cheapfakes, the lead story on the New York Times website warned: “Russians Again Targeting Americans With Disinformation, Facebook and Twitter Say.” And concerns about manipulated media often center on the more involved technological manipulations required to create deepfakes.

But disinformation doesn’t require foreign influence or even fancy algorithms. People have a tendency to believe even hastily edited video and audio that confirms their worldview. None of the three videos tweeted by Scalise, Scavino and @TrumpWarRoom required any herculean technical mastery to create.

The video from @TrumpWarRoom was the simplest: The clip of Biden apparently declaring, “You won’t be safe in Joe Biden’s America,” was snipped from a longer sentence in which Biden said that “Trump and Pence are running on this, and I find it fascinating: ‘You won’t be safe in Joe Biden’s America.’”

This is not the first time that a short snippet of a Biden speech has been plucked out of context to make it appear that Biden had undermined himself. In January 2020, a 13-second clip that appeared to show Biden lauding “our European culture” began circulating on Twitter; fuller context from the speech showed that Biden had been making an argument that lax American attitudes toward domestic violence trace back to English common law. The decontextualized cut of that video, however, was posted by an anonymous account—unlike the “Joe Biden’s America” video, which was published by a verified Twitter account formally affiliated with the Trump campaign.

The video tweeted by Scavino likely also involved only a fairly straightforward edit. The footage of a news anchor chiding her guest to “wake up” came from an awkward 2011 television interview with singer Harry Belafonte. As Jane Lytvynenko of BuzzFeed News writes, “The manipulated version shared by Scavino substituted Biden for Belafonte and added a snoring soundtrack.” The creator of the video—who is apparently based in Denmark but describes himself as keenly interested in U.S. politics because “[i]t’s a complete circus!”—insisted to BuzzFeed News that he made the video as a parody. He also emphasized that the original copy of the video included a disclaimer that it had been edited, which the copy tweeted by Scavino lacked.

In the case of Scalise’s tweet, the video of Barkan pushing Biden on transferring some responsibilities from police to social services was edited to make it appear that Biden had agreed to take money from police departments. The original interview shows Biden agreeing with Barkan on redirecting some funds for social services, but noting that “that’s not the same as getting rid of or defunding all the police.” In the original, Barkan follows up: “But do we agree that we can redirect some of the funding?” And Biden concurs, “Yes, absolutely.” But in the version tweeted by Scalise, Barkan appears to explicitly ask Biden, “Do we agree that we can redirect some of the funding for police?” A colleague of Barkan’s told reporters, “Though Ady would have loved Joe Biden to announce in this interview that he is in favor of defunding the police, the Vice President never said it.”

What made this edit convincing is that Barkan suffers from ALS and speaks through assistive voice technology—meaning that he does not move his lips when talking. Viewers rely on lip movements and other nonverbal cues to track human speech, and absent those indicators, it’s much tougher to tell whether audio is real or fake (unless the person’s voice is both familiar and clearly mismatched). It’s not clear how the audio of Barkan saying “for police” was generated: An initial Washington Post story suggested that the editor had pulled a preexisting clip of Barkan saying “police” and slotted it into the interview audio, while Barkan himself wrote that “Scalise’s team just went the extra mile in seeming to find the exact voice generator I use when they whipped up the extra words meant to damn Biden.” But either way, the video probably didn’t require the sort of advanced technical maneuvering needed to create a convincing deepfake—as it might have if the editor had, for example, inserted a new word into one of Biden’s sentences and manipulated Biden’s mouth to match.

However the video was made, it exploited someone particularly vulnerable to this sort of splicing. As Barkan put it, “because of my Hawking-esque voice, it’s particularly easy for others to manipulate what I say.”
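To make concrete just how low that technical bar is, consider a minimal sketch of the kind of word-level audio splice described above, written in Python with the pydub library. Everything in it (the filenames, the clip of the inserted word, the insertion timestamp) is a hypothetical placeholder rather than a reconstruction of what was actually done; the point is only that such a splice takes a few lines of code, not advanced engineering.

```python
# A hypothetical sketch, not the actual method used: splicing a clip of a
# single word into interview audio with pydub (which relies on ffmpeg).
# All filenames and the timestamp below are placeholders for illustration.
from pydub import AudioSegment

# Load the full interview track and a separate clip of the word to insert.
interview = AudioSegment.from_file("interview.wav")
inserted_word = AudioSegment.from_file("extra_word.wav")  # hypothetical clip

insert_at = 12_500  # hypothetical insertion point, in milliseconds

# Cut the interview in two, drop the new word in between and re-export.
doctored = interview[:insert_at] + inserted_word + interview[insert_at:]
doctored.export("doctored_interview.wav", format="wav")
```

A flat, computer-generated voice makes such a cut especially seamless: there are no breaths, intonation shifts or room-tone changes at the splice point to give the edit away.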

The good news—such as it is—is that platforms, particularly Twitter, have become more adept at responding to deceptive media over the past year. In May 2019, the response to the Pelosi video was slapdash: YouTube took the video down; Facebook added a cautionary note that third-party fact-checkers found the content misleading and sought to reduce its reach through the platform’s recommendation engine; and Twitter did nothing at all. Since then, all three platforms have announced more formalized guidelines for how they will handle deep- and cheapfakes.

These policies are far from perfect and could certainly be more aggressive. But Twitter was relatively quick to tag the @TrumpWarRoom, Scavino and Scalise videos as “manipulated media.” The platform also removed the ability of users to retweet, like or respond to the @TrumpWarRoom video, though users could still quote-tweet it, BuzzFeed News writes. Reporting from The Verge suggests that Twitter may have similarly limited engagement with Scalise’s tweet. This is consistent with Twitter’s policy on synthetic and manipulated media, implemented in February 2020, which states that the platform may “reduce the visibility” of content that violates the policy “and/or prevent it from being recommended.” (Additionally, the Scavino video was later removed from Twitter because of an independent copyright complaint—demonstrating that sharper tools are available for copyright enforcement than for protecting information integrity and democracy.) That isn’t to say that Twitter’s content moderation efforts went off without a hitch: Users noted that the “manipulated media” label did not appear if the labeled posts were quote-tweeted, but Twitter says it is working to fix that issue.

The responses from Facebook and YouTube were more mixed. Facebook flagged the video posted by Scavino as false but does not appear to have taken any action on the Scalise video—though Scalise eventually deleted the Facebook post himself, along with his original tweet. Meanwhile, the original creator of the video of Biden sleeping wrote that YouTube took the edited clip down. We weren’t able to determine whether the @TrumpWarRoom video ever made it to Facebook—but the misleading three-second clip is still up on the campaign’s affiliated YouTube account.

So there is certainly room for Facebook and YouTube to have been more aggressive. To be sure, these decisions are not always easy, and platforms that choose to remove or flag content risk accusations of political bias. But as one of us wrote with Bobby Chesney and Danielle Citron in 2019, regarding the Pelosi cheapfake, “the stakes with elections are too high not to take action at least in the clear cases.” And in a deeply fractured information system—where social media users might not see or absorb posts from reporters pointing out falsehoods—a fact-check from a tech company may be the only feasible way to convey to a whole swath of users that a piece of content is indeed false.

Yet even Twitter’s more robust response seems inadequate to the scale of the problem. The platform responded quickly once journalists—and, in the case of the Scalise video, Barkan himself—pointed out the manipulated media. But by that point, the material had already made its rounds. The video posted by Scavino, for example, had been viewed more than 2 million times on Twitter before it was removed—and that’s without counting views in private chats, emails and texts.

Ultimately, what’s really missing is not only more responsive moderation by platforms but also functional political and social guardrails. It’s hard to imagine how even the most heavy-handed moderation could curb the problem of cheapfakes and perhaps deepfakes in the absence of a strong norm against using them. To state the obvious: Platforms doing something is better than platforms doing nothing at all. But given the limits of what any company’s moderation policies and practices can do, given the lack of meaningful coordination between large and small platforms, and given the difficulty of reining in fakery that has already been released, the real check on the use of manipulated media is whether the relevant actors feel any shame or expect any consequence for perpetuating falsehoods.

Within the world of Trump’s Republican Party and its associated right-wing media, the relevant actors evidently do not. Scavino, for example, previously retweeted a misleadingly edited video of Biden in March and received Twitter’s very first “manipulated media” label; he does not seem to have been chastened. After Scalise, the second-highest-ranking Republican in the House, mangled the words of a disabled man, Fox & Friends treated the congressman to a chance to respond to the “accusations” that he had manipulated the video. Scalise took the opportunity to acknowledge that the clip “shouldn’t have been edited,” but he did not apologize, and he insisted that the underlying point made by the edit—that Biden would take money from police—was correct. Likewise, when Twitter flagged the @TrumpWarRoom video as misleading, the account posted a new tweet declaring, “To all the triggered journalists who can’t take a joke about their candidate, it’s not our fault Joe Biden was dumb enough to say this on camera.”

Not only is there limited reputational cost within right-leaning politics and media to publishing misleading content that’s rapidly identified as misleading; for those affiliated with Trump, there’s actually a perverse reputational benefit: These episodes allow the person to absurdly cry censorship and trade on an adversarial relationship with Big Tech or, in the case of @TrumpWarRoom, the press.

As a member of the House of Representatives, Scalise is subject to the chamber’s Code of Official Conduct—and according to the House Ethics Committee, publishing “deep fakes or other audio-visual distortions intended to mislead the public” might violate that code. Condemnation of Scalise by the Ethics Committee, which is evenly split between Democrats and Republicans, might go some way toward establishing that the House takes this policy seriously. Yet in the days since Scalise’s offending tweet, the committee has said nothing. Without help from Congress or Republican Party leadership, Twitter on its own can do only so much.

With this in mind, Barkan’s run-in with Scalise and the two other Trump team fakes are unlikely to be the last cheapfakes to percolate around the Campaign 2020 Twitterverse. This time around, platforms had to deal with a trifecta of popular fakes within the span of 36 hours, and there is no reason to think that the pace will slow as the election draws nearer. As these altered videos continue to pop up, even relatively plugged-in readers could become overwhelmed by the sheer volume of manipulated material, struggling to maintain ever-growing mental lists of which popular political videos are artificial—to the benefit not only of the politicians who distribute fakes but also of those who stand to gain by casting doubt on documentation of their own outrageous behavior. Platforms can help remedy this with labels and takedowns. But there is a risk that some users will simply throw up their hands and give up on distinguishing falsehood from reality.


Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
