
The Deepfake iPhone Apps Are Here

Jacob Schulz
Monday, April 27, 2020, 1:00 AM

The president retweeted a deepfake of Joe Biden. The fake was made on an iPhone app that I’d already been researching.

President Donald J. Trump addresses a crowd on the South Lawn on Arbor Day, Apr. 22, 2020 (Official White House Photo by Andrea Hanks/https://flic.kr/p/2iTcLhs/Public Domain)


Like many Americans, I woke up this morning to see that the president had retweeted a misleading gif of his presumptive Democratic challenger, Joe Biden:

The president’s use of this gif is already coming in for criticism. Writing in the Atlantic, David Frum noted the significance of the president’s retweet: “Instead of sharing deceptively edited video—as Trump and his allies have often done before—yesterday Trump for the first time shared a video that had been outrightly fabricated.”

But Trump’s action in sharing this particular video has a special resonance for me, because it was made with one of the iPhone apps I’ve been playing around with for the past month. I’ve been attempting to learn more about the democratization of what are called “deepfakes”—convincing videos that superimpose someone’s face on an already-existing video or photo or otherwise manipulate media to make it appear as if someone did or said something that never actually happened. The watermark on the bottom right-hand corner of the Biden gif, “muglife.com,” indicates that the gif came from Mug Life, an app that allows users to manipulate a still image of a person’s face.

It turns out I know a fair bit about Mug Life. While many of you have spent your initial period in quarantine trying to become bakers, yogis or knitters, I spent mine trying to become late-night comedian Seth Meyers.

It was easy, really. I didn’t brainstorm standup routines or practice public speaking in an effort to emulate the late-night TV host. I just downloaded an app like Mug Life, took a selfie, selected a gif of Meyers and voila:

Deepfake creation used to require a serious computer and a good baseline of technological skill. But that barrier to entry has begun to erode. iPhone deepfake apps have made creating deceptive media easier than ever. An iPhone-created deepfake tweeted by an anonymous user with only 60,000 followers received a presidential retweet within an hour of posting. The era of the deepfake apps has arrived.

Way back in March 2018, Kevin Roose of the New York Times undertook a similar project to my Seth Meyers gambit. Roose noticed a flood of deepfakes and wanted to try the technology on himself. So he recruited a technologist from the Times and a Redditor who went only by the alias “Derpfakes.” Roose and his team used a free downloadable computer program called FakeApp, which functions thanks to machine learning software.

Roose explained his process: “The first step is to find, or rent, a moderately powerful computer,” in order to reduce the amount of time it takes the software to churn out a fake. Roose and his Times colleague’s rented server “provided enough processing power to cut the time frame down to hours, rather than the days or weeks it might take on [Roose’s] laptop.” To help the software—which “teaches itself to perform image-recognition tests through trial and error”—learn an accurate model of Roose’s face, they took “several hundred” photos of Roose’s face in all sorts of expressions. Roose and his Times colleague also fed the software images of the celebrities on whose faces Roose sought to superimpose his own. One deepfake took eight hours of training to produce; another had to be run overnight. Of the former, Roose conceded, “Only the legally blind would mistake the person in the video for me.”

 

From left: Roose being photographed, Roose as actor Ryan Gosling, Roose as actor Chris Pratt and Roose as comedian Jimmy Kimmel.
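For the technically curious, the core idea behind that training process is compact: The desktop tools of this era typically relied on an autoencoder in which a single shared encoder learns a compressed representation of a face, and a separate decoder for each person learns to reconstruct that person from the representation; swapping faces is just a matter of running one person’s encoding through the other person’s decoder. Below is a minimal, illustrative sketch of that idea in PyTorch. It is not FakeApp’s actual code, and the image sizes, network shapes and training details are assumptions chosen for readability.

```python
# Conceptual sketch (not FakeApp's actual code) of the autoencoder-based
# face-swap approach popularized by desktop tools of this era: one shared
# encoder learns a compressed representation of "a face," and two decoders
# each learn to reconstruct one specific person. A swap feeds person A's
# encoding through person B's decoder. Sizes and settings are illustrative.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                         # latent "face" code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """One decoder per identity: it learns to paint one person's face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # e.g., one per face being swapped
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def training_step(faces_a, faces_b):
    """faces_a / faces_b: batches of 64x64 crops of each person's face."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Once trained, the swap itself is a single line:
# fake = decoder_b(encoder(frame_of_person_a))
```

Roose’s hours of “training” correspond to running many thousands of steps like this one; the rented server he mentions simply makes each step faster.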

Needless to say, Roose’s project was far more involved than mine. Roose expensed $85.96 in Google Cloud Platform credits; I didn’t spend a dime. I undertook the whole experiment from my nearly three-year-old iPhone SE that’s missing the top-right corner of its screen. I entered 416 fewer pictures than Roose did for actor Ryan Gosling and 1,860 fewer than he did for actor Chris Pratt. I had no help from Derpfakes.

Yet my project, however flawed the end result may be, yielded at least as convincing a product as Roose’s deepfakes. The entire thing took me about 10 minutes.

Experts did warn us about the inevitable democratization of deepfake creation. Danielle Citron and Bobby Chesney cautioned in 2019 that “[t]he capacity to generate persuasive deep fakes will not stay in the hands of either technologically sophisticated or responsible actors. For better or worse, deep-fake technology will diffuse and democratize rapidly.”

Citron and Chesney were prescient. I’m hardly more technologically skilled than Roose or his Times colleague—and certainly less so than Derpfakes. Rather, in the year since Citron and Chesney published their article (and two years after Roose published his), deepfake creation has become so easy that I basically matched the Times’s project on my iPhone.

So what’s the big deal? The wide diffusion of deepfake technology means democratized creativity and fun—but also the potential for abuse. In the words of Citron and Chesney, “deep fakes can inflict a remarkable array of harms,” impacting individual victims and national politics.

The app I used is called Familiar, but the iPhone App Store has tons of others like it. The apps generally fall into one of three categories: those, like Familiar and its competitors Doublicat and Faces, that let you paste a picture of yourself onto a gif of a celebrity; those, like Mug Life (the app used to create the Biden post) and Talkr, that animate a still face to say whatever you want it to say; and those, like Copy Face, that superimpose a face on an image of another body. It’s probably overbroad to label all the creations yielded by these apps as deepfakes. Copy Face, for example, produces a still image that could be easily emulated in Photoshop or even Microsoft Paint. Nonetheless, the apps put the ability to create reality-defying media at users’ fingertips.

Almost all of them create a final product locally on your phone in less than a couple of minutes. And all of the apps have free versions that give access to a handy basic set of features. In-app purchases allow zealous users to pick from a wider range of celebrity pictures or create a higher volume of deepfakes. Or, in the case of Mug Life, for $12 per year, you can get a still image to say anything you want.

Users can employ these apps for harmless fun, like my gif of myself as a late-night television host. But they also lower the technological barrier for those looking to harass or exploit. As Motherboard reported in 2018, deepfakes are a popular tool for creating nonconsensual pornography in which abusers superimpose the face of an unconsenting woman onto the body of an adult film performer. An astonishing 96 percent of deepfakes on the internet are nonconsensual pornography—almost always with a female victim. FakeApp, the desktop program that Roose used, garnered some notoriety once it emerged as a tool for creating such videos. Other deepfake computer programs have explicitly courted users interested in creating nonconsensual porn, but using those tools requires more time and technical sophistication than the iPhone apps do.

Fortunately, an abuser would struggle to deploy iPhone apps for this type of creation: The App Store removes apps that deal in pornography (although some sneak through loopholes in the rules). And apps like Familiar and Doublicat come with a preset list of generally benign gifs on which one can superimpose a face, limiting the potential for this type of victimization.

One exception is Copy Face. This app allows users to upload both the face input and the pictures in which users insert the faces, and I couldn’t find anything in the app that would prevent users from using it to create nonconsensual pornographic deepfake pictures.

Copy Face’s terms of service do forbid users from “transmit[ting] any material that is defamatory, offensive or otherwise objectionable in relation to your use of the App or any Service,” although I’m not certain that such a provision would cover deepfake porn—and it’s unclear to what extent Copy Face enforces these rules.

The terms also ban use of the app “in any unlawful manner, for any unlawful purpose,” but this doesn’t accomplish much as a deterrent when it comes to deepfakes: Only one U.S. state, Virginia, has a law criminalizing deepfake pornography. In February 2019, Virginia extended its existing revenge porn law to cover deepfakes, clarifying that, for the crime of “unlawful dissemination or sale of certain images of another person,” the phrase “another person” includes “a person whose image was used in creating, adapting, or modifying a videographic or still image with the intent to depict an actual person and who is recognizable as an actual person by the person’s face, likeness, or other distinguishing characteristic.” (Though it stopped short of criminalization, California passed a law in October 2019 that extended the existing private right of action for revenge porn to victims of deepfake porn.)

Beyond nonconsensual pornography, the apps do offer easy tools to create embarrassing deepfakes for the purpose of online humiliation or blackmail—two applications cited by Citron and Chesney as among the dark sides of deepfakes. A cyberbully could deploy face-to-gif apps like Doublicat, Familiar or Faces—the last of which offers a particularly high volume of embarrassing gifs on which to superimpose a face—to create a fake clip of someone appearing to be drunk or angry. A blackmailer could use Mug Life or Talkr to animate a picture so that their victim confesses to bad behavior or utters something offensive. Or, as Trump did, one can use gifs created by these apps simply to make someone look silly.

Would any of these fakes be all that convincing? Not yet. Deepfakes produced on the apps—like mine of Meyers—tend for now to look more like a fusion of two faces, not a clean insertion of one’s face on another’s body.

But it may not matter. Would you want even a semirealistic video of yourself ending up online in which you appear to say or do something that you never actually did? Bad actors could exploit even a flawed video for blackmail or humiliation. And the technological progress evinced even by the apps’ very existence does not bode well for the future. Tomorrow’s deepfake harassers may not have to contend with technical snags in making their iPhone fake videos. Think how much progress we’ve seen in the past two years. We will not have less in the next two.

As the Trump retweet throws into relief, these apps also pose potential problems in the political, national security and public health contexts.

We don’t yet really have to worry about the apps duping hordes of unwitting Twitter users into thinking the footage depicted in a Mug Life or Familiar gif actually occurred in real life; the movements in the Biden gif still look slightly too mechanical for real footage. And the tweet that the president retweeted even acknowledges that the gif is a fake. But the Biden fake is designed to inflict political harm nonetheless; that’s why Trump retweeted it, after all. It explicitly amplifies a trope—“Sloppy Joe”—favored by the president and other Biden detractors. In this sense, the deepfake provides a visual to fossilize the insult; it gives the political jab a tangible image. The Mug Life fake is for now like a caricature. But as the caricatures become more like photographs, the danger of confusion will grow.

The Biden gif and others like it put platforms like Twitter and Facebook in a difficult position.

Criminal law doesn’t offer much help in combating political deepfakes—only Texas and California have laws banning political deepfakes, and the rules in both states apply only during narrow preelection time windows. In this statutory vacuum, the platforms have taken commendable steps to address the deepfake problem. Facebook on Jan. 6 announced that it will begin removing certain forms of “misleading media” if it “has been edited or synthesized” in deceptive ways and “is the product of artificial intelligence or machine learning.” A month later, Twitter announced similar restrictions.

But here’s the problem: Facebook’s policy explicitly “does not extend to content that is parody or satire.” And Twitter’s deepfake rules require media to be “deceptively” altered or “shared in a deceptive manner” in order to reach certain thresholds of enforcement. (The Texas and California laws have similar loopholes.) These idiosyncrasies will make it more complicated for the platforms to take action against the Biden video. The text of the tweet, which explicitly acknowledges the gif’s fabricated quality, makes it more difficult for Twitter to determine that the user shared the video in a “deceptive” way. As Frum commented, “Because the account retweeted by Trump explicitly labels its video a ‘deep fake,’ it arguably does not violate Twitter’s anti-deception policy.”

And what might the reaction look like should Twitter decide to take action? When called out for promulgating deepfakes, people often plead that they were just joking or didn’t actually mean to deceive anybody. In a representative example, The Verge reported that a site devoted to deepfake porn “defines its content as ‘satirical art’ and claims, ‘We respect each and every celebrity featured. The OBVIOUS fake face swap porn is in no way meant to be demeaning.’” Rep. Paul Gosar offered up a similar defense when confronted about posting a deepfake picture of President Obama, writing that “no one said this wasn’t photoshopped.” For his part, the president just last week deployed the “satire” defense to explain away his propagation of other misinformation, and it would hardly surprise me if he or his defenders put forward the same excuse in response to criticism about the Biden gif.

The apps, particularly Mug Life, talk to their users about deepfaking in a similarly jocular register. For example, deepfake gif app Faces courts users by telling them in an ad (bookended with emojis), “Prank your friends—put their faces into video.” The apps, with fun names and goofy interfaces, generally encourage creators to view deepfakes as a big old joke.

In this sense, the apps have the potential to widen already existing fissures surrounding content moderation and deepfakes: Researchers and (now) platforms are wary of dis- and misinformation, while large communities of users view content moderation either as a repressive arm of the fun police or, worse, as a vehicle to suppress political speech.

At the end of the day, the dangers of the apps still remain largely nascent. Mug Life, Familiar and their analogs are unlikely to sway the 2020 election. For now, the biggest problem comes from cheap fakes—more rudimentarily edited deceptive videos—and clips that simply remove the underlying context of a politician’s comments. House Speaker Nancy Pelosi fell victim to the former in May 2019, when the president shared a manipulated video of the speaker that appeared to depict her stammering, and Biden to the latter in January 2020, when a deceptively edited video began to circulate that appeared to show him espousing white nationalist tropes (in actuality, the clip came from a longer speech in which Biden critiqued American culture’s problems with sexual violence).

But in months or years, all types of iPhone-created kompromat will inch closer to photorealism. This will create huge problems for platforms; Twitter and Facebook will have to contend with many more deepfakes than pop up on the platforms today. For now at least, both platforms rely on bespoke takedowns, labeling and content moderators who are already besieged. When everyone with a smartphone can produce a semi-convincing deepfake, the equation will only get worse: An enormous volume of deepfakes will float around the sites while moderators scramble to label or remove those that pass a certain threshold of traffic or abuse.

Soon, people will be able to use their iPhones not just to turn themselves into mildly convincing late-night comedians but to convincingly turn Joe Biden into whatever they want. When that happens, in the now-infamous words of Samantha Cole of Motherboard, “We are truly fucked.”


Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
