
Members of the House Are on Notice: No Tweeting Deepfakes

Evelyn Douek, Quinta Jurecic, Jacob Schulz
Thursday, January 30, 2020, 1:00 AM

The House Ethics Committee has announced that members who share deepfakes or “other audio-visual distortions intended to mislead the public” could face sanctions. It’s a small but noteworthy step.

A screenshot comparing a real video of President Barack Obama with a deepfaked version, Jul. 2019 (YouTube/UC Berkeley/https://youtu.be/51uHNgmnLWI/CC Reuse Allowed)


You might have missed it amid all the sound and fury of impeachment, but it’s been a busy week for disinformation. Twitter, the Wall Street Journal reports, will start removing posts that it determines are “misleading about an election.” Elizabeth Warren’s campaign rolled out a plan on “Fighting Digital Disinformation,” including a pledge not to “knowingly use or spread false or manipulated information.” And the House Ethics Committee announced that members of the House of Representatives who share deepfakes on social media might face sanctions from the House itself. The ethics announcement attracted the least attention of all, but it’s actually an important step in creating standards for how elected representatives should use social media.

This news might seem trivial compared to the drama of the impeachment trial going on just down the street. But as technology companies and governments alike grapple with how to address the spread of online falsehoods, the committee’s memo is noteworthy. Four years after the shock to the system of 2016, everyone agrees that disinformation and misinformation are problems that need to be dealt with, but the question remains who is best positioned to accept responsibility. The House of Representatives appears to be, in some small way, beginning to take on the task.

The committee’s “pink sheet”—an advisory memorandum on House rules—alerts members of the House to the dangers of posting deepfakes on social media, warning that “manipulation of images and videos that are intended to mislead the public can harm … discourse and reflect discreditably on the House.” For this reason, disseminating “deep fakes or other audio-visual distortions intended to mislead the public” could violate the House’s Code of Official Conduct, which governs the behavior of the chamber’s members and employees.

This might sound like the committee is addressing a problem that doesn’t exist yet. As far as we know, there have not been any cases in which a member of Congress—or any other prominent American political figure—has tweeted a genuine deepfake, meaning doctored audio or video generated through machine learning that can produce extremely lifelike and misleading results. After all, deepfakes—though concerning—just aren’t all that common in politics (yet).

Politicians have, however, published plenty of what the memorandum describes as “other audio-visual distortions”—that is, photos or video deceptively manipulated in a less sophisticated manner than a deepfake. Sometimes the manipulation is obvious: President Trump recently tweeted a picture altered to depict him putting a Medal of Honor around the neck of a dog that played a role in the raid on Islamic State leader Abu Bakr al-Baghdadi (the original photo showed the president presenting the medal to a Vietnam War medic). But U.S. political figures have published more deceptive images, too. Three days after the strike that killed Iranian general Qassem Soleimani, Rep. Paul Gosar tweeted a photo appearing to show President Obama shaking hands with Iranian President Hassan Rouhani; it took 40 minutes before he acknowledged that the picture was actually a fake, a doctored version of a shot from a 2011 meeting between Obama and then-Indian Prime Minister Manmohan Singh.

It’s easy to see why the committee might be particularly concerned about these manipulations now. Quoting a Congressional Research Service article on deepfakes, the memo warns that when members of the House or their staff tweet or share deepfakes or other manipulated audio or video, they could “erode public trust, negatively affect public discourse, or even sway an election.” Consider, for example, the misleadingly edited video tweeted out by President Trump in May 2019 that appeared to show Speaker of the House Nancy Pelosi slurring her words at a news conference—or the wide circulation of a deceptive clip that seemed to show Democratic presidential candidate Joe Biden making a racist remark. Now imagine such a clip, with sophisticated editing or not, circulated in the days or even hours before an election—and given the appearance of legitimacy by retweets from members of Congress and perhaps even the president himself.

The memo changes the calculation for members: Tweet misleading content, and you could be subject to House disciplinary rules. The Ethics Committee asks members and staff to think before they tweet: Before posting on social media, “[m]embers and staff are expected to take reasonable efforts to consider whether such representations are deep fakes or are intentionally distorted to mislead the public.” Members who disregard this guidance could be subject to public admonition by the committee. Technically, the full House could even censure members—or expel them for violation of House rules.

Censure is unlikely, though, and expulsion even less so. Only five members have ever met the latter fate, three of whom were expelled for fighting for the Confederacy. Tweeting a deepfake may be harmful to democracy, but it might not be quite the same as taking up arms against the republic.

More realistically, the committee may hope that the guideline itself will have a deterrent effect. The committee is often reluctant to discipline members, so the most realistic deterrent may come from members’ desire to avoid scrutiny. As former senior counsel to the House Michael L. Stern told us, “[T]he primary motivation for complying with this edict is not the formal sanction, but the embarrassment of getting called out for violations, as well as the desire to avoid an investigation” by either the committee or the independent Office of Congressional Ethics, which investigates and produces public reports about ethics violations.

In this way, the pink sheet is a norm-setting exercise for the House—and a bipartisan one, as the committee is the only one in the House evenly split between Democrats and Republicans. When members tweet doctored pictures or share altered videos, they now do so as knowing violators of the House’s position on this type of content.

So the guidelines set out in the memo might not have that much bite. Yet they still represent a shift from the current state of things—which is to say, an ad hoc system in which tech platforms are effectively building the content moderation plane in midair, setting out broad rules and then struggling to decide the extent to which politicians get special treatment. The result of these efforts has been that, by and large, citizens and users of the major platforms have come to think of the platforms themselves as the governing bodies to which appeals should be addressed when a prominent political figure publishes misleading or objectionable content. Every time President Trump posts a threat on Twitter, for example, hundreds of other users will tweet at Twitter CEO Jack Dorsey and other executives demanding to know why the platform won’t remove content that appears to violate its terms of service.

The situation has become so commonplace that it may no longer seem strange. But there is something bizarre about the notion that private platforms not only can regulate what and how politicians communicate with their constituents and the rest of the world, but that those platforms should. As any number of scholars and journalists have pointed out, the responsibility of deciding what speech is and is not within the bounds of acceptable discourse is, on its face, a task better suited to democratic governance than private enterprise. The Ethics Committee guidance is a small step toward governmental institutions accepting at least some of that burden.

That said, the House shouldering part of that weight means that the chamber will now have to grapple with some of the puzzling content moderation questions that only platforms have dealt with until now. Just as Twitter or Facebook users whose content is removed will appeal to the platforms to argue that their posts don’t violate the terms of service, surely the Ethics Committee and the Office of Congressional Ethics are going to be faced with the problem of assessing whether a member or staffer has actually violated the rule. Recall that the memo’s prohibition includes an intent requirement—so a member who shares a deepfake could argue that he or she too was duped, or explain that the shared deepfake was not a deception, but merely a joke. In fact, after he published the fake image of Obama shaking hands with Rouhani, that’s exactly what Gosar did: “[N]o one said this wasn’t photoshopped. No one said the president of Iran was dead. No one said Obama met with Rouhani in person,” Gosar wrote.

Then there are the new questions that emerge from the overlap between House rules and platform rules, which will create headaches for both. Platforms are likely to face two situations: those in which a member spreads misleading content that appears to violate the terms of the memo and the House does nothing, and those in which the House issues some sort of sanction.

In the first case, where a member spreads misleading content but the committee takes no action, there will no doubt still be calls—as there are every time something similar happens—for platforms to do something. These calls are natural: Platforms have the greatest capacity to stop the spread of content on their services by using their content moderation infrastructure, and so it feels like they should exercise it. But, as noted above, it is jarring to ask a private company, with no democratic mandate or legitimacy, to take action against the speech of a public representative when the House Committee on Ethics itself has not done so. This situation appears to be a lose-lose for platforms: Take no action and they will continue to face charges of negligence in their roles as gatekeepers; do something and they will appear to be usurping the committee’s role in policing the boundaries of acceptable conduct by elected representatives.

The second case isn’t easy either. Where a particular post has resulted in sanction from the committee, what should platforms do to best facilitate democratic accountability? Continuing to allow false content to spread might allow it to continue to do harm. But removing a post by a representative could prevent constituents from holding their member of Congress accountable, because fewer people will be aware of the post in the first place.

This situation calls for thinking outside the binary of taking down content or leaving it up, a flexibility platforms are starting to show in other cases—like flagging content that has been rated false by fact-checkers, or Twitter’s (as yet unused) policy of placing warning screens on content that breaks its rules but is in the public interest. When it comes to deepfakes and other “synthetic and manipulated media,” for example, Twitter has proposed placing a notice next to such posts or adding some other kind of warning, which could mesh well with whatever action the committee takes. Similarly, platforms could reduce the circulation of misleading content that is the subject of committee censure and add context noting the committee’s action.

And, as ever, there’s the little matter of what to do about the president. The Ethics Committee can pass whatever rules it likes for members, but that’s not going to stop Trump from tweeting and retweeting. What happens if, for example, members of the House retweet a Trump tweet containing a misleadingly edited video? The members might be censured while the president would remain untouched.

Then again, perhaps this mirrors the struggles of Twitter itself: Puzzled by how to handle presidential tweets, the platform has written into its terms of service an exception for “newsworthiness,” which essentially allows Trump to publish whatever he likes. (Other platforms have similar rules.) Of course, the House has no power over Trump’s Twitter feed, and while the platform could limit his tweets if it wanted, it has decided not to.

The Ethics Committee is now going to get a taste of the absurdities and contradictions of regulating social media. We’ll have to wait and see whether Twitter users start tweeting at the committee as well as @jack and asking that congressional tweets be taken down—or demanding to know why members have been unjustly punished.


Evelyn Douek is an Assistant Professor of Law at Stanford Law School and Senior Research Fellow at the Knight First Amendment Institute at Columbia University. She holds a doctorate from Harvard Law School on the topic of private and public regulation of online speech. Prior to attending HLS, Evelyn was an Associate (clerk) to the Honourable Chief Justice Susan Kiefel of the High Court of Australia. She received her LL.B. from UNSW Sydney, where she was Executive Editor of the UNSW Law Journal.
Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Jacob Schulz is a law student at the University of Chicago Law School. He was previously the Managing Editor of Lawfare and a legal intern with the National Security Division in the U.S. Department of Justice. All views are his own.
