
Focusing on Privacy Won’t Solve Facebook’s Problems

Kareeda Kabir, Ilari Papa
Sunday, October 27, 2019, 2:00 PM

Editor’s note: This article grew out of work done in our Georgetown University class on national security and social media. The class tackled an array of questions related to how hate groups exploit social media, exploring issues ranging from privacy and human rights concerns to technological and legal barriers. Working in teams, students conducted independent research that addressed a difficult issue in this problem space. —Dan Byman & Chris Meserole

 

Facebook CEO Mark Zuckerberg announces the plan to make Facebook more private at Facebook's Developer Conference on April 30, 2019. (Flickr/Anthony Quintano, CC BY 2.0)


At Facebook’s annual developer conference on April 30, founder and CEO Mark Zuckerberg laid out a major change for the social media platform: a shift to privacy. Facebook’s redesign, which was first announced in a blog post in March, aims to prioritize private, encrypted messaging (on Facebook Messenger as well as Facebook-acquired WhatsApp and Instagram’s Direct Messaging) and Facebook groups over the more public “town square”-style News Feed that has been the cornerstone of the platform since its founding. Private messaging, Zuckerberg argued, is the future of social media communications.

Standing on stage, Zuckerberg demonstrated the tangible impacts of the platform’s new focus: a “groups” tab on the Facebook app, recommendations throughout the app for groups users might be interested in joining and a separate section on Facebook Messenger for messages between close friends. According to Zuckerberg, this shift reflects what users prefer: “By far, the three fastest-growing areas of online communication are private messaging, groups and Stories,” he said.

Conveniently for the company, this also shifts users away from the more public-facing News Feed, which has been at the center of Facebook’s problems over the past several years—from misinformation and election meddling, to user data misuse, to incitement to genocide. But, unfortunately, the company cannot fix the challenges it faces by steering users to more private conversation arenas. Doing so might actually make matters worse—for users, governments and Facebook itself.

Misinformation has often been described as a News Feed problem, since that is where users post false news or share misleading articles. But false or sensational content also spreads through Messenger and within Facebook groups. In India in 2018, for example, misinformation spread over WhatsApp (an encrypted platform) led to mobs attacking and murdering innocent people.

Within the News Feed, Facebook has taken numerous measures to combat the spread of misinformation—deprioritizing such content on users’ timelines, partnering with fact-checking organizations and providing related content aimed at debunking circulated falsehoods. These strategies have had limited success, and it is clear that the problem won’t disappear anytime soon—the rise of deep fakes is only one example of what the future of disinformation might hold.

Shifting to privacy allows Facebook to wish away the problem—at least on a superficial level. By encrypting messages, Facebook essentially washes its hands of responsibility for the content exchanged on its services. After all, the platform cannot regulate, or even be held responsible for, what it cannot see. Pushing Facebook groups allows the company to shift responsibility for content moderation onto group administrators and moderators—rather than onto its own employees and contractors.

Facebook currently has a reputational problem with respect to privacy—as Zuckerberg noted at the conference, joking that Facebook “doesn’t have the strongest reputation for privacy right now.” But Facebook’s proposed shift toward privacy is also a shift away from responsibility. Its strategy of minimal effort and avoidance will fail to solve the real challenges the platform faces, and in the end it will not improve the company’s image either.

End-to-end encryption does have clear benefits, especially for users in countries where governments abuse human rights or try to snoop on their citizens. The demand Facebook faces for more user privacy is real.

But the end-to-end encryption of Messenger poses several security risks as well. Though much discussion of trade-offs from encryption has focused on government concerns over the decreased availability of information to law enforcement, there are other risks, too. Most individuals, of course, do not use Messenger to coordinate terrorist attacks or for other malicious purposes. But if the company relinquishes control of the data in motion (that is, user transmissions over the messaging service), it will lose access to exchanges of harmful content such as disinformation, hate propaganda and incitement of violence.

Facebook rightly argues that even if it does not encrypt Messenger, users with bad intentions will opt for alternatives such as Telegram. The company also states that it will continue cooperating with law enforcement by providing available data in response to lawful requests from government agencies. Facebook documents these requests in its Transparency Report, which lays out its policies and its intention to collaborate with law enforcement. But there’s no doubt that shifting a great deal of activity to private and encrypted fora will reduce Facebook’s ability to police the use of its platform.

And that’s a shame, because Facebook has taken significant action to reduce the spread of disinformation and hate speech on its platform. These actions have ranged from providing more context on articles users share on their news feeds and on Messenger, to banning white nationalist and white separatist content, to penalizing groups that consistently disseminate disinformation. In addition, users can report content spread on Messenger when it violates the Community Standards or qualifies as spam. The Forward Indicator and the Context Button on Messenger, respectively, identify forwarded messages and provide additional information about the articles users share. But notably, on Messenger, Facebook is relying on users to complain; it is not doing the work itself.

The measures Facebook has incorporated so far do not halt the spread of harmful content, but they do help contain it and inform users about the credibility of the sources behind the articles they consume. At the moment, end-to-end encryption applies to Messenger only if users opt in. Pushing for total encryption of the platform—with no option for users to choose nonencrypted messaging—threatens to undermine Facebook’s efforts, since its policies could not be applied to channels of communication it can no longer see. Giving up the company’s power to access the messages users exchange does grant users more privacy, but it also helps Facebook evade moral, if not legal, responsibility for those users’ safety. Bad actors seeking to incite violence might reach a broader audience by posting on the News Feed, but they might also decide to prioritize their privacy and communicate safely with fewer people by spreading content on Messenger. Amid increasing calls for Facebook to tackle the dissemination of hate and disinformation on the platform, encrypting Messenger would likely render its existing safety tools and policies ineffective.

Aside from the shift toward encrypted messaging, Facebook has also begun to promote Facebook groups heavily (even recently releasing a sentimental advertisement to encourage their use). But the problems of content moderation and misinformation persist in groups. Facebook groups are as much “town squares” as news feeds—if not more so. There appears to be no limit on the number of people in a Facebook group (popular groups such as Dogspotting have amassed more than a million members). Through groups, a virtually unlimited number of users can share posts that reach a virtually unlimited audience. The only check is approval from administrators, who are not Facebook employees and whose actions are not regulated by Facebook. And once group membership surpasses 10,000, it becomes difficult to keep track of posts without a dedicated team of administrators.

The groups function also has the same radicalizing potential as other forms of internet recommendation. The Facebook algorithm suggests similar groups for the user to join in the same way that it surfaces similar content for the user to read in the News Feed. Renée DiResta, a researcher at Data for Democracy, has expressed concern over this phenomenon: Join an anti-vaccine group, she writes, and “the platform might suggest that you join flat-earth groups or maybe ones devoted to Pizzagate—putting you on a conveyor belt of conspiracy thinking.” In fact, a number of terrorist attacks have been perpetrated by individuals who appear to have been radicalized in internet groups, including Cesar Sayoc, who mailed 16 pipe bombs across the United States to high-profile politicians such as Hillary Clinton and Joe Biden. Moreover, the use of encrypted messaging applications for recruitment and radicalization by extremist groups like the Islamic State demonstrates the risks of large, private but ultimately unregulated communication groups.
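To see how such a recommendation loop can arise, consider the toy sketch below. It is not Facebook’s actual system; the group names, member lists and overlap-based similarity measure are all hypothetical, but they illustrate how recommending groups similar to the one a user just joined naturally builds the conveyor belt DiResta describes.

```python
# Toy sketch of overlap-based group recommendation (not Facebook's actual
# system). All group names, members and the similarity measure are hypothetical.

group_members = {
    "anti-vaccine": {"u1", "u2", "u3", "u4"},
    "flat-earth":   {"u2", "u3", "u5"},
    "pizzagate":    {"u3", "u4", "u6"},
    "dog-spotting": {"u7", "u8", "u9"},
}

def jaccard(a, b):
    """Similarity = shared members / total distinct members."""
    return len(a & b) / len(a | b)

def recommend(joined_group, k=2):
    """Rank other groups by how much their membership overlaps the group just joined."""
    base = group_members[joined_group]
    scored = sorted(
        ((jaccard(base, members), name)
         for name, members in group_members.items()
         if name != joined_group),
        reverse=True,
    )
    return [name for score, name in scored[:k] if score > 0]

print(recommend("anti-vaccine"))  # ['pizzagate', 'flat-earth']: adjacent conspiracy groups
print(recommend("dog-spotting"))  # []: no overlapping members, so nothing similar to suggest
```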

Facebook must recognize and account for the propensity of Facebook groups to perpetuate misinformation and fake news. To this end, we suggest that Facebook address the problem with four steps: (a) instituting stepwise regulations and requirements for groups based on their size, (b) increasing scrutiny and time delays for transfers of group ownership between users, (c) systematically deprioritizing suspicious groups within the Facebook search function, and (d) implementing fact-checking tools within Facebook groups.

First, the larger the audience seeing posts within a group, the more closely the group’s feed approximates a mass broadcast. Accordingly, Facebook should exercise greater oversight over the content of large groups. A stepwise introduction of oversight, such as requiring a base level of internal content moderation by administrators and members for smaller groups (say, fewer than 10,000 members) and mandating annual audits by Facebook for larger ones, might help Facebook mitigate the problem of echo chambers and misinformation within groups.
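As a rough illustration of what such a size-based policy could look like, consider the sketch below. The 10,000-member threshold comes from our proposal; the specific requirements attached to each tier are illustrative assumptions, not existing Facebook policy.

```python
# A minimal sketch of size-based oversight tiers. The 10,000-member threshold is
# taken from the proposal above; the specific requirements are illustrative
# assumptions, not Facebook policy.

from dataclasses import dataclass

LARGE_GROUP_THRESHOLD = 10_000  # above this, a group's feed approaches a mass broadcast

@dataclass
class OversightRequirements:
    internal_moderation: bool    # baseline moderation by administrators and members
    annual_facebook_audit: bool  # audit conducted by Facebook itself
    dedicated_admin_team: bool   # multiple administrators expected at larger sizes

def oversight_for(member_count: int) -> OversightRequirements:
    """Map a group's size to the level of oversight it should receive."""
    if member_count < LARGE_GROUP_THRESHOLD:
        return OversightRequirements(internal_moderation=True,
                                     annual_facebook_audit=False,
                                     dedicated_admin_team=False)
    return OversightRequirements(internal_moderation=True,
                                 annual_facebook_audit=True,
                                 dedicated_admin_team=True)

print(oversight_for(250))        # small group: internal moderation only
print(oversight_for(1_200_000))  # Dogspotting-scale group: audits and a dedicated admin team
```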

Second, preventing instantaneous transfers of group ownership would disincentivize malicious Facebook users from hijacking groups with normally benign content for the purposes of misinformation. At the moment, groups can be hijacked and flooded with fake news and propaganda, often from non-American accounts. During crises, administrators also “keyword squat,” creating a group named after a current event (e.g., “RIP STEPHEN HAWKING 2018” following the physicist’s death) even though the content posted within it may not pertain to the group name at all. By introducing a time lag on ownership transfers, Facebook would discourage opportunistic users from capitalizing on public emergencies or crises and could mitigate the harm of misinformation broadcasts at the moments when groups are most vulnerable to exploitation.
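One way to picture the mechanics is the sketch below, which holds a requested ownership transfer in a hypothetical pending queue until a review window expires. The 72-hour window is an assumed figure chosen only for illustration; our proposal does not specify a duration.

```python
# A minimal sketch of a delayed group-ownership transfer, using a hypothetical
# pending-transfer queue. The 72-hour review window is an assumed value.

from datetime import datetime, timedelta

TRANSFER_DELAY = timedelta(hours=72)
pending_transfers = {}  # group_id -> (proposed_new_owner, requested_at)

def request_transfer(group_id, new_owner, now):
    """Record the request instead of handing over the group immediately."""
    pending_transfers[group_id] = (new_owner, now)

def finalize_transfer(group_id, now):
    """Complete the transfer only after the review window has elapsed."""
    new_owner, requested_at = pending_transfers[group_id]
    if now - requested_at < TRANSFER_DELAY:
        return None  # still in the window; members or Facebook can flag and cancel
    del pending_transfers[group_id]
    return new_owner

t0 = datetime(2019, 10, 1, 12, 0)
request_transfer("rip-stephen-hawking-2018", "opportunistic_account", now=t0)
print(finalize_transfer("rip-stephen-hawking-2018", now=t0 + timedelta(hours=1)))   # None: too soon
print(finalize_transfer("rip-stephen-hawking-2018", now=t0 + timedelta(hours=96)))  # opportunistic_account
```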

Third, Facebook should systematically deprioritize groups that disseminate misinformation and fake news within its search function and recommendations, thereby reducing the visibility of such groups. It is well documented that a number of Facebook communities engage in and spread misinformation and conspiracy theories, and Facebook has commendably taken action against some of them, such as groups and pages that spread misinformation about vaccines. But the company should apply these approaches systematically to other identified misinformation campaigns and mark such groups with warning labels.
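The sketch below illustrates the basic idea: groups flagged for repeated misinformation still appear in search, but they are pushed down the ranking and carry a warning label. The penalty factor, the strike count and the warning text are all hypothetical.

```python
# A minimal sketch of deprioritizing flagged groups in search results. The
# penalty factor, the "misinformation_strikes" field and the warning text are
# hypothetical.

def rank_groups(groups, relevance, penalty=0.2):
    """Rank groups by query relevance, down-weighting and labeling flagged groups."""
    ranked = []
    for group in groups:
        score = relevance[group["name"]]
        if group["misinformation_strikes"] > 0:
            score *= penalty  # push repeat offenders far down the results
            group = {**group, "warning": "This group has repeatedly shared disputed information."}
        ranked.append((score, group))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [group for _, group in ranked]

groups = [
    {"name": "Vaccine Truth Warriors", "misinformation_strikes": 4},
    {"name": "Parents of Springfield", "misinformation_strikes": 0},
]
relevance = {"Vaccine Truth Warriors": 0.9, "Parents of Springfield": 0.7}
for g in rank_groups(groups, relevance):
    print(g["name"], "-", g.get("warning", "no warning"))
```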

Lastly, we propose a fact-check certificate box: a hover-over infographic that summarizes ratings from sources like NewsGuard, PolitiFact and Snopes, indicating the credibility and reliability of a news piece and its publisher. The box could also include suggested articles on the same topic from credible sources. These tools should be available not just in the News Feed but also in groups and on Messenger. We also recommend that Facebook expand its partnerships with journalism outlets and fact-checking sources (like NewsGuard and PolitiFact) globally, and that the company offer regional grants to encourage the establishment of fact-checkers and news publisher raters in more regions and languages.
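The sketch below shows roughly how such a certificate box could aggregate ratings into a single verdict. The numeric scores and the 0-to-1 scale are assumptions for illustration; none of the named fact-checkers actually publishes ratings in this form or through such an interface.

```python
# A minimal sketch of the proposed fact-check certificate box. The numeric
# ratings and the 0-1 scale are assumptions: NewsGuard, PolitiFact and Snopes
# do not publish a common score or an API in this form.

def certificate_box(url, ratings, related_articles):
    """Aggregate per-source credibility ratings (0-1) into one hover-over summary."""
    if not ratings:
        return {"url": url, "verdict": "unrated", "sources": {}, "see_also": related_articles}
    average = sum(ratings.values()) / len(ratings)
    if average >= 0.7:
        verdict = "credible"
    elif average >= 0.4:
        verdict = "disputed"
    else:
        verdict = "low credibility"
    return {"url": url, "verdict": verdict, "sources": ratings, "see_also": related_articles}

box = certificate_box(
    "https://example.com/miracle-cure",                 # hypothetical shared article
    {"NewsGuard": 0.2, "PolitiFact": 0.1, "Snopes": 0.15},
    ["https://example.com/what-the-evidence-shows"],    # hypothetical credible coverage
)
print(box["verdict"])  # low credibility
```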

Facebook’s shift toward privacy indicates that the company is taking its security issues seriously and recognizes that a new approach is needed in order for users to regain trust in the platform. Still, the solutions Facebook proposes—emphasizing encrypted messages and Facebook groups—do not sufficiently address the challenge of pervasive misinformation on the platform.

Of course, with advances in artificial intelligence and the rise of deep fakes, there will always be new and emerging problems in this realm for Facebook to grapple with. And Facebook alone cannot solve the broader problem: Other platforms might offer a safe haven to users with malicious intent. But while social media platforms themselves are not responsible for creating the issues of misinformation or online hate, they do have tools at their disposal to help reduce those harms. The best first step for Facebook is to recognize that a shift toward privacy alone won’t solve the problem—the company needs to take responsibility and, most importantly, take action.


Kareeda Kabir is a senior at Georgetown University studying linguistics. She researches violent extremism.
Ilari Papa is a research assistant at the Washington Institute for Near East Policy, where she examines China and great power competition in the Middle East. She is also a graduate student at Stanford University, where she will pursue a master’s degree in international policy at the Freeman Spogli Institute.
