
Mark Zuckerberg’s Metaverse Unlocks a New World of Content Moderation Chaos

Quinta Jurecic, Alan Z. Rozenshtein
Wednesday, November 3, 2021, 8:01 AM

Given Facebook's problematic history with moderating its platform, the metaverse could recreate and exacerbate existing problems in a new environment.



Goodbye Facebook, hello … Meta?

On Oct. 28, Facebook CEO Mark Zuckerberg announced that he was changing his company’s name to reflect Facebook’s shift in focus from a social media company centered around one product, Facebook, to one that “reflects the full breadth of what we do.” Meta will be built around the “metaverse,” Facebook’s branding for a suite of virtual- and augmented-reality features. 

Zuckerberg earnestly assured the audience that those new features “will require new forms of governance.” For anyone who’s been paying attention to the catastrophic failures of some of Facebook’s existing experiments in platform governance, that should be enough to cause worry. The metaverse could not only recreate these problems and extend them to a new environment; it could make them much, much worse.

The timing of Faceb—er, Meta’s—announcement is itself striking. The company has had a rocky few months with the flood of reporting on Facebook’s failures to adequately moderate its platform, produced on the basis of documents leaked by a Facebook whistleblower and her testimony before Congress. So, although Facebook’s pivot to the metaverse has been a long time in coming, the name change looks in part like an effort to change the subject.

The metaverse as Zuckerberg displayed it in his promotional video, with slick graphics and users beaming virtually into concerts half a world away, is far from the product as it currently exists. As a result, some commentators have criticized Facebook’s conception of the metaverse as “vaporware”: a hyped product that never comes to fruition, an all-too-common phenomenon in the tech industry. But as technology journalist Casey Newton argues, this skepticism may be misplaced. Facebook is investing tens of billions of dollars in the metaverse, as are other high-profile tech companies, from established giants like Microsoft and Apple to the developers of mega-popular games like Fortnite and Roblox—the latter two of which are arguably already far ahead of Zuckerberg’s vision. And once the basic metaverse infrastructure is built, the contributions of developers and users could massively multiply its reach and impact in ways that neither Zuckerberg nor his critics can today imagine—in the same way that, once Apple allowed third-party apps on the iPhone, smartphones took over the world far beyond Steve Jobs’s initial vision of a music player that you could make calls on.

The problem with the metaverse is that the same qualities that make virtual reality a potentially revolutionary technology also make it a deeply dangerous one. The metaverse—in particular its use of augmented and virtual reality—takes the immersive qualities of the two-dimensional internet and turns them up to 11. (If you doubt this, take a trip to your local VR cafe and try out the latest high-end VR headsets; you’ll get a visceral appreciation for why so many people view this as a revolutionary technology on par with personal computing and smartphones.) This technology holds real promise, and not just for everyday consumers looking for entertainment. Imagine, for example, how people with disabilities or life situations that prevent them from traveling could access experiences that never would have been available otherwise.

But increased immersion means that all the current dangers of the internet will be magnified. Today’s relatively crude virtual and augmented reality devices already demonstrate that people react to events in the metaverse with an immediacy and emotional intensity similar to what they would feel if the same things happened to them in the offline world. If someone waves a virtual knife in your face or gropes you in the metaverse, the terror you experience is not virtual at all. People’s brains respond similarly when recalling memories formed in virtual reality and remembering a “real”-world experience; likewise, their bodies react to events in virtual reality as they would in the real world, with heart rates speeding up in stressful situations.

This can be useful for, say, therapy and medical treatment. But the same realism that makes virtual reality a potential boon for patients suffering from phantom limb pain could also mean that harassment in the metaverse will be more visceral and thus more harmful; misinformation more vivid and thus more convincing; everyday experiences more entrancing and thus more addictive. Even the question of how advertising will take shape in virtual reality raises unique problems of transparency. This doesn’t mean that the metaverse’s costs will necessarily outweigh its benefits, but it does mean that its creators and stewards will have to think carefully about how to minimize those costs.

Unfortunately, nothing in Facebook’s history suggests that it will be a good steward to navigate these challenges. The “Facebook Files” and the “Facebook Papers,” tranches of reporting on the documents released by Frances Haugen, the Facebook whistleblower, describe a company whose product has become so massive that it may have grown beyond control. To point to just one example, the documents show violence and harassment piling up in regions of the world where Facebook has failed to invest in adequate content moderation in major languages. Facebook has always been about engagement at all costs and about making as large and global a “community” as possible. The past several years have made clear the downsides of that approach, and the company’s new pivot seems not to recognize these warnings. Zuckerberg’s sales pitch for the Meta rebranding featured plenty of excitement over the possibilities of human connection: “Together, we can finally put people at the center of our technology, and deliver an experience where we’re present with each other!” he announced. But is that necessarily a good thing?

According to the Washington Post, the company “is meeting with think tanks to discuss the creation of standards and protocols for the coming virtual world.” And in a brief (and awkwardly “impromptu”) exchange with Nick Clegg, the former British deputy prime minister and currently Facebook’s global policy chief and all-around political fixer, Zuckerberg made vague references to “designing for safety, privacy, and inclusion before the products even exist.”

While it’s a relief to hear that Facebook is putting at least some thought into this ahead of time, the Facebook Papers show how, again and again, thoughtful and dedicated Facebook employees urged the company to slow down and think more carefully about the negative effects of its products—and leadership ignored them. That doesn’t inspire confidence that whatever standards Facebook initially comes up with for the metaverse will be up to the task, or that the company’s executives will be open to listening to employees about how to fix whatever new problems inevitably arise in virtual reality. As Ethan Zuckerman writes in the Atlantic, “How will a company that can block only 6 percent of Arabic-language hate content deal with dangerous speech when it’s worn on an avatar’s T-shirt or revealed at the end of a virtual fireworks display?”

Much depends on details about the metaverse that are as yet unknown. For example, while Zuckerberg promises “open standards” and “interoperability”—the ability for users to move their avatars and digital assets seamlessly in and out of Facebook-controlled regions of the metaverse—true decentralization involves overcoming major technological challenges. It also requires resisting the profit incentives for maintaining a walled garden, a closed ecosystem under the provider’s control—which was, after all, one of Facebook’s major innovations when it appeared on the decentralized internet of the mid-2000s. 

And even if Facebook supports a decentralized metaverse, it will still exert outsize influence over it, not least because of Facebook’s (now Meta’s) unusual ownership structure in which Zuckerberg controls a majority of the company’s voting shares and thus has essentially dictatorial control over the company and its products. Whether one thinks that more or less or just different content moderation is needed, the fact that so much of people’s digital—and, if the metaverse succeeds, “real”—lives is controlled by one person should cause major unease.

The only way for the metaverse not to turn into a disaster is for Facebook to design it in a way that limits engagement, constrains virality, and in general makes for a more human-scale platform than Facebook itself is. But if the metaverse carries with it Facebook’s philosophy of engagement at all costs—and so far, there is every indication that it does—it will, far from fixing Zuckerberg’s problems, make them infinitely worse.

---

Disclosure: Facebook provides support for Lawfare’s Digital Social Contract paper series, for which Alan Rozenshtein is the editor. This post is not part of that series, and Facebook does not have any editorial role in Lawfare.


Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and as a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
