Sen. Hawley’s Bid to ‘Disrupt’ Big Tech

Kyle Langvardt, Alan Z. Rozenshtein
Wednesday, September 4, 2019, 8:00 AM

In recent months, Republican Sen. Josh Hawley has introduced three bills that would take a startlingly aggressive approach toward the technology industry: prohibiting gambling-like “loot boxes” in video games, restricting certain design features to make social media less addictive and tying platforms’ intermediary-liability immunity to “unbiased” content moderation. In place of the market-based approach to tech policy that has predominated in the United States since the 1990s, the bills would institute a much more hands-on regime, directing specific aspects of product design in gaming and social media.

All of these bills are controversial, and a couple of them have met with widespread ridicule. The ridicule is misplaced. These bills are flawed in the details, and some of those flaws are fatal. But they get the big picture right in more ways than most are ready to admit.

The Proposals

Let’s start with the least (but still) controversial of Hawley’s proposals: a ban on “loot boxes” in video games. Loot boxes are in-game purchases that randomly (or semi-randomly) give the user some reward. They have come under broad criticism for essentially being unregulated gambling, and they are increasingly controversial in the gaming community. Hawley’s bill, which is co-sponsored by Democratic Sens. Richard Blumenthal and Ed Markey, would prohibit loot boxes or other “pay-to-win microtransactions” in games either “oriented” toward minors or where the publisher has “constructive knowledge” that the game is played by minors.

The bill sweeps broadly in two respects. First, its use of a “constructive knowledge” standard means that video game companies will have to go to great lengths to ensure that their games aren’t played by minors. This burden shifting is by design; as the bill’s FAQ states, the “onus should be on developers to deter child consumption of products that foster gambling and similarly compulsive purchasing behavior.” Second, it goes beyond loot boxes to include other in-game purchases that make games easier, such as the chance to skip a difficult level or to purchase extra attempts.

The bill—which comes as states and other countries are considering their own bans on loot boxes—has an uncertain legislative future, but that hasn’t stopped it from spooking the video game industry. Mobile games, in particular, depend heavily on in-app purchases, most of which come from “whales”—a term borrowed from the casino industry for the small share of customers who spend heavily (and often compulsively). A successful regulatory attack on loot boxes and other types of habit-forming product design would seriously impair the business model for mobile gaming. So it’s little surprise that the head of one of the world’s largest video game developers has come out strongly against the proposed legislation, even arguing that it would violate the First Amendment.

Hawley’s most recent piece of technology regulation, the Social Media Addiction Reduction Technology (SMART) Act, takes the concerns animating the loot box bill and applies them to social media apps and platforms. The social media industry has its own reasons to drive compulsive use: The more time a user spends in the app, the more data the app collects and the more ads the app displays. (In 2014, Silicon Valley investor Nir Eyal wrote a bestseller, “Hooked: How to Build Habit-Forming Products,” which revealed and celebrated the recipe for success at Facebook, Twitter and others.)

The SMART Act would ban platforms from “using practices that exploit human psychology or brain physiology to substantially impede freedom of choice.” Among the design features the bill singles out are infinite scroll, video autoplay and badges awarded for high levels of use—all features that have been criticized (and at times defended) as effective ways to encourage habitual use. The bill would also impose a default maximum of 30 minutes of use of a given social media platform per day; users could increase or remove that limit, but it would reset to 30 minutes on the first day of every month.

The most provocative of the trio is the Ending Support for Internet Censorship Act, which would regulate content moderation by giant platforms like Facebook, Google and Twitter. Hawley’s bill targets platforms that moderate content in a politically “biased” way by revoking their protections under Section 230 of the Communications Decency Act of 1996. By preventing platforms like Facebook and Google from being treated as the “publishers” of what their users post, Section 230 provides an immunity from defamation and other tort suits that is unavailable to more traditional media entities like newspapers and broadcast stations. Hawley, who criticizes Section 230 as a “sweetheart deal that no other industry enjoys,” would require the largest platforms to periodically demonstrate to the Federal Trade Commission (FTC) that they are “providing a forum free of political censorship” (something that Hawley unconvincingly claims was at the heart of the original Section 230 “deal”).

This last bill has been widely panned. Critics have argued that government cannot be trusted to regulate political content online and that “neutrality,” at any rate, is undefinable. These concerns are both valid and serious. The problem is that there is no good reason to think that Facebook and Google are any more trustworthy or discerning than government. Today, these platforms’ powers to take down content—and in effect to regulate the media—look a lot more like what states do than what publishers or even cable news channels do. If one accepts the premise that private content moderators are “the new governors” of online discourse, then the question is not so much whether a public agency can be trusted to govern, but whether it is a more trustworthy governor than a private corporation operating outside of any legal framework. We find it hard to imagine a robust system of speech protections in a Facebook-like setting that would not involve an oversight role for an administrative agency, no matter how queasy that makes even casual civil libertarians (ourselves included). Hawley’s bill is the first to acknowledge this, and it probably won’t be the last.

Takeaways

We see five important takeaways from this trio of proposals.

First, the bills directly intervene in how technology companies design and operate their services. This is a striking change from what’s gone before, but it’s also a natural evolution of technology regulation. In the early days of the internet, the government took an intentionally hands-off stance. The past few years have seen increased regulation, but it has taken the form of empowering other actors—specifically, users—to act. Consider SESTA/FOSTA (and other attempts to limit Section 230 immunity); rather than regulating platforms directly, these laws authorize private litigation that pressures companies to improve their policies. Or consider data-privacy laws like the California Consumer Privacy Act (and the law that inspired it, the European General Data Protection Regulation), which largely operate through a notice-and-consent process meant to empower users to control how their data is being used.

The Hawley proposals are much blunter. They identify certain practices and design features as harmful, and then they either ban them or attach penalties to them. No more infinite scrolls. No more loot boxes in kids’ games. Megaplatforms that violate new FTC standards on “political bias” in content moderation will be left to drown in potentially ruinous litigation over third-party content.

It is remarkable to see such measures come from the desk of an otherwise typical deregulatory conservative. The willingness of Hawley and his co-sponsors to regulate a leading economic sector at such an intimate level reveals a deep disillusionment with the current state of affairs and a lack of faith that anything but drastic measures will suffice. It also suggests they think they have found a ripe political target.

The second lesson is that regulation is hard. Any approach that addresses specific design techniques point-by-point will be somewhat clunky, and done wrong it could easily make things worse. These pitfalls are on display throughout the bills on gaming and social media addiction. User interface design changes all the time, and it’s a hopeless endeavor for Congress to try to legislate every time some product manager in Silicon Valley comes up with a new way to extract user attention. The danger of writing laws specifically about loot boxes and infinite scrolls is that such legislation may quickly become obsolete, as has occurred frequently with technology-specific statutes (for example, the federal privacy law that applies only to video rentals). But the alternative—having Congress write broad, technology-neutral legislation and then having agencies implement more specific regulation—runs the risk of giving the government too much power. It’s a tough needle to thread.

Hawley’s bills also have some serious drafting problems. Both the loot box bill and the social media addiction bill sweep far more broadly than their headline policy changes require. The content-moderation bill’s threat to revoke Section 230 immunity strikes us as arbitrary and probably self-defeating, given that firms without Section 230 protections will have new incentives to censor more aggressively lest they open themselves up to liability for any stray comment by a user. And heaven help the social media company trying to work out what “political bias” means. Under Hawley’s bill, any content moderation that “disproportionately restricts or promotes access to ... information from a political party” is “biased,” but content-moderation practices that are “necessary for business” are not. The bill can be read to cover most content moderation, no content moderation or anything in between.

Some of these flaws might be worked out in later drafts—but even then, the business of translating broad policy goals into a precise legal framework will involve plenty of awkwardness. In the meantime, society lacks even a basic consensus on what tech-related problems, if any, it should be trying to address. Are loot boxes “gambling” or simply ways to improve the “user experience”? Is social media “addictive” or merely “engaging”? And are platforms “censoring” speech or simply “moderating”? Consensus on these issues is an elusive and moving target, and any legislation will naturally draw controversy. The fact that these policies will roll out on people’s phones and inside their social media accounts can only personalize and intensify the debate.

Third, it’s dangerously easy for politics to obscure the bipartisan stakes. For example, Republicans are doing their best to make content moderation look like a partisan issue. Sen. Hawley and others have alleged, without any real basis, that lefty Californians at Google and Facebook use content moderation to undermine conservative views they disdain. This unsupported paranoia distracts from the copious evidence of general arbitrariness in platforms’ policies, both in how those policies are applied and in how often they change, seemingly on a whim. The bill’s exclusive focus on “political bias” contributes to a misplaced impression that Republicans alone should be concerned about the status quo. A better bill would reach a broader range of free speech concerns, and its sponsors would draw public attention to censorship risks affecting a broader set of constituencies within and outside both parties. A better Congress, in turn, would see corporate content moderation as a nonpartisan long-term structural risk to democracy—one that needs to be addressed before a giant platform uses its power to get control of politics. (We suspect the Democratic caucuses would grasp this more easily if Walmart owned Facebook or if Koch Industries had a major stake in Twitter.)

A fourth observation: Any attempt to directly regulate how technology companies design and operate their products will raise serious First Amendment challenges. The platforms’ brief against the content-moderation bill, in particular, is clear. Google’s long-term position—as reflected in litigation and suggested in a famous paper it commissioned from Eugene Volokh and Donald Falk—is that it is a “curator” or an “editor” of its search results. In an abstract sense, this “editing” function turns Google into a First Amendment “speaker,” just as the editor of a newspaper “speaks” by choosing what to print. The same argument applies in the social media context: When Facebook takes down a piece of content, it “speaks” by “editing.” (Neither of us is a fan of this argument, which we view as overly formal and unnecessarily expansive, but it has broad support in the industry and, most importantly, matches the current Supreme Court’s First Amendment style.) If Google and Facebook are “speaking” through their content-moderation policies, then Hawley’s bill represents an effort to control the “message.” On this view, making Section 230 protections contingent on a social media company’s content-moderation practices may be an unconstitutional condition in violation of the First Amendment.

Hawley’s proposed legislation on loot boxes and social media addiction raises constitutional questions that are more speculative and potentially farther-reaching. Tech companies could argue that all of the little tricks that make up addictive tech design are expressive content: the loot box animations, the images of “badges,” the choice to allow the user to keep scrolling infinitely. They could even go further and, invoking the principle that code is (sometimes) speech, argue that regulation that forces them to rewrite their code base infringes on their First Amendment rights. The laws’ defenders, meanwhile, may argue that very little of this material is First Amendment subject matter at all. Or they may take up a range of more nuanced and novel positions—arguing, for instance, that laws regulating autoplay or infinite scrolls are subject to intermediate First Amendment review as commercial speech or as time, place and manner regulations.

Such arguments point out the tensions inherent in the current expansive trend in First Amendment doctrine, which can sometimes seem to protect almost everything tech giants do. But the practical realities suggest this can’t be right—does the First Amendment really require the government to jump through tiers-of-scrutiny hoops any time it wants to regulate the largest and most powerful companies in the world? Whatever concerns one might have about the Hawley measures specifically, there is probably some role for the kind of direct design regulation that Hawley has proposed. How First Amendment doctrine will accommodate that role remains unclear.

Finally, although it is virtually certain that none of these bills will get a Senate vote, let alone become law, it is a mistake to write them off as stunts (even if Hawley is particularly good at trolling the technology industry) or as the clueless rantings of the techno-illiterate. It is wiser to take them seriously, even if not literally, and consider their potential (at least in better-drafted form) to expand an often-stagnant discussion around tech policy in a new and dramatic direction.

Hawley—at 39, the youngest member of the Senate—is clearly knowledgeable about and comfortable with technology, and he has been scrutinizing the technology industry since he was Missouri’s attorney general. The Hawley measures have many flaws, but they are not behind the times. Instead, their most fundamental provocation is that they attempt to think ahead to an unsettling future. American scholars and legislators are used to assuming that free speech and the free market wall off software from design-based regulation, that software design is by nature unregulable. Whatever their other problems, the Hawley bills challenge this laissez-faire consensus.

As Tim Wu has described in his media history “The Attention Merchants,” the “bargain” of attention-grabbing media ecosystems is ultimately “beset with a certain ‘disenchantment,’ which, if popular grievance is great enough, can sometimes turn into a full-fledged ‘revolt,’” during which “the attention merchants and their partners in the advertising industry have been obliged to present a new deal, revise the terms of their arrangement.” The three measures we have discussed here represent the closest thing yet to a legislative prototype for that “new deal.” And for that reason alone, they deserve to be taken seriously.

The Hawley proposals—expansive, invasive, constitutionally uneasy—show just how hard it will be to reconcile big tech with the premises of a free society. The bargain with today’s attention merchants has concentrated a stunningly inappropriate amount of power in the hands of a small and unelected group of individuals. The Hawley proposals envision a public regulatory apparatus big and aggressive enough to match that power. One thing is for certain: It won’t be pretty.


Kyle Langvardt is an Assistant Professor of Law at the University of Nebraska College of Law and a Faculty Fellow at the Nebraska Governance and Technology Center at the University of Nebraska.
Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, Research Director and Senior Editor at Lawfare, a Nonresident Senior Fellow at the Brookings Institution, and a Term Member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
