Silicon Valley's Regulatory Exceptionalism Comes to an End
Not so long ago, it was hard to find anyone who thought regulating Silicon Valley was even possible, let alone a good idea. Deference to the technology industry was such that companies were sometimes even applauded for baldly violating existing regulations. Think of the early days of Uber, whose “innovative” business model relied on running over transportation regulations and dealing with fines and lawsuits later.
Published by The Lawfare Institute in Cooperation With
This regulatory exceptionalism, widely shared among Silicon Valley’s otherwise left-leaning elites, has long roots. It goes all the way back to John Perry Barlow’s libertarian declaration of the internet’s independence from “Governments of the Industrial World, you weary giants of flesh and steel.” Today it includes more measured but still emphatic warnings from Silicon Valley elites as well as scholars that the government must regulate with the lightest touch, if at all, so as not to impede innovation.
How the times have changed.
Over the past few days, Facebook has lost tens of billions of dollars in market value as investors punish the company over its latest scandal: that it carelessly turned over tens of millions of Facebook profiles, laundered through an academic researcher, to the Trump-aligned British political intelligence firm Cambridge Analytica, and then neither reported the breach nor changed its policies for months thereafter. This is only the most recent of Facebook’s woes—it’s still being actively investigated by the special counsel and the Senate intelligence committee for its unwitting (but careless) facilitation of Russian interference in the 2016 election. In a revealing interview with the New York Times, Facebook CEO Mark Zuckerberg made clear that the company has finally accepted responsibility for making sure that the platform isn’t used to undermine political processes around the world: “This is a massive focus for us to make sure we’re dialed in for not only the 2018 elections in the U.S., but the Indian elections, the Brazilian elections, and a number of other elections that are going on this year that are really important.” So much for “moving fast and breaking things.”
Facebook might be the most prominent target of blame, but it isn’t the only one. Twitter continues to be flooded with bots, Google has been criticized for trying to silence its critics, Amazon is increasingly viewed as a market-distorting monopoly, and Apple has managed to encrypt its way onto the bad side of law enforcement agencies across the country.
The good old days for Silicon Valley—when technology companies could swat away calls for regulation by pointing to their brilliance and seemingly miraculous profitability—may be rapidly coming to an end. Legislators and policymakers are talking seriously about levels of regulation that would have been unthinkable only a year ago, whether in terms of forcing Facebook, Twitter and Google to aggressively filter their content and advertising or heavily regulating Amazon as a monopoly.
Nor is talk of regulation hypothetical. Congress just passed (and President Trump is expected to sign) the “Allow States and Victims to Fight Online Sex Trafficking Act of 2017” (also known as SESTA or FOSTA). This landmark bill will make online platforms liable if they’re used to facilitate sex trafficking. As I’ve written before, the background immunity that the law abrogates, Section 230 of the Communications Decency Act (CDA) of 1996, “is the most important legal driver of digital free expression” and “enjoys near-mythic status among internet activists and technology companies.” Talk of amending CDA 230 liability has always been “a third rail of internet-policy debates.” Yet it’s finally happened—and though backers of SESTA and FOSTA have played down the bill’s effects on CDA 230, the legislation is likely to have important consequences. Now that Congress has limited CDA 230, it has made “conceptual space for the kind of regulation ... that technology companies have until now successfully fended off.” And the law signals that Congress is playing for keeps; if it’s willing to amend the Magna Carta of the internet, it’s hard to imagine any area of technological regulation that’s off limits.
This new regulatory landscape is striking on a number of fronts, but two in particular are worth noting. First, neither policymakers nor scholars have figured out how to regulate Silicon Valley. Although technology companies are not exceptional in being “above” regulation, the challenge of regulating multinationals with virtual products, user bases that are far more global than they are American, and technology that’s constantly changing, is far greater than that of regulating even the most complex domestic industrial sectors. This is especially true when the government is trying to regulate for national-security or public-safety reasons, a scenario that carries the temptation for extreme risk-aversion and overregulation.
The long-criticized Committee on Foreign Investment in the United States (CFIUS), which conducts lengthy, often-delayed national-security reviews of foreign purchases of American companies, is a case in point. CFIUS may very well strike the right balance between regulation and innovation when a foreign government wants to buy an American steel manufacturer (though even here CFIUS has plenty of detractors), but does anyone think the right answer is to make Facebook or Apple pre-clear their commercial acquisitions or technological innovations with the FCC, let alone the FBI or the intelligence community? For scholars and policy analysts, developing theoretical approaches to and practical frameworks for regulating technology companies is a huge and urgent research agenda. Let’s get on it.
Second, regulation rarely stops where it starts. Once our society decides that it’s a good idea to regulate Silicon Valley in one way—for example, imposing European-style privacy regulations on Facebook or making Twitter liable if its users sex-traffic by tweet—it’s easy to dismiss a technology company’s latest protest against regulation as so much special pleading. To put it another way, once Silicon Valley’s aura of regulatory invincibility vanishes in one area, it vanishes everywhere. So while encryption has nothing to do with social media data-privacy protection, regulation of data privacy might conceivably open the door to a renewed government push for court decisions or legislation that would mandate that technology companies design their encrypted products with law-enforcement access.
The public, and therefore the government, has traditionally relied on technology companies to regulate themselves. As I argued in a recent law review article, “When Goldman Sachs or Monsanto urges us to support something, that’s enough to cause many of us to oppose it; when Apple’s CEO denounces a technical assistance order, many of the same corporate skeptics quickly fall in line.” I’m not sure that’s true anymore. If the public stops trusting what technology companies say on enough big technological or regulatory issues, why trust them at all?