What’s Next for Section 230? A Roundup of Proposals

Zoe Bedell, John Major
Wednesday, July 29, 2020, 9:01 AM

President Trump’s executive order taking aim at Section 230 of the Communications Decency Act is just the most recent in a long line of proposals from both sides of the aisle to potentially limit the statute’s broad scope.

President Trump meets with Twitter CEO Jack Dorsey on April 23, 2019, in the Oval Office. (Source: White House)

Section 230 of the Communications Decency Act has long provided internet platforms like Twitter and Facebook with immunity from claims based on third-party content that appears on their platforms. The statute, codified at 47 U.S.C. § 230, forces individuals unhappy with third-party content to sue the party directly responsible for it—that is, the user who posted it. This immunity has fostered the free flow of ideas on the internet. But critics argue that Section 230 also shields platforms from liability for harmful content—even platforms that allegedly facilitate sex trafficking or publish terrorist content.

The scope of Section 230 immunity has been thrust into the spotlight recently. Perhaps the most publicized attack on the law was President Trump’s May 28, 2020, executive order—released in apparent retaliation for Twitter’s decision to fact check one of the president’s tweets. The executive order essentially reimagines Section 230, putting forth an interpretation that is far narrower than current judicial decisions recognize. A few weeks later, Sen. Josh Hawley and the Department of Justice separately put out their own proposals for how to narrow the scope of Section 230. Hawley attempted to provide some statutory support for Trump’s vision, while Trump’s own Justice Department primarily explored other avenues.

But these high-profile proposals are far from the first effort to amend or repeal Section 230. Rather, they are just the most recent in a long line of proposals from both sides of the aisle to potentially limit the statute’s broad scope.

The proposals fall into three main categories. Some, like the president’s executive order, focus on preventing perceived political discrimination on major platforms (even though data consistently suggests this alleged censorship is not actually happening). Others offer somewhat more targeted solutions for removing immunity for particular categories of harmful (or illegal) content. The final category, including the Department of Justice’s proposal, consists of more sweeping efforts to limit when platforms can claim Section 230 immunity.

Proposals Aimed at Curbing Perceived Political Censorship

The first category of reform efforts has targeted what many Republican politicians view as the anti-conservative bias of major platforms. These efforts, like the executive order, have focused on removing Section 230 immunity for platforms that engage in perceived political censorship.

President Trump’s Executive Order

The most prominent proposal in this category is the president’s recent executive order. As mentioned, the order purports to (re)interpret Section 230 in a way that would narrow the immunity provided by the law. It is a thinly veiled swipe at Twitter’s perceived censorship of Trump and other conservative voices.

The order takes an aggressively purposivist approach, encouraging a departure from the broad protections that courts have uniformly read the statutory text to provide. It suggests that Section 230 was enacted to provide immunity for “good faith” efforts to block harmful content, particularly content harmful to children, but not “to allow a handful of companies to grow into titans controlling vital avenues for our national discourse ... and then to provide those behemoths with blanket immunity when they use their power to censor content and silence viewpoints that they dislike.” The statute’s “limited protections,” the order advises, “should be construed with these purposes in mind.” But courts have consistently held that Section 230 immunity applies without regard to whether particular moderation decisions were made in good faith, because the statute flatly states that internet platforms shall not be treated as the publisher or speaker of third-party content, without any reference to their motivation.

Nonetheless, in light of this perceived narrow purpose, the executive order attempts to reinterpret Section 230, suggesting that websites “should properly lose” their “limited liability shield” whenever they “remove or restrict access to content” other than in good faith. To advance this reinterpretation (which has no force on its own), the order directs the secretary of commerce, “in consultation with the Attorney General,” to petition the Federal Communications Commission (FCC) for regulations clarifying the reach of Section 230. The order requests that the regulations consider whether Section 230 immunity should apply only when removals of content are made in good faith. The order implies that a removal is not in good faith if it is “deceptive, pretextual, or inconsistent with a provider’s terms of service” or “taken after failing to provide adequate notice, reasoned explanation, or meaningful opportunity to be heard.” On July 27, the secretary of commerce filed the petition contemplated by the executive order. The petition, somewhat unsurprisingly, seeks the narrow interpretation of Section 230 envisioned by the order itself (and in some ways goes even further than the order in attempting to restrict Section 230’s scope).

Neither the order nor the petition to the FCC has any immediate impact on Section 230’s scope. Rather, they will have an impact only if the FCC issues regulations adopting the executive order’s vision of the statute. Even then, the order’s impact may be limited. Neither an executive order nor the FCC can graft additional requirements onto Section 230 without congressional action, much less impose requirements that are inconsistent with the statutory text. Courts have consistently held that “[c]onsistency with the authorizing statute is as much a predicate for validity for an Executive Order as for an agency regulation.” Courts have also repeatedly rejected the interpretation of the Communications Decency Act proposed by the executive order and the petition—that is, that the act protects removals of content only if they are in good faith. Meanwhile, the executive order has already been challenged on First Amendment grounds as an impermissible effort to retaliate against Twitter for fact checking the president, and FCC regulations narrowing the act would also likely be subject to legal challenge.

Hawley’s First Proposal

Even before the executive order, Hawley had put forward a proposal to narrow Section 230’s scope that was similarly focused on perceived political censorship. Hawley has been one of the most vocal critics of Section 230 and one of the more prolific drafters of legislation to reform it. In June 2019, he proposed legislation that would impose a difficult-to-obtain prerequisite to Section 230 protection: A platform would be entitled to immunity only if it first obtained a certification from the Federal Trade Commission (FTC) that it does “not moderate information provided by information content providers in a manner that is biased against a political party, political candidate, or political viewpoint.”

Many of the details of Hawley’s approach are unclear, including the exact contours of the certification process and what would constitute “moderat[ing] information” in a way that is “biased against” a particular political viewpoint. The sweep of the proposal, though, is somewhat more targeted than other reform proposals, as it would apply only to large internet companies with more than 30 million U.S. users, 300 million global users or $500 million in annual revenue. (Facebook and Twitter fit these criteria.)

Hawley’s Second Proposal

Following the president’s executive order, Hawley proposed additional legislation that would provide some needed statutory support for the executive order’s vision of Section 230 immunity. Hawley’s second effort, somewhat awkwardly titled the “Limiting Section 230 Immunity to Good Samaritans Act,” would require platforms to “promise” to enforce their terms of service in good faith in order to qualify for immunity. The bill defines “in good faith” to mean that a platform does not selectively enforce its terms of service to restrict access to or availability of certain material.

Like Hawley’s first proposal, this legislation seemingly targets platforms that allegedly disfavor conservative content, threatening them with the loss of Section 230 immunity for such “bad faith” restrictions on content. The bill also provides for civil damages claims (plus attorney’s fees) based on breaches of the “promise” to enforce terms in good faith—that is, situations in which a platform allegedly enforces its terms in a selective manner. This effort, like Hawley’s first proposal, would cover only large platforms, defined to include platforms with more than 30 million U.S. users or 300 million worldwide users in the last 12 months and more than $1.5 billion in annual revenue.

Gosar’s Proposal

Republican Rep. Paul Gosar proposed yet another bill targeting platforms’ moderation of political speech. Gosar’s press release describes the bill’s goal as “facilitat[ing] the option for a self-imposed safe space or unfettered free speech.” As Gosar puts it, his bill would transfer the content-screening function to viewers, thus “stop[ping] Big Tech censorship of competition and lawful political content.” Whether the bill would achieve that goal, though, is far from obvious. Section 230 has two main provisions—one that broadly prevents platforms from being held liable for publication of third-party content, and a second, narrower provision that provides immunity for good faith actions to restrict or remove specific objectionable content. Gosar’s bill would limit the narrower “good faith” provision to permit removal of only “unlawful” content—rather than the broader “objectionable” content currently protected—while also creating a new safe harbor for platforms that give users additional filtering power. The change would thus theoretically dissuade platforms from restricting content themselves and instead encourage them to leave those choices to their users. But the bill leaves untouched Section 230’s broader immunity provision for publication of third-party content—the provision courts have treated as the statute’s workhorse. Modifying the “good faith” safe harbor, then, would not necessarily erode the statute’s broad protections.

Neither Hawley’s proposals nor Gosar’s bill has yet gained any real momentum toward passage. But they represent a growing view in some political circles that Section 230 should not shield alleged political censorship.

Targeted Efforts to Narrow Section 230’s Scope

A second category of efforts to narrow Section 230 has focused on particular categories of objectionable content on platforms, either removing immunity for such content or attempting to use Section 230 to incentivize more aggressive efforts to police it.

FOSTA

In April 2018, long before the executive order, Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), which narrowed the scope of Section 230 immunity. FOSTA declared that platforms cannot claim Section 230 immunity for their role in facilitating sex trafficking through their publication of third-party content. It did so by removing immunity for certain civil claims and criminal charges relating to web platforms’ alleged facilitation of sex trafficking.

FOSTA was intended to target platforms like the now-defunct Backpage, a classified advertising website that many law enforcement officials viewed as a platform on which sex trafficking thrived. Backpage had—before FOSTA—avoided civil liability because of the Communications Decency Act. (However, federal law enforcement agencies shut down Backpage before FOSTA was signed into law, perhaps based on violations of federal criminal law, which are exempt from Section 230 immunity.) Following FOSTA’s passage, other platforms shut down voluntarily for fear of liability. Craigslist, for example, dropped its “personals” section from its website, fearing that it might face far-reaching liability based on its unwitting facilitation of sex trafficking. However, anecdotal reporting suggests that sex workers now feel less safe because, as websites shutter due to FOSTA, workers are unable to screen customers online. Democratic members of Congress have introduced a bill to gather data on FOSTA’s effects.

FOSTA’s true impact will not be known for some time, as websites—and sex workers—are just beginning to navigate the new legal and regulatory landscape, while courts are just now starting to interpret the law’s provisions. But while predictions about FOSTA’s long-term effects are all over the map, the law remains, so far, the only successful effort at paring back Section 230’s scope.

Warner White Paper

Other politicians have proposed going further than FOSTA. For example, in July 2018—just a few months after FOSTA’s passage—Democratic Sen. Mark Warner circulated a white paper advocating additional narrowing of Section 230’s scope. Warner suggested potentially removing Section 230 immunity “for failure to take down deep fake or other manipulated audio/video content.”

Proposal by State Attorneys General

In May 2019, a bipartisan group of 47 state attorneys general wrote to Congress and asked that Section 230 be amended further. The group proposed a relatively modest change to the statutory language with significant implications: In addition to Section 230’s existing carve-out for the enforcement of federal criminal law, the attorneys general would amend the statute to make clear that it does not bar the enforcement of state or territorial criminal law either. That change would vastly expand both the number of criminal laws that could be enforced against publishers—and, thus, that publishers would have to monitor and attempt to comply with—and the number of individuals and agencies allowed to do the enforcing. The attorneys general apparently made similar requests in 2013 and 2017 without much success, and at least so far, it seems likely that their 2019 request will meet a similar fate.

EARN IT Act

In March 2020, Hawley, Republican Sen. Lindsey Graham, and Democratic Sens. Richard Blumenthal and Dianne Feinstein proposed the EARN IT Act. The EARN IT Act, like FOSTA, focuses on specific, criminal content—content relating to child sexual exploitation. It aims to curb that type of content by removing Section 230 immunity for certain claims based on such content.

The EARN IT Act would also create a 19-member commission to formulate best practices for combating child-sexual-exploitation content. The best practices are meant to cover a wide variety of issues, including “preventing, identifying, disrupting, and reporting child sexual exploitation,” “training and supporting content moderators,” “offering parental control products,” and “receiving and triaging reports of child sexual exploitation by users.” The original iteration of the EARN IT Act would have allowed companies to “earn” back Section 230 immunity by complying with the best practices, but the act has since been revised to render the best practices advisory. In its current form, the act would therefore function largely like FOSTA, providing a carve-out from Section 230 immunity for child-sexual-exploitation-related content.

The EARN IT Act’s approach has been criticized by many as dangerous. Specifically, many commentators fear that the EARN IT Act is just a “sneak attack” on encryption, based on the suspicion that the best practices might ban encryption or permit liability for implementing it. An amendment introduced by Democratic Sen. Patrick Leahy attempts to address that critique, stating that companies cannot face liability based on their implementation of end-to-end encryption. But that has not quelled the fears of privacy advocates. The current version of the EARN IT Act, with Leahy’s amendment, was approved unanimously on July 2 by the Senate Judiciary Committee. It will next move forward to a Senate floor vote.

Sweeping Changes to Section 230’s Scope

Other proposals would go further, in some ways, than those discussed above, changing the landscape of Section 230 immunity entirely. Such proposals have been put forward by politicians, scholars and members of the public.

Proposals by the Department of Justice

The Department of Justice released a report in June 2020 identifying “areas ripe for Section 230 reform.” The report declares that “the time is ripe to realign the scope of Section 230 with the realities of the modern internet,” and it puts forward a number of proposals for paring back Section 230 immunity. For example, the report suggests several additional carve-outs from Section 230, including for “truly bad actors”—that is, platforms that “purposefully facilitate or solicit third-party content or activity that would violate federal criminal law”—and for “claims that address particularly egregious content,” including child exploitation and sexual abuse, terrorism, and cyber-stalking.

But the report also goes beyond those sorts of narrow subject-matter carve-outs. For example, the report proposes removing Section 230 immunity for federal civil enforcement actions, which would remove immunity in cases brought against platforms by the FTC, Justice Department and other civil agencies. In other words, while private citizens couldn’t bring suit, the federal government could. The report also offers other ideas for consideration, such as distinguishing between different types of internet services for immunity purposes (for example, platforms versus internet service providers) and “sunsetting” Section 230 immunity at some point. It presents a broad and multifaceted approach to narrowing Section 230 immunity.

Proposals by Prominent Democrats

A number of prominent Democrats have suggested making wholesale changes to Section 230’s scope. While several candidates for president weighed in on the issue in broad terms, former member of Congress and Democratic presidential hopeful Beto O’Rourke was the first to offer a formal proposal. That proposal would have encouraged tech companies to adopt and implement terms of service aggressively policing speech that “incite[s] or engage[s] in violence, intimidation, harassment, threats, or defamation” against others based on various demographics, including race, religion and gender identity. Companies that failed to comply would lose their immunity.

Though O’Rourke dropped out of the race early on, presumptive Democratic nominee Joe Biden appears to have taken up the mantle of Section 230 reform. Biden had previously suggested openness to reform, but in a January 2020 interview with the New York Times editorial board, he went much further, stating that Section 230 should be “revoked, immediately.” Though Biden has not provided any additional details on his plan, he did double down on revocation in May, after the president issued his executive order. (The former vice president finds himself in the somewhat unusual position of echoing a stance taken by the president and Republican Rep. Matt Gaetz.)

Platform Accountability and Consumer Transparency Act

Most recently, on June 24, Democratic Sen. Brian Schatz and Republican Sen. John Thune introduced the Platform Accountability and Consumer Transparency (or PACT) Act—one of the more detailed and nuanced proposals made so far. It has two main goals—increasing the transparency and consistency of content moderation decisions, and decreasing the amount of unlawful content on web platforms.

The bill aims to increase transparency by requiring clear content moderation policies and quarterly “transparency reports” regarding moderation decisions. It tries to eliminate unlawful content by creating a notice-and-takedown regime, in which companies must remove, within 24 hours, content that a court has deemed unlawful (or lose Section 230 immunity for claims based on that content). The PACT Act also eliminates immunity from federal civil enforcement actions and empowers state attorneys general to bring claims based on federal civil statutes if their state has an analogous law.

Daphne Keller of Stanford’s Center for Internet and Society has a detailed overview of the bill, which she calls “a list of serious ideas, imperfectly executed.” It remains to be seen whether the PACT Act will move forward, but as Schatz has pointed out, it is sponsored by both the chairman and the ranking minority member of the Senate Subcommittee on Communications, Technology, Innovation, and the Internet, so it has at least some chance of gaining traction.

Hawley’s Third Proposal

Hawley, in addition to his political-censorship-focused proposals, has also recently introduced a third proposal that would target Section 230 immunity from a different angle. On July 28, 2020, Hawley announced the Behavioral Advertising Decisions Are Downgrading Services (or BAD ADS) Act. The BAD ADS Act, rather than targeting perceived political censorship, would remove immunity based on certain advertising practices by web platforms. In particular, it would strip a platform of Section 230 immunity for 30 days whenever the platform engages in “behavioral advertising,” which the act defines as advertising that is targeted based on a user’s personal traits, personal information or online behavior. As Hawley puts it, the act is intended to disincentivize “Big Tech companies like Google and Facebook” from “creat[ing] psychological profiles, which are then used to deliver individually targeted ads.” As with Hawley’s other proposals, this one targets only large platforms—those with more than 30 million U.S. users or 300 million worldwide users in the last 12 months and more than $1.5 billion in annual revenue. This latest effort shows Hawley’s continued interest in narrowing Section 230’s scope.

“Reasonableness” Proposal

One final approach would require platforms to take “reasonable steps to prevent or address unlawful uses of their services” in order to qualify for Section 230 immunity. That approach was originally proposed by Danielle Citron and Lawfare editor-in-chief Benjamin Wittes, with the aim of “eliminat[ing] the immunity for the worst actors.” Of course, the meaning of “reasonable steps” is vague and would likely be given meaning through extensive and costly litigation that to some extent could defeat the purpose of Section 230. But this approach might appeal to lawmakers looking to narrow Section 230 immunity and incentivize the removal of unlawful content. Its broad “reasonableness” requirement would in some ways be the most sweeping and burdensome of all the proposals discussed above. This approach has been endorsed by IBM—a notable development given that the technology industry has rarely provided that sort of explicit endorsement to proposals like these. However, IBM is not necessarily one of the companies facing the most direct or intense scrutiny from lawmakers and the public, so it is not clear how much weight IBM’s endorsement will carry, either within the tech industry or more broadly.

***

These proposals show a growing push to revise the scope of Section 230 immunity. But they are not without resistance—Section 230 has its share of defenders fighting to keep it in its current form. Eric Goldman, a prominent Section 230 expert, has criticized several efforts to narrow immunity as counterproductive and problematic. Likewise, the Electronic Frontier Foundation (EFF) has criticized some efforts to narrow Section 230 as both flawed and potentially unconstitutional. EFF and others have emphasized that Section 230 should not be narrowed because it has played—and continues to play—an essential role in allowing internet businesses to thrive.

Whatever results from these efforts, there is clearly bipartisan interest in revising the immunity that Section 230 provides. Of course, even without Section 230, platforms would still have some viable defenses to content-related claims. But as one of us (Major) has explained, those defenses are far less developed, and would be less reliable, than Section 230 immunity. The success or failure of these proposals will therefore play a major role in what the internet looks like in the years to come.


Zoe Bedell is an attorney in the Washington, D.C., office of the law firm Munger, Tolles & Olson LLP. Her practice focuses on complex commercial litigation, as well as privacy and technology issues. Before joining the firm, Zoe clerked for Justice Elena Kagan of the U.S. Supreme Court and for then-Judge Brett Kavanaugh of the U.S. Court of Appeals for the District of Columbia Circuit. Zoe received her J.D. from Harvard Law School, magna cum laude. Prior to law school, Zoe served as an officer in the U.S. Marine Corps, deploying twice to Afghanistan, and worked at an investment bank for two years.
John Major is an attorney at Munger, Tolles & Olson LLP. He frequently represents technology companies in platform liability matters. He previously clerked for Judge Paul Watford and Judge Alex Kozinski of the U.S. Court of Appeals for the Ninth Circuit.
