
Back to the Future for Section 230 Reform

Mark MacCarthy
Tuesday, March 2, 2021, 11:54 AM

The notice and takedown system rejected in 1997 might be a way forward.

The U.S. Capitol in Washington, D.C. (JStephanMease, https://tinyurl.com/y6v5bjvy; CC BY-SA 4.0, https://creativecommons.org/licenses/by-sa/4.0/)


Reform of Section 230 of the Communications Decency Act is on the agenda for both Congress and the Biden administration this year—all the more so after the renewed discussion around technology platforms sparked by the U.S. Capitol riot and President Trump’s subsequent deplatforming from major social media platforms. Just recently, members of Congress and congressional aides reported conversations with the White House on how to regulate the tech titans, including reform of Section 230.

Section 230 provides immunity for platforms when they act as the publisher of the material posted by their users, including when the platforms edit or fail to edit this material. Some conservative advocates for reform of the statute want to constrain platform editorial discretion to create a more favorable climate for their political perspectives. But they also want to exempt from Section 230 immunity certain especially harmful illegal activity, such as sex trafficking and child sexual abuse material. Progressives pushing for reform, meanwhile, want platforms to be less hostile toward speech from marginalized groups struggling for social, economic and racial justice. But they also want platforms to take action against hate speech and white supremacy. It is not clear whether both sides will be able to arrive at a unified reform.

If policymakers are unable to come to agreement, the courts may take action in Congress’s place. In a 2020 statement respecting the Supreme Court’s denial of a petition for a writ of certiorari in a case involving a Section 230 claim, Justice Clarence Thomas argued that judges have extended Section 230 far beyond what Congress intended—proposing a reinterpretation of Section 230 that would “par[e] back the sweeping immunity courts have read into §230.” And the Supreme Court might soon have another chance to reconsider Section 230, as one case before the Texas Supreme Court is on track to make its way up to the justices. If Congress does not enact Section 230 reform, the Supreme Court could well act to “pare back” Section 230 immunity in some way, even if not in precisely the fashion that Justice Thomas would like.

A new liability regime is coming. It would be better for Congress to think these issues through anew than to see the Supreme Court establish the new regime disguised as interpretation of a 25-year-old statutory text.

But what is the best way forward? Sometimes it pays to examine a road not taken. In this case, the notice and takedown liability regime, rejected by the courts in 1997, might be worth reconsidering, and it might be something the warring partisan factions can agree on.

What Reforms Are Under Consideration?

A popular and bipartisan approach to Section 230 reform involves piecemeal carve-outs that focus on particularly egregious online harms and illegality. Congress did this in 2018 by passing the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), creating liability for online companies that facilitate or promote sex trafficking. The bill passed the Senate on an overwhelming bipartisan vote of 97-2. The Earn It Act, sponsored by Republican Sen. Lindsey Graham with bipartisan co-sponsors including Democratic Sen. Richard Blumenthal, would withdraw Section 230 protection for violations of child sexual abuse laws. Democratic Rep. Anna Eshoo has introduced legislation to withdraw Section 230 immunity for the amplification of content connected to civil rights violations or involving acts of international terrorism.

The Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (Safe Tech) Act introduced by Democratic Sens. Mark R. Warner, Mazie Hirono and Amy Klobuchar would make sure Section 230 does not apply to ads or other paid content, does not bar injunctive relief, does not impair enforcement of civil rights laws, does not interfere with laws that address stalking and cyber-stalking or harassment and intimidation on the basis of protected classes, does not bar wrongful death suits, and does not bar suits under the Alien Tort Claims Act. The explanation of the Safe Tech Act released along with the text notes a crucial element of most Section 230 reform proposals. Namely, they “do not guarantee that platforms will be held liable in all, or even most, cases…. Rather, these reforms ensure that victims have an opportunity to raise claims without Section 230 serving as a categorical bar to their efforts to seek legal redress for harms they suffer—even when directly enabled by a platform’s actions or design.” FOSTA is an exception; it created a new offense of promoting or facilitating prostitution.

Sometimes these measures target particular Section 230 court decisions. Eshoo’s proposal to remove immunity for algorithmic amplification in connection with suits involving terrorism strikes at the decision in Force v. Facebook, Inc., where the U.S. Court of Appeals for the Second Circuit granted immunity to Facebook for recommending that terrorists read one another’s posts and meet one another. The Safe Tech Act responds, among other things, to the Herrick v. Grindr case, in which the U.S. Court of Appeals for the Second Circuit issued a summary order upholding a district court decision immunizing the dating app from a product defect claim alleging that it did not have an effective system to prevent harassment and threats to physical safety.

By picking and choosing among the possible sources of litigation against social media companies, this carve-out approach inevitably increases the burden of legal analysis on social media companies to determine whether or not they have immunity in particular cases. Moreover, as Danielle Citron and Mary Anne Franks note, piecemeal reform is “inevitably underinclusive.” It creates an open-ended drive for Congress to update Section 230 exceptions whenever a new egregious harm is immunized by an especially broad interpretation of Section 230. Piecemeal proposals also increase the difficulty of finding bipartisan agreement on which immunities to remove and which to leave in place in a particular measure.

Conservatives sometimes focus their reform proposals on modifications of Section 230(c)(2)’s requirement that platforms act in “good faith” when they remove content. But this proposal is based on a misunderstanding of how Section 230 works in practice. As Eric Goldman—the best and most knowledgeable defender of Section 230—has noted, the platforms rarely use Section 230(c)(2) to escape liability for their content removal decisions. This is because Section 230(c)(2)’s requirement to show good faith imposes a high litigation burden, including the expensive and time-consuming burden of discovery. The major advantage for platforms from Section 230’s grant of immunity is not that platforms will necessarily win cases after lengthy court proceedings, but that they can often short-circuit the case before it can proceed. As a result, in practice platforms get their immunity without extended court processes by invoking Section 230(c)(1)—which courts have interpreted broadly to immunize any action where a platform acts as a publisher, including content removal or filtering, and which does not require any showing of good faith for such removals.

Some conservatives want to condition Section 230 immunity on a regulatory finding of a platform’s political neutrality. Some progressives and moderates, by contrast, want to make Section 230 immunity contingent on a court’s determination of reasonable content moderation practices. But these requirements for good behavior need not be tied to Section 230. If these practices are good policy ideas, Congress could mandate them directly and put a regulator in charge of interpreting and enforcing these rules. For example, I have proposed a stand-alone requirement that platforms act consistently with their own content rules, enforced by a dispute resolution mechanism modeled on the arbitration system maintained by the Financial Industry Regulatory Authority for broker-dealers. Conditioning Section 230 immunity on satisfying these obligations is an ineffective and indirect compliance mechanism.

Section 230 reform is not the right vehicle for addressing many issues that might need to be resolved in crafting a regulatory framework for social media companies. Reform of the statute cannot deal with how platforms handle legal but harmful material such as hate speech and disinformation, which is protected by the First Amendment. It is also the wrong mechanism for imposing transparency and accountability on content moderation practices that enforce a platform’s own content rules against legal speech. I have recommended a free-standing law imposing transparency requirements on social media companies. Section 5 of the Platform Accountability and Consumer Transparency (PACT) Act, introduced in 2020 by Democratic Sen. Brian Schatz and Republican Sen. John Thune, does this without reforming Section 230, as does a draft bill circulated last year by Democratic Rep. Jan Schakowsky, chair of the Consumer Protection Subcommittee of the House Energy and Commerce Committee.

Section 230 reform is really about the extent to which platforms are liable for illegal speech and activity on their systems—not about these very different regulatory issues. Progressives and conservatives largely agree that there is too much illegal and harmful material on social media platforms and that a reform of the current liability system could go a long way to eliminating much of it.

A Notice and Takedown System for Social Media

But the past has something to teach today’s policymakers. Back in 1997, the U.S. Court of Appeals for the Fourth Circuit in the Zeran case had a chance to allow a notice and takedown system to develop under Section 230. The idea the court reviewed was that Section 230 immunity would not apply if a service provider got an adequate notice about illegal material on its system and failed to act. But the court rejected this idea as inconsistent with the purpose of Section 230 to provide an incentive for self-regulatory content moderation. The judges also thought notice liability would lead to excessive content removal: Faced with a blizzard of notices and in the absence of any liability for removals, platforms would have a “natural incentive simply to remove messages upon notification.”

This is now settled Section 230 law. But it was a mistake then—and is bad policy for today’s online world. A notice and takedown system would be far more effective in policing illegal content than immunizing companies for their failure to act, which is what Section 230 does. And the risks to free speech can be managed.

The United States has had more than 20 years of experience with a notice and takedown regime established in the 1998 Digital Millennium Copyright Act (DMCA), which provides a safe harbor from copyright infringement liability for platforms that act to remove infringing material upon notification. While neither content owners nor platforms are completely satisfied, this law has provided a workable system that protects both copyright interests and free speech. It imposes strict conditions on adequate notices, such as identifying the specific work infringed, a statement of good faith, and a statement under penalty of perjury that the complainant is authorized to act for the copyright owner. The DMCA also requires the platform to provide an opportunity for the alleged infringer to file a counter-notice, and if the alleged infringer does so, the platform must restore the removed material unless the complaining party goes to court within 10 days alleging copyright infringement.

Regulators in the European Union have also drawn on the DMCA’s model. In its 2000 Electronic Commerce Directive, the European Union provided immunity to online service providers only if they acted expeditiously to remove illegal material once they obtained actual knowledge of its illegality. This provided the legal basis for member states to establish general notice and takedown schemes, but few did so.

In December 2020, the European Commission proposed a new Digital Services Act (DSA), which, among other things, would establish a pan-European notice and takedown regime. Under the proposed new system, a platform would have to act expeditiously to remove illegal material upon receipt of an adequate notice in order to maintain immunity. In addition, the act would require complaints to contain an explanation of the alleged illegality, identifying information and a statement confirming good faith. Platforms would also have to establish redress mechanisms for users whose content is removed, including internal complaint-handling mechanisms, out-of-court dispute settlement and judicial redress.

As with piecemeal Section 230 reforms, these new rules under the DSA do not themselves make a platform liable for damages if it fails to remove material upon complaint. They say only that if the platform fails to act, it loses immunity from whatever liability already exists under other laws for leaving such material in place. As Recital 17 of the DSA says, the new rules “should not be understood to provide a positive basis for establishing when a provider can be held liable, which is for the applicable rules of Union or national law to determine.”

Congress should examine the pros and cons of establishing a platform notice and takedown regime modeled on the systems in the DMCA and the Digital Services Act. Such a system would not require platforms to review material before it is posted. It would build on the complaint systems the larger social media companies already have in place and would rely on these complaints to alert platforms that someone has posted potentially illegal and harmful material. In the Grindr situation, for instance, the dating app would continue to enjoy immunity until the victim of the impersonation and harassment notified the company, and it would remain immune from liability if it acted expeditiously to block the harmful activity.

To control the risks of excessive removals, the new system should include all the protections in these model laws, including provisions for counter-notice and for reinstatement if the complainant does not initiate court proceedings, requirements for detailed and specific complaints citing the relevant statutes, and requirements for good faith backed by perjury penalties. Other protections could be considered, including liability for wrongful removal, which might allow recourse to the courts when perfectly legal material is removed in error. The law also could require platforms to take action against repeat offenders and groups that abuse the complaint process, and to develop a trusted reviewer program that privileges the complaints of verified users. In addition, the new requirements could be limited to the largest platforms, which create the greatest risks of illegal conduct and which have the resources to manage notification liability.

No system that imposes notice liability can avoid creating some incentive for excessive removals, but the current system errs in the opposite direction, allowing too much illegal and harmful material to remain. Anti-abuse measures like the ones just described have made the DMCA’s operation tolerable over the past two decades and should mitigate the risks of extending the model to broader liability.

Progressives and conservatives should be able to agree that a notice and takedown system as a reform of Section 230 deserves further consideration. Both sides want to cut down on the illegal and harmful material that appears far too often on platforms under the current system. They disagree about other issues such as whether platforms are taking down too much legal conservative content. But issues like political neutrality and reasonable access to platform services for all political perspectives are outside the scope of Section 230 reform and can be pursued through other vehicles. The same is true for mandates for fairness, accountability and transparency in content moderation.

If policymakers want to collaborate in reducing illegal and harmful online content, they will face a choice between piecemeal Section 230 reform, such as the Earn It Act or the Safe Tech Act, and more fundamental reform, such as a notice and takedown regime. Piecemeal reform poses a number of problems: It does not solve the underlying issues, raises the specter of partisan gridlock on any specific measure, and opens the door to an endless series of exceptions. Instead, the warring partisan factions should look carefully at a notice and takedown system. To make progress, sometimes policymakers need to go back to examine a road not taken.

Mark MacCarthy is nonresident senior fellow at the Brookings Institution, senior fellow at the Institute for Technology Law and Policy at Georgetown Law and adjunct professor in Georgetown’s Communication, Culture & Technology Program.
