
To Protect Kids Online, Follow the Law

Mary-Rose Papandrea, Matt Perault
Thursday, June 6, 2024, 8:00 AM
Courts have repeatedly struck down states’ child safety bills. Looking to past cases gives lawmakers a better playbook for future legislation.
Kids using smartphones. (Kampus Productions, Pexels; https://www.pexels.com/photo/kids-using-gadgets-while-on-a-sofa-bed-7414068/)


The most active issue in technology policy in the United States is child safety. In 2023, states passed 23 laws governing kids’ experiences online, and many more legislatures are debating the issue this year. In Congress, lawmakers have held hearing after hearing, and one proposal, the Kids Online Safety Act, garnered more than 60 Democratic and Republican cosponsors and was marked up last week. Although the parties remain polarized on many issues, protecting kids has resulted in a rare display of bipartisanship.

Regardless of the merits of these laws, they suffer from a fatal flaw: They are likely to get struck down in court. A number of states—including California, Arkansas, and Ohio—have passed online child safety laws, only to have courts block them before they can be implemented. These laws have suffered similar fates for similar reasons—judges have found that they violate the First Amendment because they restrict too much speech. 

Experts argue that this approach to the First Amendment suffers from its own flaws. For example, Tim Wu, professor at Columbia University, claims that the First Amendment is “obsolete.” Mary Anne Franks, professor at the George Washington University, argues that free speech has become a “Lost Cause,” unable to “be explained by any principled doctrine.” Other scholars have made the case that current First Amendment doctrine hearkens back to the flawed jurisprudence of the Depression era, when courts repeatedly struck down economic regulation even as the country wrestled with the most harrowing economic crisis in its history. Similarly, they argue, at a time when internet users need more governance of their online lives, and specifically when children need protection from harmful online experiences, courts will use the First Amendment to block any attempt to regulate the tech sector.

But courts have never suggested that the First Amendment prohibits all laws regulating the tech sector. Instead, they have signaled that laws will be upheld if they follow the road map outlined in past cases. The problems arise when lawmakers deviate from the guidance courts have provided on what’s permissible and what’s not.

To pass kids’ online safety laws that survive court review, lawmakers can stick to four principles.

First, they should draft content-neutral laws. Content-based rules receive more stringent review and therefore are more likely to be struck down, even if the restriction is aimed at content “harmful to minors.” In Brown v. Entertainment Merchants Association in 2011, the Supreme Court said it was “unprecedented and mistaken” when California tried to “create a wholly new category of content-based regulation that is permissible only for speech directed at children.” California had passed a law prohibiting the sale of violent video games to minors. Brown specifically rejected California’s reliance on Ginsberg v. New York, a 1968 case that upheld a state law prohibiting the sale to minors of material deemed obscene for them. Brown held that unless a legislature is relying on an existing category of unprotected expression, it may not create wholly new categories of speech “harmful to minors.” The Court reached this conclusion even though the California anti-violence law closely mimicked the New York obscenity-for-minors law at issue in Ginsberg.

As Brown makes clear, there is no general carve-out from the First Amendment for speech that is “harmful to minors.” Even Justice Samuel Alito’s concurrence in Brown, which criticized the majority for its failure to use “caution” when evaluating a legislature’s attempt to deal with the challenges of new technology, chided California for relying on “undefined social or community standards” to determine what types of violence were harmful to minors. Alito, joined by Chief Justice John Roberts, noted that “reasonable people” could disagree about which depictions of violence are suitable for children and adolescents. Furthermore, Alito criticized California’s law for lumping together all minors, from young children to those just under age 18; what might be harmful for younger children might not be harmful at all for those who are almost adults.

Second, lawmakers must recognize that minors have First Amendment rights too. These rights include the right to speak as well as the right to receive information—therefore, online safety laws shouldn’t try to turn off the information firehose for kids. The Court quoted Brown approvingly in its recent 8-1 decision in Mahanoy Area School District v. B.L., in which it held that a public school could not constitutionally punish a student for a colorful Snapchat post, “Fuck cheer fuck everything,” complaining that she had not made the cheerleading squad. Even in a case considering the leeway a school might have to regulate student speech in light of the “special circumstances of the school environment,” the Court almost unanimously declared that minors have robust free speech protections against government regulation. Justice Clarence Thomas dissented, but he is the only member of the Supreme Court who believes that parents have absolute authority over the rights of minors to receive speech and to speak. Laws prohibiting minors from accessing social media without parental permission ignore these lessons from Brown and Mahanoy at their peril.

Third, lawmakers should protect the rights of adults to access information online. Courts have repeatedly found that requiring users to verify their age to access lawful content is unconstitutional because it doesn’t prevent minors from accessing prohibited content but does burden adults trying to access what they’re permitted to see. In Reno v. ACLU, the Court struck down a law as unconstitutional because it “lacks the precision that the First Amendment requires” since it “suppress[es] a large amount of speech that adults have a constitutional right to send and receive.”

And fourth, lawmakers should propose laws that give people control over what they see online. Laws that give users more say over what’s in their feeds and what’s in their results might enable parents to supervise their kids’ online experiences or promote the use of filtering software to help parents and kids manage the content they see. In United States v. Playboy Entertainment Group, a case about the availability of the Playboy TV channel, the Supreme Court held that the government “cannot ban speech if targeted blocking is a feasible and effective means of furthering its compelling interests.” Similarly, in Reno, the Court listed a number of control-oriented alternatives to broad speech restrictions, repeatedly emphasizing the value of parental choice and control. It pointed to “user-based software” as a “reasonably effective method by which parents can prevent their children from accessing material which the parents believe is inappropriate” (emphasis in original).

Many recent laws run afoul of these four principles. Some proposals try to limit kids’ exposure to specific types of content, like pornography, but their age verification requirements impermissibly interfere with the First Amendment rights of adults. Others require platforms to impose age verification systems, with Florida going so far as to prohibit social media use by children under age 14. And others focus on controlling the speech that platforms allow, rather than giving users and parents control over what they see.

The courts have called out these deficiencies and have issued injunctions to stop several of the new child safety laws from going into effect. In a decision blocking California’s law, the court found that age estimation requirements would “appear to counter the State’s interest in increasing privacy protections for children” by compelling users to submit identifying information or consent to have their faces scanned to estimate their age. The judge also emphasized that the law could “chill a ‘substantially excessive’ amount of protected speech to the extent that content providers wish to reach children but choose not to in order to avoid running afoul” of the law. Similarly, she expressed concerns about the breadth of the restrictions the law could require, holding that “[i]n seeking to prevent children from being exposed to ‘harmful unsolicited content,’ the Act would restrict neutral or beneficial content, rendering the restriction poorly tailored to the State’s goal of protecting children’s well-being.” Repeatedly, the judge found a weak link between the law’s ends and the means it chose to pursue them.

Judges in Arkansas and Ohio found similar constitutional shortcomings in the safety laws in their states. In Arkansas, the court granted a preliminary injunction to stop the law from going into effect because the law’s prohibitions swept too far, disproportionately burdening adult speech and limiting minors’ access to content that is not harmful to them. The judge concluded the narrow-tailoring analysis by stating that “[i]f the legislature’s goal in passing Act 689 was to protect minors from materials or interactions that could harm them online, there is no compelling evidence that the Act will be effective in achieving those goals.” Likewise, in Ohio, the judge found that cutting off “all content on websites that the Act purports to cover” was a “breathtakingly blunt instrument for reducing social media’s harm to children.”

A more successful path would be to craft legislation that aligns with the law. Legislators could devise schemes that give parents and minors more controls, including by encouraging platforms to experiment with kids-focused services. As the judge wrote in the Arkansas decision, content filters, parental controls, and other tools that “enabl[e] or encourag[e] users (or their parents) to control their own access to information” are less restrictive than age verification requirements.

Legislators could also try to incentivize platforms to offer safer experiences for kids, rather than seeking to limit the information platforms can host or the content kids can see. A new privacy proposal in Congress includes a pilot program to “encourage private sector use of privacy-enhancing technology.” Similarly, lawmakers could use experimental policy frameworks to encourage platforms to test products and features that improve kids’ experiences online. Enacting safety pilot programs might foster more platform innovation and competition that could spur the creation of new safety features.

States might also experiment with educational programming that guides students to use technology products more safely. Florida considered a bill that would have required schools to develop curricula on “social media safety.” These types of digital literacy requirements are not content-based restrictions, do not limit a minor’s ability to send or receive information, and do not burden adult speech. They also help users to make more informed choices about what they want to see and how to establish their own safety parameters. In sum, they are more likely to survive judicial review.

Of course, even if these solutions work better from a legal perspective, they still have flaws. Offering better user controls might not have a meaningful impact on kids’ health if kids choose to see content that’s bad for them. Some experts have raised concerns about the impact of parental supervision on LGBTQIA+ youths, for example, who may fear that parental involvement will limit their ability to explore their identities online. Similarly, conservative youths wary of oversight from liberal parents might not welcome parental supervision of their online lives. And pilot programs and digital literacy programs may be viewed by many experts as small solutions to large problems; the likelihood that they will move the needle on teen wellness is small.

There’s also the practical challenge: Legislators don’t write laws in a perfect, lab-controlled environment where only the opinions of judges matter. They write laws in response to a wide array of political pressures, and they must appeal to different constituencies to ensure that they can secure enough votes for passage. For a lawmaker who wants to get a bill passed, these political constraints make it difficult to hew closely to the legal ones. 

Another important factor is the uncertainty of the judicial process. Even when legal precedent is clear, courts can render unexpected decisions. Recently, the Fifth Circuit upheld a Texas law that requires age verification for adult sites, overturning a contrary decision from a Texas district court. The Fifth Circuit found that the law was not content based and reviewed it using rational basis review rather than strict scrutiny, concluding that it is “rationally related to the government’s legitimate interest in preventing minors’ access to pornography.” A dissenting judge emphasized that the ruling was out of step with prior cases: “Stepping past this precedent, the majority’s new rule unjustifiably places the government’s interest upon a pedestal unsupported by Supreme Court precedent.” Whatever the case law might say, judges have the power to reinterpret and reinvent it.

Despite some legal uncertainty and a lack of clarity on the right policy path to protect kids online, past cases provide guidance that lawmakers should follow. Legislators’ current path is unlikely to result in meaningful change if the laws they pass can’t survive judicial review. If they want to change the governance of kids’ experiences online, legislators should pass laws that are consistent not with aspirations of what the First Amendment might be, but with the reality of what the First Amendment is now.


Mary-Rose Papandrea is the Samuel Ashe Distinguished Professor of Constitutional Law at the University of North Carolina School of Law.
Matt Perault is a contributing editor at Lawfare, the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and a consultant on technology policy issues.
