A Salesman’s Guide to the Scourge of Misinformation
A review of Steven Brill’s “The Death of Truth” (Knopf, 2024)
One would not expect that a book by a businessman hawking his company’s services could illuminate a pressing civic problem, let alone make for good reading. But Steven Brill’s “The Death of Truth” surprises.
Brill is a journalist, bestselling author, and serial entrepreneur whose past ventures include American Lawyer Media, Court TV, and Brill’s Content magazine. The start-up he is promoting with his latest book is called NewsGuard Technologies, a company that provides ratings of online news and information. With characteristic bombast, he describes the civic problem in question in his subtitle: “How social media and the internet gave snake oil salesmen and demagogues the weapons they needed to destroy trust and polarize the world—and what we can do.”
Arguing that truth has “died,” Brill insists, relentlessly, that NewsGuard can bring it back to life. He and his co-founder, Gordon Crovitz, a former publisher of the Wall Street Journal, market their firm as a human-scale, journalism-driven antidote to algorithmic amplification of mis- and disinformation. They have sought to sell their services, with mixed results, to Meta and other major social media companies, Google and its smaller competitors in the programmatic advertising market, and, most recently, to companies developing generative artificial intelligence (AI) systems. The surprising aspect of “The Death of Truth” is that by describing his frustrations as a salesman, Brill sheds valuable light on the inner workings of Silicon Valley and its malign effects on politics, public health policy, and society at large.
Digital “Nutrition Labels”
Brill and Crovitz started NewsGuard in 2018 as a nonpartisan watchdog. Brill is a moderate Democrat; Crovitz, in his partner’s words, a “rabid conservative.” (I have known Crovitz since my days as a news reporter and editor at the Wall Street Journal. My wife worked for Brill at Court TV and Legal Times for several years in the 1990s.)
NewsGuard hired trained journalists to report on and rate the reliability of news and information websites and their associated Twitter (now X) accounts, Facebook feeds, and YouTube channels. The plan was to license the ratings—a point score of 0 to 100—to social media companies, which would attach them to each publisher when their work appeared on the platforms. Readers could then click through to what NewsGuard calls its “nutrition labels,” which explain its ratings in detail.
The company bases its evaluations on nine criteria, failure on any of which leads to a point deduction from a starting score of 100. The worst transgression, “repeatedly publishing false news,” as determined by NewsGuard’s journalists, leads to a 22-point penalty. Failing to have a system for receiving complaints about errors and clearly correcting them results in a 12.5-point demerit, and so forth. NewsGuard persuasively claims that its ratings are nonpartisan, as illustrated by the conservative Daily Caller and the liberal Guardian both scoring a perfect 100.
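To make the arithmetic concrete, here is a minimal sketch of that deduction scheme. Only the two weights Brill cites (22 and 12.5 points) come from the book; the machine-readable criterion names and the example site are illustrative assumptions.

```python
# Sketch of NewsGuard-style scoring: each failed criterion deducts a
# fixed weight from a starting score of 100. Only the two weights
# below are cited by Brill; the criterion names are assumptions.

CRITERIA_WEIGHTS = {
    "repeatedly_publishes_false_news": 22.0,   # weight cited by Brill
    "no_clear_error_correction_policy": 12.5,  # weight cited by Brill
    # ...the seven remaining criteria would carry their own weights...
}

def score_site(failed_criteria: set[str]) -> float:
    """Start at 100 and deduct the weight of each failed criterion."""
    score = 100.0
    for criterion in failed_criteria:
        score -= CRITERIA_WEIGHTS.get(criterion, 0.0)
    return max(score, 0.0)

print(score_site({"repeatedly_publishes_false_news",
                  "no_clear_error_correction_policy"}))  # prints 65.5
```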
By the time Brill and Crovitz began paying sales calls to Silicon Valley headquarters, the major social media companies had been unmasked repeatedly for failing to identify fake accounts created by Kremlin operatives spreading divisive falsehoods about American politics, medical “news” services that amplified health hoaxes (abortions cause breast cancer!), and conspiracy theories that school shootings were “false flag” operations staged to promote gun control.
Pitching Silicon Valley
Brill offers a memorable first-person account of visiting airy, open-plan Silicon Valley headquarters buildings, “where we were amused to watch some of the T-shirt and torn jeans crew casually roller blade from meeting to meeting.” He and Crovitz, who present as throwback coat-and-tie executives, had a coherent pitch: “For a fraction of what [the social media companies] were paying their PR firms, lawyers, and lobbyists to deal with this stain on their brands, not to mention head off regulation, why wouldn’t they license our data and show our scores and nutrition labels to their users?”
Initially, they elicited some promising reactions, according to Brill. “Oh, thank God. You can take us out of our misery,” Chris Cox, the chief product officer at Facebook (now Meta), told him. “We’ve been trying to solve this problem with hundreds of engineers, and we know we can’t.”
But deals were never struck. “We were naive, clueless,” Brill writes. “We didn’t know that they didn’t want to solve the problem we told them we could solve. That problem was their business plan. Misinformation and disinformation were not bugs. They were features.”
Sensational and false content draws heightened user engagement in the form of screen time, “likes,” anger emojis, comments, and reshares. The advertisers that make social media such a lucrative business rely on engagement as a key metric for determining whether they are securing user attention. Platform algorithms are tuned to promote engagement, and NewsGuard was proposing, in effect, to counteract those algorithms with the human discernment of a crew of journalists. It was a nonstarter.
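A toy example, with invented numbers and no claim to reflect any platform’s actual formula, illustrates the tension: ranked purely by predicted engagement, the sensational falsehood wins; discounted by a source-reliability score of the kind NewsGuard sells, it sinks.

```python
# Invented posts with made-up engagement predictions and ratings.
posts = [
    {"headline": "City council passes budget",
     "predicted_engagement": 0.05, "source_rating": 95},
    {"headline": "SHOCKING vaccine cover-up!!",
     "predicted_engagement": 0.35, "source_rating": 5},
]

# Pure engagement ranking: the sensational falsehood comes out on top.
by_engagement = sorted(posts, key=lambda p: p["predicted_engagement"],
                       reverse=True)

# A hypothetical reliability-weighted re-ranking: discount engagement
# by the source's 0-100 rating before sorting.
by_blend = sorted(posts,
                  key=lambda p: p["predicted_engagement"] * p["source_rating"] / 100,
                  reverse=True)

print(by_engagement[0]["headline"])  # SHOCKING vaccine cover-up!!
print(by_blend[0]["headline"])       # City council passes budget
```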
Microsoft came to NewsGuard’s rescue. The tech giant, which doesn’t own a platform that competes directly with Facebook, X, YouTube, or TikTok, stepped in to license NewsGuard’s data for users of its Edge browser and “to help inform decisions made about aggregating content on its Microsoft Newsroom platform.” NewsGuard survived to fight another day.
Occasional Oversimplification
As eye-opening as Brill’s account often is, he does occasionally oversimplify. For instance, when discussing one of the main consequences of the amplification of misinformation—increased political polarization—he implies that social media is the main, and possibly sole, driver of the divisive grievance mentality that has come to dominate U.S. politics, particularly on the right.
Referring to Donald Trump’s followers, he writes, “They were pissed off at all of the referees [judges, elections officials, scientific experts] because the recommendation engines of the platforms they had come to depend on for their ‘news’ steered them to ‘news’—whether it be Russian disinformation, conspiracy theories, a crazy uncle, or Trump (with his 89 million followers)—that reinforced the ‘our lives are threatened’ fear and gave them villains to blame and overthrow.”
But as Brill surely knows, other information sources—not least talk radio and Fox News—also fuel the fires of polarization and outrage. Social media is part of a larger problematic media ecosystem.
The Problem With Programmatic Advertising
When the giants of social media spurned NewsGuard, Brill and Crovitz turned their attention to the online advertising market, a topic the author dissects with gusto.
To make its task manageable, the firm rates the roughly 2,800 news and information sites in the U.S. that account for 95 percent of online engagement, meaning that these sites collectively draw 95 percent of the shares and comments in the feeds of the major social media platforms. These 2,800 sites run the gamut from reputable global news organizations to Russian propaganda outlets. NewsGuard generally ignores the thousands of other sites that account for the remaining 5 percent of engagement, on the theory that it’s better to cover most of the field than to expend vast resources chasing down every last internet outpost.
Further analysis reveals that about 35 percent of those 2,800 rated sites are “highly unreliable,” Brill writes. And yet most of these unreliable sites are financed with advertising from reputable businesses. He offers as an example a website called the Santa Monica Observer, which is notorious for running fabrications such as an October 2022 dispatch about the attack on Paul Pelosi, husband of then-House Speaker Nancy Pelosi, in the couple’s San Francisco home. The deranged intruder apparently planned to take Nancy Pelosi hostage. The Observer falsely claimed that the attack on Mr. Pelosi stemmed from an encounter with a gay prostitute. The hoax went viral after the Observer posted it on social media, and X’s owner, Elon Musk, reshared it with his 111 million followers.
What caught Brill’s eye was that on the page alongside the phony Pelosi article were ads from blue-chip brands such as Hertz, Capital One, Lowe’s, Petco, and Disney. These companies didn’t purposely choose to advertise in the Observer, let alone next to a lurid fabrication. The juxtaposition resulted from programmatic advertising.
Google and a smaller California-based company called The Trade Desk are the big players in programmatic advertising, which, according to Brill, accounts for 60 percent of all ads bought online. Here’s a basic idea of how it works: A company establishes a budget for a campaign targeting consumers according to dozens or possibly hundreds of demographic criteria, buying habits, and other “signals.” A programmatic ad middleman bids tiny fractions of the budget—it could be just a few cents—for each view of the ad to be seen by a member of the target audience over a set period of time. Google and The Trade Desk run algorithmically driven marketplaces that identify potential ad spaces available on tens of thousands of websites that fit the bidder’s specifications. In a split second, a virtual “auction” occurs, and the ads are placed, with the advertiser having no idea where its message will appear. It’s like buying shares on a stock exchange that doesn’t tell you which companies you’re investing in.
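A drastically simplified sketch, with invented sites, signals, and prices, captures the mechanism Brill describes: the advertiser specifies an audience and a price, and the exchange, not the advertiser, decides where the ad lands.

```python
# Illustrative ad inventory: both a reputable outlet and a clickbait
# site offer slots reaching the same audience segments. All names,
# signals, and the bid price are invented for this sketch.
ad_slots = [
    {"site": "global-newspaper.example",
     "audience": {"age_25_34", "pet_owner"}},
    {"site": "clickbait-observer.example",
     "audience": {"age_25_34", "pet_owner"}},
]

campaign = {"target": {"pet_owner"}, "bid_per_view_usd": 0.03}

def place_ads(slots, campaign):
    """Bid on every slot whose audience contains the target signals.
    The advertiser never chooses, or even sees, the individual sites."""
    return [slot["site"] for slot in slots
            if campaign["target"] <= slot["audience"]]

print(place_ads(ad_slots, campaign))
# ['global-newspaper.example', 'clickbait-observer.example']
```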
Global spending on programmatic advertising is estimated to have exceeded $300 billion in 2023, Brill reports. This is largely how clickbait websites like the Santa Monica Observer, among other purveyors of mis- and disinformation, fund themselves. For NewsGuard, this troubling situation presents a sales opportunity. Advertisers can license the firm’s data identifying legitimate news and information sites and feed it into the ad-buying process to promote “brand safety” based on human intelligence, rather than relying solely on programmatic advertising’s artificial intelligence. Brill suggests that NewsGuard now enjoys a revenue stream from this product line, but what he calls the “Frankenstein’s monster” that is programmatic advertising continues to incentivize the spread of dubious information.
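Again as an illustrative sketch rather than NewsGuard’s actual product, a brand-safety filter of the kind Brill describes would simply drop low-rated inventory before bidding; the 60-point cutoff and the ratings below are assumptions.

```python
# Hypothetical reliability ratings keyed by site; the cutoff is an
# assumed threshold, not a documented NewsGuard parameter.
ad_slots = ["global-newspaper.example", "clickbait-observer.example"]
ratings = {"global-newspaper.example": 100,
           "clickbait-observer.example": 25}

def brand_safe(slots, ratings, cutoff=60):
    """Exclude inventory on sites rated below the cutoff before bidding."""
    return [site for site in slots if ratings.get(site, 0) >= cutoff]

print(brand_safe(ad_slots, ratings))  # ['global-newspaper.example']
```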
Yet another engine of falsehood—generative AI—is just revving up. Generative AI chatbots can produce uncannily human-sounding text responses, as well as images and audio, based on simple natural language prompts. But the chatbots have a tendency to “hallucinate,” or make things up, raising questions about their value in the marketplace. Unlike social media companies, which sell advertising by boosting user engagement, even if that means spreading low-quality content, generative AI companies are licensing their tools to businesses, governments, and other enterprises that demand accuracy; hallucination is an unwelcome bug, not a feature.
NewsGuard, Brill writes, is now marketing its ratings as a source of quality control for generative AI products. His desire to make a buck by improving generative AI underscores the broader need to fine-tune this evolving technology, whether using NewsGuard’s data or that of other firms devoted to boosting accuracy.
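Brill does not describe the technical integration, but one plausible shape for it, sketched below with invented sources and a hypothetical cutoff, is to filter the web sources a generative AI system is allowed to draw on by their reliability rating before the model grounds its answer in them.

```python
# A speculative sketch, not NewsGuard's documented API: screen the
# sources retrieved for a chatbot's answer by reliability rating.
retrieved_sources = [
    {"url": "https://reliable-wire.example/story", "rating": 95},
    {"url": "https://hoax-mill.example/story",     "rating": 15},
]

def trusted_context(sources, cutoff=60):
    """Pass only sources rated at or above the cutoff to the model."""
    return [s for s in sources if s["rating"] >= cutoff]

print([s["url"] for s in trusted_context(retrieved_sources)])
# ['https://reliable-wire.example/story']
```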
Republican Intimidation
NewsGuard’s efforts to curb misinformation have placed it in the crosshairs of Republican politicians like Rep. Jim Jordan of Ohio who argue that such efforts are a cover for an elaborate conspiracy involving liberal operatives, academics, and Silicon Valley executives eager to censor the speech of conservatives.
In March 2023, Jordan, who chairs the House Judiciary Committee, emailed Brill’s company, suggesting that it “may have played a role in this censorship regime by advising on so-called ‘misinformation’ and other types of content.” Jordan demanded copies of every NewsGuard communication with a government agency or technology company “referring or relating to the moderation, deletion, suppression, restriction, demonetization, or reduced circulation of content”—in other words, practically every document or message the company had created since its inception.
Rather than complying, NewsGuard hired a well-connected Republican attorney to make the case to the Judiciary Committee that the company employed a truly nonpartisan ratings process and that its only work for the federal government consisted of a $750,000 contract to help the Defense Department’s Cyber Command monitor disinformation campaigns by foreign adversaries. The pushback worked. NewsGuard never received a subpoena for the massive trove of communications, and the Judiciary Committee moved on to other targets.
Meaningful evidence of a “censorship regime” has not surfaced. A parallel lawsuit filed against the Biden administration by the attorneys general of Louisiana and Missouri, along with a group of conservative activists, ended with a thud in June, when the Supreme Court ruled that the plaintiffs lacked standing to sue in the first place.
But the Republican probe has notched victories in that it has stifled academic research on misinformation at several major universities, influenced some newly minted PhDs to pivot away from the field, and contributed to the collapse of the Stanford Internet Observatory, a leading misinformation research group. Brill does not delve deeply into these developments, but NewsGuard’s brush with right-wing intimidation provides an ominous warning of more severe tactics that could come from Washington should Republicans gain control of the White House and Congress in November’s elections.
Some Ways Forward?
Following through on the “what we can do” component of his subtitle, Brill offers a long list of recommendations to conclude “The Death of Truth,” some more promising than others. Here are three of them:
Amend Section 230
Brill recommends curtailing Section 230 of the Communications Decency Act, which provides social media companies with broad protection against civil liability for content posted on their platforms by third parties. Exposing the companies to more litigation risk would incentivize them to police content more vigorously and presumably reduce the amount of misinformation they host. Specifically, Brill urges Congress to condition Section 230 protection on social media companies dropping all use of algorithms to rank, recommend, or amplify content.
Unfortunately, the very essence of social media is using computational processes to cull and arrange the bottomless ocean of content available on the internet. The platforms cannot function—cannot perform their more constructive role of connecting people to other people, ideas, business opportunities, and the like—without algorithmic sorting. Brill’s recommendation is equivalent to urging the repeal of Section 230, a course that would probably cause social media companies to cut back drastically on controversial content of many sorts, reducing free expression for billions of users.
As an alternative, Brill recommends conditioning Section 230 protection on platforms’ integrating tools—like NewsGuard!—“so that users could get access to more information about who is feeding them the news online.” Self-interested as this idea sounds, I think it’s a good one, and it would almost certainly spur other start-ups to jump into healthy competition with Brill’s company to provide useful analytical software. The downside would be that social media companies likely would try to pass along the cost of such “middleware” to users who now enjoy platforms without paying a subscription.
Enforce Consumer Protection Law
Brill wants the Federal Trade Commission (FTC) to step up enforcement of existing consumer protection law that forbids companies from employing “unfair or deceptive acts or practices in or affecting commerce.” This authority, which already applies to all industries, empowers the FTC to hold social media companies responsible for enforcing promises they make to users in their terms of service—promises that include restricting certain types of misinformation, among many other categories of harmful content. This proposal makes a great deal of sense, but the already-strapped FTC is unlikely to embrace it without more explicit authorization from Congress, including hundreds of millions of dollars in additional funding for expert personnel. I’ve outlined such a proposal in some detail in my work for the New York University Stern Center for Business and Human Rights.
End Anonymity
Facebook requires accounts to be attached to real names, but enforcement is spotty. X and other platforms do not even try to verify users’ identities. Anonymity is an invitation to mischief and worse. Brill underscores the state of play by citing data from an Israeli social media-monitoring firm called Cyabra, which found that on a single day shortly after the Israel-Hamas war began, roughly 25 percent of accounts on Facebook, Instagram, TikTok, and X posting about the conflict appeared to be fake. Brill reasonably recommends legislation requiring that platforms oblige users to submit meaningful proof of their identity.
***
Overall, “The Death of Truth,” while quirky in its dual purpose as exposé and marketing literature, and not without the occasional overstatement, is well worth the time of anyone concerned about the deleterious effects of technology on politics and civic discourse.