
The FBI Leads the Way on Jawboning Governance

Matt Perault
Tuesday, September 3, 2024, 1:00 PM
Last month, the bureau became the first federal agency to publicly disclose clear standards for its communications with tech platforms.
FBI logo. (Dave Newman, https://www.flickr.com/photos/groovysoup/4505842946; ATTRIBUTION 2.0 GENERIC, https://creativecommons.org/licenses/by/2.0/)

Published by The Lawfare Institute in Cooperation With Brookings

The Federal Bureau of Investigation leads the world in jawboning governance. While law enforcement agencies may not be known for transparency or self-governance, the FBI now stands alone among federal agencies in publicly disclosing self-imposed rules for its communications with tech platforms.

In guidance posted on the FBI’s website on Aug. 1, the bureau outlined its approach to communicating with social media platforms about content they host. In doing so, it became the first federal agency to publish clear, public standards for jawboning.

The FBI’s leadership on this issue establishes a principled foundation for resuming information sharing with tech companies and provides some level of public transparency about its practices. If they are well-designed and well-governed, information-sharing programs have the potential to reduce harmful content and conduct; the resumption of the FBI’s program has already disrupted Russian propaganda operations. Some gaps remain, and the bureau could fill them over time. Yet the focus should not be on those gaps but on the FBI’s leadership in taking a step forward on this issue. Other agencies should follow the FBI’s lead, giving guidance to their employees, tech platforms, and the public about the norms that govern government-platform communication.

The State of Jawboning Governance

Jawboning is when the government exerts pressure on a private entity to try to convince it to change its content practices. This pressure violates the First Amendment when it is “coercive” but is permissible when it is merely “persuasive.” The challenge with that rule is that no one seems to know the difference between the two. After years of debate over whether Biden administration communications with tech platforms about coronavirus misinformation constituted jawboning, many hoped the Supreme Court would offer more detailed guidance on the topic in the Murthy v. Missouri case. But its eagerly anticipated decision in June offered none; the Court instead disposed of the case on standing grounds.

The outcome was unsatisfying and will likely remain so. Regardless of whether the decision was correct (and given the weak factual record, it seems it was), a new administration will begin in January 2025 with no more clarity about how its employees should interact with tech platforms on content moderation than any of its predecessors. In short, jawboning governance remains thin.

If Donald Trump wins the November election, Democrats will lament the pressure he puts on tech companies to tilt content in his favor, as they have hinted in recent weeks amid increased reporting that Trump receives favorable treatment on X from its owner, Elon Musk. If Vice President Kamala Harris wins, members of Congress, like Rep. Jim Jordan (R-Ohio), will continue to claim that pressure from the White House results in the removal of speech they favor. Regardless of who occupies the Oval Office, no law, rule, or policy will constrain the behavior of government employees. Except at the FBI.

The FBI’s Guidance on Jawboning

On Aug. 1, according to reporting by the New York Times, the FBI published a new page on its website entitled “Providing Foreign Malign Influence Threat Information to Social Media Platforms.” FBI officials now have some guidance on how they should interact with tech companies when they want to share certain types of foreign threat information with them.

After developing these new internal guidelines in February, the FBI resumed its information-sharing program, which it had previously shut down in response to a lower court decision in the Murthy case. According to the Times, the resumed program has “already thwarted two campaigns spreading information from Russia’s propaganda apparatus.”

The document begins by explaining the connections among the Department of Justice’s mission, the FBI’s role in supporting that mission, and communication with social media companies. It also cites the Justice Department manual in describing the rationale for disclosing foreign malign influence threat information to tech companies and links this rationale to findings in the congressional record. The document emphasizes that the FBI’s procedures “are designed to continue to ensure that Americans’ First Amendment rights are being protected.”

The guidance then lays out the “standard operating procedures” that govern FBI communication with tech platforms:

First, the FBI must clearly convey that any response by the platform is voluntary and the “Platform has no obligation” to take action. The word “voluntary” appears in the document in four places, including in a pull-out quote in a box near the top of the page, which reads: “Any actions the companies may take in response to receiving FMI threat information from the FBI in this context are strictly voluntary and are based on their independent judgment, initiative, and/or decision making processes.”

Second, the document states that the FBI must be “clear” that it is not asking for the platform to change its content policies.

Third, the FBI must be “clear” that its action is not “solely based on First Amendment-protected activity.”

Finally, the FBI must be “clear” that the platform will not be punished based on what it decides to do with the shared information. In other words, the communication cannot be a threat.

Ironically, it is not clear what the FBI means by “clear,” but the objective seems to be to leave no doubt in the mind of the company employee about the intent of the FBI’s communication.

The document provides guidance to FBI employees, but its intended audience seems much broader (a similar description of the procedures also appeared in an internal department letter included as an appendix in a July inspector general report). It offers useful guidance for tech platforms, which can reference it to confirm that they are not obligated to take action in response to information the FBI shares. And it provides helpful information to the public, not only about the nature of communication between the FBI and tech platforms but also about the rationale underlying it.

The document also clarifies that the Foreign Influence Task Force (FITF), a unit of the FBI, leads this work and details the FITF’s responsibilities, including “to enable an effective dialogue with Social Media Platforms focused on understanding the capabilities of these providers” and “providing information … in furtherance of self-monitoring and mitigation efforts.” The document emphasizes that the platforms may take action “if they choose to do so within their discretion.”

The FBI’s guidance echoes several elements of jawboning reform proposals, including the model legislation published by the Foundation for Individual Rights and Expression (FIRE), a Cato Institute paper that outlines several legislative options to codify a “transparency-based approach,” and the model executive order I proposed in a Lawfare article in June. An executive order on jawboning would mirror the now-standard practice of issuing a day-one executive order on ethics; such orders are neither legally required nor focused on drawing a line between constitutional and unconstitutional behavior. Instead, the idea is to establish a set of norms for how government employees should behave.

Each of these reform proposals aims to provide the kind of guidance on jawboning dos and don’ts that the Supreme Court omitted from its opinion. Regardless of what is constitutional or not, how should government employees behave?

The FBI’s guidance is consistent with the primary feature of those proposals: to provide more transparency. One of the principal problems with jawboning is that it subverts an open and democratic debate about content policy—how should the government govern online expression?—and shifts it behind closed doors, where government officials can pressure platforms into changing their decisions. A government’s citizens and a platform’s users don’t know the contents of those discussions, and without much information, they are unable to advocate for their interests in response. The FBI’s document offers clarity about what the FBI is doing and why. The model executive order, FIRE legislation, and Cato proposal all include options for governments and platforms to be more transparent about the nature of these communications.

The FBI document also is consistent with the model executive order in providing guidance on the content and tone that bureau employees should use in their communications, and in being explicit about the weight that platforms should attach to them. The model executive order suggests minimizing language that is threatening or that requests removal of specific pieces of content, just as the FBI guidance requires clarity that the agency is not requesting content removal or a change in policies and that platforms are under no obligation to change their decisions.

FBI employees also have a duty to clarify that their communication is not a threat, just as the draft executive order suggests avoiding “an express or implied threat, including the threat of legislative reprisal, criminal or civil enforcement action, holding public hearings, or initiating critical public communications.” This requirement is one of the most important elements of the FBI document. Rightly or wrongly, tech employees often believe that government requests are accompanied by implicit threats: If you don’t remove this content, we’ll initiate antitrust enforcement actions or retributive tax policy. In a press conference, Nancy Pelosi once referenced both of those avenues as options for reprisals.

Like the model executive order, the FBI guidance specifies which government employees can communicate with tech platforms. The FITF is the lead entity within the FBI for engaging with tech platforms, though the guidance does not specify whether “lead” means that it is the exclusive entity that engages with platforms. If other FBI employees are involved in the process, the document does not describe how.

Options for Strengthening the FBI’s Guidance

The FBI’s document is an impressive beginning for jawboning governance, particularly in light of the absence of comparable guidance documents from other government agencies. But it still omits several key elements from the jawboning reform proposals.

All three reform proposals would require more transparency in the form of a reporting requirement for the government to disclose either information about individual requests or aggregate quantitative data about the number of requests. The idea is to provide the same kind of transparency for jawboning requests that companies now provide for user reports, government censorship requests, and government requests for user data. Just as tech platforms aggregate data and then publish public-facing reports multiple times each year, the FBI should report publicly on its threat-sharing program at regular intervals. To facilitate transparency, the FBI could require its employees to share threats in writing.

Transparency should also include some mechanism for reporting violations of the guidance. Companies should have a way to file a grievance about, for example, an FBI communication that requests the removal of content or that threatens retaliatory action if a platform fails to take certain action. Other government employees should be able to report when their colleagues act contrary to the guidance as well.

A third gap in the FBI’s guidance is that it neither restricts the number of people in government who can speak to tech platforms nor restricts communication to specific people at the platforms. The guidance specifies that the FITF will take the “lead” in engagements with tech platforms, but it does not require all communication to come from the FITF, and it does not limit which employees within the FITF can engage with platforms.

The model executive order proposed limiting communication to three designated employees at each executive agency, and requiring those employees to communicate via a company request portal, if a company establishes one. Limiting the channels of communication would help to ensure that engagement is consistent with the terms of the guidance and that it is subject to proper oversight.

One additional omission in the FBI’s guidance merits mention. Much of the skepticism about imposing restraints on jawboning is rooted in a concern that it will chill valuable communication. Governments should be able to advise companies on the state of existing law, for instance. And they should be able to relay to platforms how their products and business models are affecting the public and society.

Communications focused on relaying that kind of information not only should be exempt from any restriction but should be encouraged. The FBI document nods in this direction several times—in one instance, the FBI says the purpose of its sharing program is to “enable an effective dialogue with Social Media Platforms focused on understanding the capabilities of these providers.” It could go beyond this general statement of purpose to emphasize that the FBI should routinely provide advisory and educational information to tech platforms.

The FBI’s Guidance as a Model for Other Agencies

The FBI’s guidance may be imperfect, but it puts the agency in the lead position within the federal government on jawboning governance. Even though the Supreme Court’s decision in Murthy didn’t require the FBI to take action, the bureau developed a thoughtful public document that announces a protocol for government-platform communication. The guidance emphasizes its fealty to First Amendment principles, reassures companies that they need not feel threatened in response to FBI communications, and offers some measure of transparency about its process and its rationale.

Other agencies should consider following the FBI’s model in the months ahead. They should move swiftly: It will be useful to establish governing procedures before the beginning of the new administration so as to provide guidance to new government employees. If other agencies fail to make progress, transition teams can use the FBI’s effort as a foundation for mapping out an executive order they could issue on Jan. 20, 2025, to outline government-platform communication in the new administration. Whatever the path other agencies and the White House pursue, the FBI has provided a baseline that could lead to a more principled approach to jawboning in the future.


Matt Perault is a contributing editor at Lawfare, the director of the Center on Technology Policy at the University of North Carolina at Chapel Hill, and a consultant on technology policy issues.
