Lawfare Daily: The Supreme Court Rules in Murthy v. Missouri
Published by The Lawfare Institute in Cooperation With the Brookings Institution
On June 26, the Supreme Court handed down its decision in Murthy v. Missouri—the “jawboning” case, concerning a First Amendment challenge to the government practice of pressuring social media companies to moderate content on their platforms. But instead of providing a clear answer one way or the other, the Court tossed out the case on standing. What now? Lawfare Editor-in-Chief Benjamin Wittes discussed the case with Kate Klonick of St. John's University School of Law and Matt Perault, Director of the Center on Technology Policy at the University of North Carolina.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Introduction]
Kate Klonick: I don't think Alito does a very good job steelmanning their argument here. There were no facts. You can't take an idea that, a legal idea, that needs clarity, and then shove a bunch of bad facts at it and make good law. Like, you're not gonna clean that up. That's not the place to clean this up.
Benjamin Wittes: It's the Lawfare Podcast. I'm Benjamin Wittes, editor in chief of Lawfare with Matt Perault and Kate Klonick.
Matt Perault: Hopefully in January 2025, alongside or as part of an ethics executive order, you would have an administration issue guidance on how it should communicate about speech related decisions with private entities.
Benjamin Wittes: Today we're talking about the Supreme Court's decision in the Murthy case on jawboning.
[Main Podcast]
All right, Matt, get us started. What is jawboning and why should anybody care?
Matt Perault: So jawboning is when the government puts pressure on a private entity to make a certain decision on speech. I'm being a little bit vague with it because I think what we were hoping for this Supreme Court term was to get more guidance on the specifics of what it means, at least as a constitutional matter. And as I think we'll discuss in a minute, I don't think we did.
The reason I think we should care is that, at least in my experience when I worked at Facebook, and that might not be the experience of every tech company or every person even just at Facebook, my experience is that we were jawboned in some form, it might not have been unconstitutional, but we received pressure from the government to change our speech practices basically on a daily basis. And that happened not just from a Democratic administration or Democratic members of Congress, but from Democratic administrations, Republican administrations, Democrats in Congress, Republicans in Congress, and it also happened abroad. And I think that's an important dynamic to understand as well, that governments like Brazil and India put a lot of pressure on tech companies to change their speech decisions.
So I think, I think we should care because the influence is real and I think it matters. And I think it is problematic because it takes a bunch of conversations and debates about what speech should look like and puts them in the shadows as opposed to in the spotlight. And so companies will make their decisions in sort of private discussions, or as a result of private pressure that they're receiving from government entities, rather than debates in Congress, for instance, on what the right parameters of something like Section 230 should be.
Benjamin Wittes: Okay, Kate, the Supreme Court on Wednesday issues its big jawboning ruling, and once again the big internet speech case fizzles out. This is becoming an annual June ritual. Tell us what this case was, and then we will get to the great fizzle out. But what was at issue in Murthy?
Kate Klonick: Yeah, so Murthy was a case that was brought by a number of individual plaintiffs and the state AGs for Louisiana and Missouri. And it was brought against the Surgeon General, Vivek Murthy, and the Biden administration, and it was brought in the Western District of Louisiana. And in particular, just because of the structure of that court, the thing that they could basically guarantee in bringing the case in the Western District of Louisiana was that they would get one particular judge, Terry Doughty, who is a Trump appointee. And so to kind of bring us back, what they were bringing this case about was basically a series of questions about how much the Biden administration, through the CDC and through the Surgeon General and through, you know, the White House itself, had pushed Twitter, and other entities like Facebook, to basically take down speech that was related to misinformation, particularly around COVID.
And one of the things that came out in kind of some of the, the early stage of discovery was a number of documents and kind of correspondence that didn't look great. That showed some very kind of angry and forceful and foul language of like people in Washington, screaming at people at the platforms that control what speech stays up and what speech comes down, asking them, you know, to remove things. What isn't in the case, and this is really clear, is that that's not enough for jawboning. You can't, it's not just about asking. Right. The government can ask, the government always asks. This is like, this is the entire dynamic between the government and private entities is like, trying to kind of make soft policy or to try to like get things done, you know, in certain respects. We can get into what that means for speech in a second, but essentially what there wasn't in this case, and this is something that the district court, you know, didn't agree with at first or still probably doesn't agree with, that the Supreme Court now agrees with, was there's just no traceability, which is essentially a causative factor.
So there's no causation between, you know, these calls from the government to take down certain types of content on online platforms and it actually coming down. And so there just was no injury. There was no injury to any of the plaintiffs. And so essentially what you had today with the fizzle out, or yesterday, excuse me, what you had with the fizzle out was that you had a decision from Amy Coney Barrett and the six, speaking for a six justice majority, saying, like, listen, guys, there's no standing here. Like, not only did you, like, is your toe not broken, like, you, we don't even, like, you barely stubbed your toe. And it had nothing to do with the government. Like, like, there's, like, it was not clear that you had any kind of significant kind of censorship by these platforms, and it wasn't, there is no, like, causation between the government making these calls and putting pressure on the platforms and the platforms listening. And so that was what happened with the great fizzle out. And yes, this is, this is becoming a thing and we can talk about that on its own in a second.
Benjamin Wittes: All right. So one of the interesting things about this opinion is that, just as a factual matter, when you read the opinion by Justice Barrett and the dissent for three justices by Sam Alito, you are reading about two entirely different worlds. There's a divide there that, I think, roughly parallels the kind of information bubbles that a lot of Americans live in philosophically. So, Matt, how would you describe the debate between the majority, and it presents as a standing debate, but there's something larger going on there than just a routine standing argument. Describe for us the debate between the majority opinion, which represents six justices, and the dissenting opinion by Justice Alito for himself, Justice Thomas, and Justice Gorsuch.
Matt Perault: Yes, well, so I think Kate said it well. I mean, I think the question that they're going back and forth on is, was this conduct of a sort that actually mattered. Did it actually influence the decisions that tech companies made in a real way? And I sort of don't want to take, I'm finding myself not wanting to take the bait on the question, because my hope, which is totally naive, but my hope for this is that it's not political. That we wouldn't cheer just because Democrats are making the ask and getting away with it and then boo when Republicans make the ask and get away with it. That there should be something principled here about the way we want tech companies and government employees to communicate and behave.
Benjamin Wittes: No, no, but, but, but their, their dispute between them, which first of all is between two conservative appointees.
Matt Perault: Yeah.
Benjamin Wittes: Both of whom are writing for two other conservative appointees. In Justice Barrett's case, she's also writing for the three Democratic appointees. But there is a dispute of principle between Justice Alito and Justice Barrett here about, you know, when a tech company takes down material and there is government pressure involved or government communications involved, how close a nexus do you need between the government's conduct and the claim that my communication was taken down as a result of government censorship? How close a nexus do you need before that becomes cognizable for standing purposes? I, I think they have a real dispute here, no?
Matt Perault: Yeah, they do, and I don't know exactly what to make of it. Like, I think as a legal matter, though I'm certainly no standing expert, I think the majority made a more persuasive case that the factual record here is just weak. And part of the reason that I have that in mind is just because of how much of a debacle the oral argument was, I think, in trying to make the case that there was this factual nexus. I mean, there were a number of justices asking questions at oral argument along the lines of, you know, clearly you are not making the connection that you need to make to establish standing. I think Alito's opinion takes a fairly crumbly factual record and puts the best argument forward that it's robust, or more robust than it seemed, I think, at oral argument. I think having listened to the oral argument, I'm more persuaded that it was a weak factual record.
But I don't know, like, sort of taking it out of just the standing question, I don't know what to make of this issue actually, because the way that organizations make decisions is really, really complicated. I was at Facebook for a long period of time. I was involved in many of these types of discussions. I can't tell you exactly what influence I had just being in the room and raising various arguments from time to time. Was that persuasive or was it coercive, not that I could be coercive, but was it even persuasive within a context in which there's a public policy team, there's a communications team, there are product teams, there are sales teams, and all of the various different equities internally are raising questions about the pros and cons of various different decisions.
And I think some of the things that Alito points to in the dissent, you know, emails from senior-level executives that are raising various different concerns, it's hard even just reading his dissent to understand exactly how much weight each of those emails carried and how responsive it was, or how coerced it was, by the government communication. And I think, for me, I should say for me, I think part of the motivation is that there is good legal work going on here, I think both in the majority and in the dissent, but the nature of the case, in my view, makes it ill-suited to really reconciling the issues that are at play here.
Benjamin Wittes: All right. So Kate, how would you describe the dispute between them? And what do you think the role of politics is? It's a, it, it is, Matt is certainly correct, it's not just conservatives, it's not a conservative versus liberal dispute here, but it is true that all of the dissenters are conservatives. What's going on here?
Kate Klonick: Yeah, so I'll just build on kind of what Matt said. So, first of all, I think that it's really important to point out that jawboning is, like, I think a principles issue, and it should be a principles issue. Clearly, when jawboning happens, I am against jawboning. I think it's an actual problem. I think that we need better definitions around how it's laid out, and what the traceability and the causation is, and what creates standing for it. That's still, as Matt points out, that's still something that we don't have any clarity on.
The best thing that we have is Bantam Books, which is a case in which essentially you had a bunch of police officers go into a bookstore and say, you know, it's a shame that these books are here, you wouldn't want to get any citations for your store, tickets at your store, or, like, have anything else happen, like, you should maybe put these books away. So that's the outward police power of governments focusing on changing and restricting how a public-facing business publishes its speech, what it has a right to put in its windows and what it has a right to carry on its shelves and what it has a right to speak about. So I just want to put that out there, that I think anyone worth their salt recognizes that, even if this entire case was a bad idea, it raised an issue that is important on principle.
The problem truly was that, like, not only was this, from soup to nuts, in my opinion, political, because first of all, it started with Trump. And Trump would be more guilty of it, even by Alito's standards, and this was the irony in it, I thought. I don't think it's a well-reasoned dissent, to be totally honest. I think it's the crankiest of crank dissents, that essentially Alito says at some point that, like, well, because you can have an executive order or you can have some type of executive action on Section 230, that's a real threat that could really coerce a company or platform into deciding to change their policies and speech or take things down. Well, guess which president actually issued an executive order on Section 230. It was Trump, like, in 2020. So, like, let's just kind of, you know, refresh that that was kind of out there at the time. And so I think it was kind of a real irony that we had this happen in this case brought against the Biden administration by these kind of characters.
The second part was, like, as I said before, they forum shopped. They took it straight to a Trump judge, and that decision was just completely spurious and misstated the facts. There were a number of points in the oral arguments, and you saw it again in Barrett's opinion, in which the district court was, like, literally clearly erroneous. Like, it misstated the record, and this is something Sotomayor called the Louisiana Solicitor General out on in oral arguments. There were a number of parts in which this was just very, very obvious. I don't think Alito does a very good job steelmanning their argument here. There were no facts. You can't take an idea, a legal idea that needs clarity, and then shove a bunch of bad facts at it and make good law. Like, you're not going to clean that up. That's not the place to clean this up.
So, I mean, I guess on some abstract principle, I mean, I'm obviously for having clarity around what we mean by jawboning. I think the Bantam Books standard is vague, and we don't really know what it means, it means substantially coerced and all this other crap that, like, no one knows what it means. But there's tons of areas of the law that need clarity. I don't want those to happen with this court and on these facts. So I was thrilled with this decision, to be totally frank. And I think that, like, you know, it was political.
And to get to kind of your underlying point, Ben, about why these keep fizzling out. It's because back in the Biden v. Knight First Amendment Institute case, the case the Knight First Amendment Institute brought around Trump's Twitter blocking as a violation of the First Amendment, that came out of the Second Circuit and was mooted, Thomas wrote this, like, long, kind of opining concurrence saying, we sure would like to have some internet law cases to deal with this issue of, like, private-side censorship. And he basically put out a call for papers, and, like, that's what he's gotten, like, that's what's happened. And they've just picked some duds. Like, I mean, you get these records in, and it's hard to read the whole record and to make a good judgment of what the facts really are and what's really there on an application for a writ. And they've just really picked some really bad duds. And then they just keep having to basically say, like, well, I guess there was no standing here. And is it because there's no standing in general, because there's no real underlying injury to all of these imaginary right-wing plaintiffs that think that they're being censored? I don't know. But, like, you know, there certainly isn't in the cases that they've been presenting. End rant.
Benjamin Wittes: I want to pick up on a few aspects of the rant. The first is, I just want to see if we can all agree on what, you know, so Alito in the introduction to his dissent singles out a particular plaintiff whose speech was taken down, I think by Facebook, and says, come on, you know, this person clearly has standing. So my question is, what would the case look like where we all agreed that there was standing? Right? If we imagine a jawboning situation, putting aside for a second Matt's sense that the courts are maybe not the best place to adjudicate this. We'll get to that in a moment. What does a jawboning case on social media look like where we would all say, okay, yeah, Alito's right, that person has standing? Matt, get us started on this.
Matt Perault: Right, so I think it would be communication from someone with the authority to actually take some kind of punitive action against the platform.
Benjamin Wittes: So more than just a request or a demand. It's, it's a request or demand backed up.
Matt Perault: You're asking me to like sort through blurry existing law and find sort of a clear principle, and I'm not sure there is one. But I think like the strongest case would be someone who has the ability, someone makes a request, but it can't be a random person without authority to take punitive action. It's a request from someone with the ability to take like, some kind of retribution against the platform. And that might mean not a member of Congress, which I think is sort of a bizarre way that this doctrine pushes, because like, is an individual member of Congress, even if they threaten to repeal 230, does that individual member of Congress actually have the ability to do that? I think, you know, the last couple of years have suggested that's probably unlikely.
So maybe it's the head of the Antitrust Division at the Justice Department. I don't know. That person also is probably not a person who's going to be very interested in individual pieces of COVID misinformation, for instance. But that person says to a platform, take down this content or we'll initiate an antitrust action against you. And then the platform would have to remove it. So you probably need something in the record of a communication from someone within the platform who has the authority to take that action saying, based on the communication that we just received from the head of the Antitrust Division at the Justice Department, we need to remove this content now.
Benjamin Wittes: And, and does it have to be content, like a specific piece of content about, what if, what if, you know, I said as the head of the Antitrust Division or as, you know, Lina Khan, you, Matt Perault who's running your own social media company, sort of Peraultbook. You need to adopt more restrictive policies. Adopt them now or I'm gonna destroy you in every way that a lit-, you can destroy Peraultbook by litigation. So it's a very explicit threat. You act under that threat. And then sometime later my content gets removed as a result of your new policies. Is that jawboning or is that, hey, you know, Peraultbook has its relationship with the government. I have my relationship with Peraultbook under those terms of service. Tough noogies.
Matt Perault: Yeah. Well, so first of all, I want to even just be clear on the question. I believe that that's jawboning in a colloquial sense. Like that that is a type of practice that we should be concerned about.
Benjamin Wittes: Right, but I'm asking about standing.
Matt Perault: Yeah. It might not be unconstitutional jawboning. I think the question that I would have there is not on the side of changing your policies and then eventually having the effect of having your content removed. That feels to me like injury, maybe Kate will weigh in if that's not sufficient in her view. I think the question in that case would be whether the threat's sufficient. If you just say, I'm going to do a bunch of stuff and it's going to be painful for you, it's not clear to me based on,
Benjamin Wittes: I'm going to sue you. I'm going to, I'm going to file an antitrust.
Matt Perault: That feels different. But even just, I mean, even just in us trying hard to come up with this, you're thinking in your head, I'm going to threaten punishment, and that's problematic. The way I heard that is, like, I think there are people who would say that that's vague. Like, Nancy Pelosi can say, like, you know, you like your tax breaks, you like your 230 protections, well, watch out, and that might be too vague a threat. And you might actually have to say, like, we are going to initiate litigation, and maybe even be specific about the kind of litigation you're going to initiate. A vague threat of, like, we're going to take you to court might not be sufficient.
Benjamin Wittes: What do you think, Kate? When you read this opinion, what's the jawboning case where you can imagine standing actually being found now?
Kate Klonick: I mean, I think the most problematic part is not the traceability part. That's not where we need definition. Where we need definition is what the level of the threat is and what the actionability of the threat is, that's really where we're missing, kind of, real players. Because as Matt points out, you can have a member of Congress, and then you can have Nancy Pelosi. Now, are those the same threat? Like, I don't know. Like, if you get a call from, like, AOC versus you get a call from, I don't know, Joe Schmo, like, representative of, like, the Adirondacks, like, that's a different type of problem for you. Hearing from Nancy Pelosi, or someone, that you've lost their support on important, kind of, Section 230 or Kids Online Safety Act type of legislation that's pending in Congress, yeah, I think that that might rise to a threat. But, again, we have no way of knowing. And these are things that need to be fleshed out, and they are things that, like, deserve to have a conversation had about them.
And I think, I do want to point out one thing that I think is really important here. I think that there might be a tendency to take down or kind of pare back conservative speech because it tends to trip a bunch of safety nets that were put in place during a certain period of time, and, like, the Overton window has shifted since then. And so a lot of things are getting caught in the net that were not originally, like, it used to be fringe speech and now it is, like, kind of slightly less fringy, as we see a lot more of it. And so a lot of it's getting kind of caught in the net. That's one problem that you're seeing with online speech.
There might be some type of broken toe, or there might be some type of real injury that is happening. The problem is that it might not be because of the government, and it probably isn't. It's maybe just because a lot of the people that work in these tech companies do kind of have a certain type of bias, or are kind of aligned on how they want the platforms to look and what they want the speech to look like on those platforms, and that's not against the First Amendment. So that is, I mean, or I guess we'll find out as soon as NetChoice comes down. But that's a different claim than a state action claim, like that the state is telling you to do these things, and that is why these cases are so thin.
Like, there might very well be certain people, although I don't think those facts were alleged here, there might be certain people that, like, kind of became persona non grata and got censored on these platforms because of their right-wing views. But those were decisions that were made by a private platform, by private individuals, against other private individuals. And that's just okay. Like, there's nothing unconstitutional about that.
Benjamin Wittes: What if, you know, these platforms have systems in which basically anybody can complain about anybody. So presumably the government is not differently situated from Kate Klonick in that, you know, you see a piece of potentially offending content, you can flag it for the platform. There is, once the government is the agent flagging it, a First Amendment consideration, right? There's, (a) should the government be doing that, but (b) once the platform acts on it, are you in jawboning land? And I take it that the distinction that you're making is that if there's no threat of government action, then the government is merely whining. And whining is not a First Amendment jawboning problem.
Matt Perault: Sometimes it's not even whining, it's actually truly collaborative. Like, Yoel Roth wrote a piece for the Knight Institute where he was highlighting the nature of his interactions when he was at Twitter with the FBI, which are consistent with my understanding of how the FBI often interacted with Facebook. It's collaborative and productive. So, if the FBI is flagging terrorist content that is either illegal or, not illegal, but a violation of a company's terms, the company wants to have that information and is grateful to the FBI for raising that. And I think the entire nature of those communications is often really different. Even though it is coming from a government actor, and a powerful government actor, the tone often is not threatening. Often it's between people who have a long-running relationship. And so it's trusting. And I don't think they're fearing that, like, if the FBI is unsuccessful in getting a piece of potential terrorism content removed, that all of a sudden the FBI is going to be trying to repeal Section 230. So I think it's very different.
And Ben, even in your hypothetical, I think one sort of twist on it is you could imagine the inverse of what I'm describing. Like, an incredibly unproductive one, sort of, some of the communications that came out in the record were kind of, like, name-calling, expletives. You know, sort of problematic, I think, in terms of just what you would expect in terms of good governance communications. You can imagine that coming, in theory, from the chair of the FTC or someone at the Antitrust Division. But the content at issue is content that a platform actually wants to take down on its own.
Benjamin Wittes: Right.
Matt Perault: And so if it, if it was an example of like, again, I don't think this is, you know, how most FBI communications are, but you can imagine someone with the ability to sue Facebook or X raising in a very hostile, intimidating, coercive way some piece of terrorism related content. And the platform saying, great, thanks so much. We'll take it down. I don't think that that is unconstitutional jawboning either.
Benjamin Wittes: Right. So, right. And whether it's constitutional or not cannot depend on whether the government actor is polite or obnoxious.
Matt Perault: Right.
Benjamin Wittes: Right. I mean like there has to be some more objective standard than, than whether the person said, Kate, please, I want to flag this content for you, I think it violates your policies. Than if the person said, fucking take this down right now.
Kate Klonick: I do think that in the one instance that that becomes different, it would change the seriousness of the threat.
Benjamin Wittes: No, but neither of these contain threats.
Kate Klonick: Right, but-
Benjamin Wittes: It's just a matter of, I mean, if, if I say, Kate, would you please take this down?
Kate Klonick: No, exactly.
Benjamin Wittes: And if you don't, I'll be forced to, to, you know, sue Facebook for, for antitrust violations. That's just as bad as if I say that same thing rudely, right? It, it can't depend on the, on the hostility of the communication.
Matt Perault: I don't think the hostility should be sufficient, but I think it's an issue. Like my, my sense, I wasn't at Facebook at the time when Biden said Facebook's killing people. But you can imagine a different rhetoric of saying like, you know, we have concerns about content that is being circulated on lots of media platforms, including technology platforms, that content we believe might be harmful in people's lives. I think the sense within companies about, wow, we need to take action because the government's coming for us would be different if that was the kind of communication versus in a pub, in public remarks saying Facebook is killing people.
Benjamin Wittes: All right, so, all of this leads us to the question, which is important for standing purposes, but it's also independently important for policy purposes. What's the difference between coercion and persuasion in this situation? The government is perfectly entitled, apparently, to advise companies that content may be damaging, that it may violate their terms of service. By the way, so is anybody else allowed to do that. On the other hand, we all agree that if there is coercion or a th-, a real threat of coercion, that gets into some unattractive territory for constitutional purposes. So, Matt and then Kate. Is there a line we can draw here?
Matt Perault: So, so I don't know if it's a Freudian slip on your part, but I like your division between advisory and coercion, but that's not actually the legal test. The legal test isn't advising or educational or informative communication on one side and coercion on the other. It's persuasion and coercion. And I don't know what those, what the line is between those two terms. It seems like an incredibly narrow gap.
And I think the funny thing about it, and Kate may be able to explain how this came to be. The funny thing about it is that the Bantam case doesn't actually say coercion on one hand and persuasion on the other. So I'm not a First Amendment scholar like Kate is, so I hadn't read Bantam Books. So I read it relatively recently at some point in the last couple of months. And the line in the case that mentions coercion actually has persuasion right next to it and uses it sort of as a, as a synonym. So it says, "though the Commission is limited to informal sanctions, the threat of invoking legal sanctions and other means of coercion, persuasion, and intimidation, the record amply demonstrates that the Commission deliberately set about to achieve the suppression of publications deemed objectionable." Coercion and persuasion in that sentence appear as synonyms. And then the thing that the court is talking about as being problematic is deliberately setting about to achieve the suppression of publications deemed objectionable.
And I think that's what's happening in, in these cases. And it's not clear necessarily it's coercion, but it is setting about to achieve the suppression of publications. And so I think this test that we have that I think as, as a doctrinal legal matter is now relatively clear. There was a case earlier this term. We thought we were gonna get the guidance in Murthy. We actually got it in an NRA case. So the guidance is clear that the coercion persuasion test lives on. I just don't think practitioners know what that means.
Benjamin Wittes: Kate. Is this a viable line?
Kate Klonick: I mean, this is one of the biggest issues of kind of why, I mean, I think that a lot of legal scholars are like, yeah, we could use some clarity on this, and this case raises some important issues. And I think that Matt is right. I just, again, I think that sometimes I'm a person who tends to read Supreme Court decisions and not think that every single word is always, like, so precisely chosen.
So, like, when they have a string of similar words like that, I kind of think that that's just, like, maybe sometimes just a human writing things that all seem like the same thing when they're thinking about them together, and when they are picked apart, they might have slightly different meanings. And I think that that is very true. And we don't have great clarity around that. And that's something that needs to happen. But that is a remedies question, and I think that, you know, that is specifically something that, in particular, this fact pattern was not equipped to kind of provide any clarity on. And in the way that we want there to be kind of more clarity about what the difference is between persuasion and coercion, that would have better institutional capacity elsewhere.
Benjamin Wittes: All right, I want to turn to what the heck we do about this, since we seem to not be likely to resolve it in litigation over COVID disinformation. And yet to go back to Matt's original point, it would be worth having a principled adjudication of this question by one means or another. If you're a conservative who believes that conservative speech is being suppressed at government request by the platforms, that's a problem. If you're a liberal who's a little bit afraid of what platforms are going to do during a second Trump administration, it's also a problem. So, Matt, you have a proposal on this. What are you suggesting?
Matt Perault: So, I thought an executive order would be a useful vehicle for dealing with this, and I thought a useful analogy is ethics executive orders, which are not mandated by law, they're not about what's constitutional and what's unconstitutional. They're about how governments want their employees to behave. And they're usually issued on day one of a new administration. And the idea is, for this administration here are principles that we think should guide the behavior, the conduct of administration employees.
So my thought was that hopefully in January 2025, alongside or as part of an ethics executive order, you would have an administration issue guidance on how it should communicate about speech related decisions with private entities. And then I provided a whole bunch of different options for what that might look like. I'm less wedded to the idea that like any of those particular components are critical or vital and more that, you know, I think an executive order of some sort would be a vehicle that an administration could use to set up some guardrails here.
Benjamin Wittes: Kate, what do you think of the idea of solving this problem by executive order? It's principally a problem of government speech, federal government speech, and the president can regulate federal government speech. What do you think?
Kate Klonick: I mean, I generally am not a fan of executive orders, so I think that I'll start there. I don't think that they're the best way for any kind of lasting change, and I think this is principally not something that should be coming from the least democratic of the branches. Or maybe not the least democratic, but, like, the second to least democratic of the branches. The other thing is that I just don't think it's very practical, because an executive order needs, like, a fair amount of, it needs, like, an audience and a constituency group to kind of stand behind creating something like that. And that's just not something that's going to happen with jawboning. I don't know who's going to get out there and kind of champion the president or the executive doing something like this. So, so there's that.
And the other thing is that, I mean, I had really pushed Matt previously, in other things that he'd written, especially his piece in the Knight First Amendment Institute's kind of series on this, on the issue of jawboning, to kind of find more institutional capacity for this in Congress. And I still feel that way. I think that the institutional capacity for this is going to be in Congress. I think that drawing out those lines is something that Congress would be best suited to do. Actually, I mean, I think it's a problem, but I don't think that this is a problem that is in such a hurry and in such need of an answer that we need some type of executive order on it. Like, that's just not something that I think needs to happen right now.
In particular, like for, for what, like if we do get, I mean, if Biden decided to issue that now and he lost the election, that executive order would be good for all of like, you know, a solid six months. So like, I just, I just, I guess there's kind of, I would like there to be more clarity around this. I just don't, I just don't see the practical kind of reality of it coming through an EO.
Benjamin Wittes: Let me ask you about that, Matt, because one possible answer to this whole set of problems is that it's not really that much of a problem. That if you don't have a party with standing, it's really because the underlying conduct isn't that bad and doesn't actually affect real people in a direct enough way to, you know, to, to matter very much. You don't have injury in fact traceable to some government action. These are actions that, you know, social media policies are, are actions that affect millions of people's speech. If, if you can't trace it to government behavior, it's probably because it's not really traceable to government behavior. Why should I not react to this with infinite complacency? Yeah, there's no standing. It's actually because there's no problem.
Kate Klonick: Yeah, I think that I want to just really quickly add something here to how you framed this question, that I'd also like to ask Matt: which is that if you do have this issue of jawboning, and you have this problem of the platforms making these decisions on their own, or even perhaps at the behest of the government.
There's something also about this that I think that Matt is in a great position to also answer a question on, which is, like, at the same time, everyone's been really complaining about how slavish these platforms are to their economics, to their bottom line. And so, like, which is it? Like, are they slaves to getting calls from the government and the government telling them what to do? Or are they slaves to, kind of, like, these engagement principles and having access to markets and things like that? Are those the same things? I guess that is also something I just don't feel like there has been enough discussion on.
Matt Perault: Yeah. So I, so I definitely don't think platforms are just slaves to the bottom line. I don't, I think that is like overblown rhetoric that just, it's not, again, it's not consistent with my experience. My experience might have been limited. But the idea, I think lots of people think that platforms only care about money and not at all about mission. My sense is they care a lot about mission and, and they also obviously have shareholder obligations and they care about the bottom line, but it doesn't drive them exclusively. I'm at a university, Kate, you're at a university. I think sometimes the idea is like other institutions only care about mission and don't care about money. My sense is that's blurrier, you know, universities care about money as well as mission. So I don't think tech companies are unique in sort of having concerns on, on, on both sides. They certainly make lots of decisions that aren't motivated purely by money.
Ben, on, on your question, I think, I think you're sort of suggesting that like the only set of things that we should care about are the set of things that are deemed to be unconstitutional. And I don't think that's right here. I don't think it's right in lots of areas. So one good example is hate speech is constitutionally protected speech, and I think it should be. I don't see there's, I don't see in the future a Supreme Court decision that rules that you can regulate, you can ban hate speech. But I think we would probably all agree that in your conduct as people, or if you were government employees, that engaging in lots of hate speech wouldn't be sort of best practices or favored conduct. And so I think there's a lot of stuff that exists beyond the reach of the law. And for very good reason, like I don't think the Supreme Court should relax its standing doctrine just so it can bring in some jawboning cases. I think it's, it's important that the bar is high in court.
I think the question is, is that sufficient? And my view is that governments are able to exert a lot of pressure on tech companies, to take decisions about the content that we want to see, and the content that's lawful or permitted, out of public debate and out of public discussion, and put them in the shadows in ways that I think result in platforms making suboptimal decisions that don't benefit their users. That's not solely the government's responsibility. I mean, platforms could say, no, we are not going to take action, but I don't think that's realistic. I think platforms really genuinely fear retributive action from the government.
And you can look like, I guess the question might then be like, well, what are examples of that? And one that I was thinking about yesterday is the U.S. Trade Representative's decision to roll back free flow of information principles in their international negotiations. That's quite consequential for companies. It's not just consequential for large companies. It's consequential for a wide range of tech companies. I think it is hard to make an argument that that decision has nothing to do with the nature of interactions between tech companies and the government over a long period of time.
Benjamin Wittes: We are gonna leave it there. Matt Perault, Kate Klonick, thank you so much for joining us today.
Matt Perault: Thanks, Ben.
Kate Klonick: Thank you.
Benjamin Wittes: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a material supporter of Lawfare using our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Have you rated and reviewed the Lawfare podcast? If not, please do so wherever you get your podcasts and look out for our other podcast offerings. This podcast is edited by Jen Patja, our theme music is from Alibi Music. As always, thanks for listening.