Lawfare Daily: A World Without Caesars

Published by The Lawfare Institute in Cooperation With Brookings
This episode of the Lawfare Podcast features Glen Weyl, economist and author at Microsoft Research; Jacob Mchangama, Executive Director of the Future of Free Speech Project at Vanderbilt; and Ravi Iyer, Managing Director of the USC Marshall School Neely Center.
Together with Renee DiResta, Associate Research Professor at the McCourt School of Public Policy at Georgetown and Contributing Editor at Lawfare, they talk about design vs moderation. Conversations about the challenges of social media often focus on moderation—what stays up and what comes down. Yet the way a social media platform is built influences everything from what we see, to what is amplified, to what content is created in the first place—as users respond to incentives, nudges, and affordances. Design processes are often invisible or opaque, and users have little power—though new decentralized platforms are changing that. So they talk about designing a prosocial media for the future, and the potential for an online world without Caesars.
Articles Referenced:
- https://arxiv.org/abs/2502.10834
- https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4178647
- https://www.techdirt.com/2025/01/27/empowering-users-not-overlords-overcoming-digital-helplessness/
- https://kgi.georgetown.edu/research-and-commentary/better-feeds/
- https://knightcolumbia.org/content/the-algorithmic-management-of-polarization-and-violence-on-social-media
- https://time.com/7258238/social-media-tang-siddarth-weyl/
- https://futurefreespeech.org/scope-creep/
- https://futurefreespeech.org/preventing-torrents-of-hate-or-stifling-free-expression-online/
- https://www.thefai.org/posts/shaping-the-future-of-social-media-with-middleware
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript below was auto-generated and may contain errors.
Transcript
[Intro]
Ravi Iyer: The results of those kinds of experiments that, you know, reducing the incentive to comment back and forth or to reshare things actually improves the ecosystem, I think is another thing that we can learn about specifically, how do you create more prosocial media?
Renee DiResta: It's the Lawfare Podcast. I'm Renee DiResta, contributing editor at Lawfare and associate research professor at Georgetown McCourt School of Public Policy.
I'm with Glen Weyl, economist and author at Microsoft Research; Jacob Mchangama, executive director of the Future of Free Speech Project at Vanderbilt University; and Ravi Iyer, managing director of the USC Marshall School Neely Center.
Glen Weyl: I, I just think no matter what our goals are, the design of sort of the overall information ecosystem and what gets surfaced is critical.
Renee DiResta: Today we're talking about design versus moderation. The way that social media platforms are built influences everything from what we see, to what is amplified, to what is even created in the first place, as users respond to incentives, nudges, and affordances. These processes are often invisible or opaque, though new decentralized platforms are changing that.
[Intro]
So we're going to talk about designing a prosocial media for the future, and the potential for an online world without Caesars.
I want to just kind of bring you guys in right now, thinking about the difference between moderation as policing a failed end state versus design, right—design as a proactive way to cultivate behaviors, to subtly shift norms, to guide users in particular directions, not necessarily through top-down rule enforcement, but rather by determining the affordances of a system and what the system lets us do.
So one of the reasons that I'm excited for this conversation with you all specifically is that when I read your work, you all have such deep thinking about the specifics of ways that system design can produce better social media.
I know that Glen and Jacob, you guys just had a paper recently released; you titled it Prosocial Media. And I'd love to just start with that. I think the term prosocial media is wonderful. I'd like to maybe ask you to define what that means and tell us a little bit about your work.
Glen Weyl: Yeah, so I think the key idea that motivated the term prosocial media is that obviously social media are doing something social. They're using social information to serve content, but that doesn't necessarily mean that they're achieving the goals that many people had in creating social media, which was to strengthen connections across people, you know, help communities be stronger, and, you know, reinforce the social fabric that they build on.
So social media could, in theory, either be like sustainable agriculture, you know, that strengthens the soil at the same time as it harvests from it, or it could be like, you know, clear-cutting agriculture. And I think many people believe that, you know, social media has actually been undermining the social fabric as it's been harnessing it, and we want to try to make that more sustainable, more regenerative, so to speak.
Jacob Mchangama: Yeah, I think what excited me—you know, I have a much more narrow focus than these two brilliant gentlemen; I come at this topic from sort of a free speech perspective—and I think what excited me about the prosocial media approach is that people in the free speech space have very often been sort of on the defensive, making these abstract, principled arguments that were, A, a bit difficult to apply consistently when it comes to social media, but also, B, just not convincing a lot of people, because social media makes, you know, the harms, real or perceived, of speech so much more visible to a lot of people. So it makes them much more willing to engage in trade-offs and restrict speech than they were in the analog world.
And so I think that the prosocial media approach in many ways is a good way for free speech activists to frame a much more positive vision for social media, one that, A, empowers users at the expense of centralized platforms and/or government—you know, today we're seeing a huge development towards government-mandated content moderation—and also one that says, well, you know, yes, free speech has some harms, especially in an online, connected world where anyone can share anything with anyone across borders, and where some of the harms that can be involved in free speech can be very visible, can travel with lightning speed, and can lead to real-life harms.
But here are some models that might actually use the power of speech and access to information to mitigate and diffuse some of those harms in ways that are constructive, but that rely basically on speech rather than giving outsized power to platforms and/or governments.
So I think that's an incredibly powerful and empowering vision for social media that resonates really well with a basic commitment to what I would call egalitarian free speech.
Renee DiResta: I want to hold the decentralization piece for a little bit because I know we're going to talk about what makes new experimentation possible. I've written about that; we've talked about that in the context of middleware here a little bit.
I want to focus in on specific features and designs. You know, you guys articulate ways of thinking about this. You talk a lot about bridging and balancing, and this idea of bridging and balancing as a goal. What do we mean when we talk about bridging and balancing as a way to create a more prosocial web?
Glen Weyl: Well, I, I don't want to go on too much of a historical digression here, but I think it's useful to understand that both of the kind of competing visions or, you know, falsely, as you pointed out, Renee, maybe competing visions of what media should be like today really came out of World War II.
There were two different movements. One was that right after Pearl Harbor, Henry Luce, the publisher of Time, convened a commission called the Hutchins Commission that came up with principles that the media could abide by in order to avoid being nationalized, because they were very worried that, you know, division in the media and misinformation had led to the U.S. not being prepared for Pearl Harbor, and that the government was just going to nationalize the media as a result. So they wanted to avoid that.
On the other hand, there was a group of people led by Margaret Mead and other social theorists who thought that, kind of, the concentrated nature of the media—that, you know, broadcast nature of radio and journalism—had led to fascism, and that the way to address that was to have, like, a much more multi-sided media.
And I think we've arrived at both kind of this desire for bringing people together, and this desire to have lots of voices heard from those two respective movements. And I think the real question is how we can bring those together.
And that's really our goal in this paper: to use this notion that really came out of the Hutchins Commission—of content that brings people together across divides and content that reflects the diversity of positions people have; that's bridging and balancing—as sort of the critical elements that need to show up, while we also ensure that we have all the different diversity of angles that social media allows without too much gatekeeping.
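For a concrete illustration of the bridging idea—this is a toy sketch, not the algorithm from the Prosocial Media paper, and the function and community labels are hypothetical—consider scoring an item by how well it is received across otherwise separate communities, rather than by total engagement:

```python
from collections import defaultdict

def bridging_score(reactions):
    """Toy bridging score.

    reactions: list of (community_id, approval) pairs, approval in [0, 1].
    Returns the minimum per-community average approval, so an item scores
    well only if every community that engaged with it tends to approve,
    and scores zero if only a single community engaged at all.
    """
    by_community = defaultdict(list)
    for community, approval in reactions:
        by_community[community].append(approval)
    if len(by_community) < 2:
        return 0.0  # no evidence of bridging across divides
    return min(sum(vals) / len(vals) for vals in by_community.values())

# An item loved by one community only does not bridge; an item that two
# communities both rate highly does.
print(bridging_score([("A", 0.9), ("A", 0.8)]))              # 0.0
print(bridging_score([("A", 0.9), ("B", 0.7), ("B", 0.8)]))  # 0.75
```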
Renee DiResta: And you're talking specifically about, I guess, the question of how. So I've seen that Audrey Tang, who is the digital minister of Taiwan and a co-author on your paper, has spoken a bit about surfacing where content comes from, right, kind of labeling the communities it originates from.
I was kind of intrigued by this idea because when I was at Stanford Internet Observatory, we would do these things that we kind of called narrative traces, right? Where did something come from? Where did that meme originate? And for us, that was a question of like, is it authentic, right? Did it come from some authentic community? Was it something that was kind of dropped in through an influence operation from a state actor or whatever? What is the provenance?
How do you guys think about that? Why do you think surfacing where something originates is helpful for this bridging or prosocial behavior? How does it help us have a better web?
Glen Weyl: I think there's actually two different aspects to this. One is where does it originate from, and Audrey's done some amazing work on that in Taiwan on basically creating liability for things that don't have signed provenance. So I think that's really a fascinating approach and one that I'm a big fan of.
There's another element that we emphasize more in this paper, which I also think is important, which has more to do with where does the popularity of this originate?
Renee DiResta: Okay.
Glen Weyl: You know, there's the people who created it and their signatures, but then there's the people who liked it and re, you know, posted it and so forth.
And obviously the first one you can do just by having some kind of disclosure about where it came from or, you know, cryptographic signature, which is what Audrey worked on.
But the second one is complicated, because there are going to be thousands or millions of people who liked or retweeted something. So you can't just list all their names; you have to give some characterization of them. And that's why we focus on this notion of using the internal learning that the machine learning tools are doing about the communities that something's appealing to—because that's how they're doing personalization in the first place—and then trying to be transparent about that as a way of giving a sense of who this is popular with, so that you know the audience that you're sharing something with. Which I think is a really important element, not just for sort of misinformation or news-related reasons, but also just for isolation reasons.
I think, you know, when you used to go to a concert or attend a lecture, you would get a sense of the other people in the room. And that's much harder online. Obviously it does happen in environments like Reddit, but we'd like to bring aspects of that to this by using transparency about the internal understanding of the community that the models already have.
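To make that transparency point concrete—a hypothetical sketch, assuming the recommender already assigns each engaging user to a community cluster (as Weyl notes it effectively must for personalization), and with made-up names throughout—a "who is this popular with" label could be summarized like this:

```python
from collections import Counter

def popularity_label(engaging_users, user_to_community, top_k=2):
    """engaging_users: ids of people who liked or reshared an item.
    user_to_community: mapping from user id to the community cluster the
    recommender already uses internally (an assumption of this sketch).
    Returns a human-readable summary of which communities the item is
    popular with, e.g. for display next to the post.
    """
    counts = Counter(user_to_community.get(u, "unknown") for u in engaging_users)
    total = sum(counts.values())
    parts = [f"{community} ({100 * n // total}%)"
             for community, n in counts.most_common(top_k)]
    return "Popular with: " + ", ".join(parts)

# Hypothetical example data.
communities = {"u1": "local gardeners", "u2": "local gardeners", "u3": "city politics"}
print(popularity_label(["u1", "u2", "u3"], communities))
# Popular with: local gardeners (66%), city politics (33%)
```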
Renee DiResta: Ravi, I'm curious, in your role at Facebook and other places, how did you all think about that question of what communities were engaging with content? Was that something that you were also attuned to, this question of bridging as far as what was curated out to more people?
Ravi Iyer: Yeah, I mean, I actually started my time at Meta working on polarization. And so I think there are three findings, or things that we learned, that are relevant here and that can sort of give some specificity to what prosocial media could be.
One, you know, there's some famous work—there's a Wall Street Journal article about this, but there are also other books, not about Facebook, about this—and basically it's the finding that many publishers and politicians say they produce worse content. They produce divisive content, content they're not proud of, because of the algorithms on social media.
So, you know, Jonah Peretti of BuzzFeed went to, you know, Facebook and said, like, look, we're producing divisive content, not because we want to, but because that is what does well in your algorithm. Many politicians in Europe say that. And so that's not a moderation thing. That's not about, you know, figuring out what you can say or not say. That's a company sort of incentivizing, effectively paying people with attention.
The second thing is that a lot of people see content they don't like. And a lot of people don't like divisive content. People don't want to argue with their, their relatives online all the time. You know, there's one study, 70 percent of Facebook users see content they want to see less of. You know, they often see it multiple times per session. They often see it within the first five minutes of scrolling.
And so, you know, there's a business incentive to actually reduce these kinds of divisive experiences. It actually, like, turns people off of these products. And that's why you see, I think, a lot of people moving from, you know, some of the more divisive platforms to, like, a Bluesky, or to someplace where it just feels like you can have a conversation again.
And then the third thing is that, you know, one thing I worked on was these break-the-glass measures—design measures that are kind of like temporary design changes that change the ecosystem and sort of change those incentives.
And we did that in part because when you rely on moderation, you make a lot of mistakes. And so if you're working on something like Myanmar, Ethiopia, something in some far-off place, it's really hard for a company—for anyone, it's really hard for anyone—let alone a company thousands of miles away to make decisions about what people should or should not say.
But if you can say something like, you know, look, maybe we shouldn't be optimizing for the thing that gets the most comments—you know, obviously the thing that gets the most comments is not always the best thing, right? Like, the picture of my night out last night that got the most comments might be a great picture, but, like, your health information is not meant to be debated back and forth, right? Like, it's meant to be boring. And the fact that you're talking about it a lot actually maybe means that it's not great information.
And so, you know, the results of those kinds of experiments—that, you know, reducing the incentive to comment back and forth or to reshare things actually improves the ecosystem—I think is another thing that we can learn from about, specifically, how do you create more prosocial media.
Renee DiResta: With the break-the-glass measures, that was also, as I recall—particularly around, like, post-Jan. 6—that was also deprecating political content, right, and that was sort of trying to resurface more content that bridged people in the sense of things that were more human, right? Here's more from your friends, more baby pictures, more wedding pictures. Is that the sort of thing that was kind of upranked instead?
Ravi Iyer: I mean, there are lots of things that were done, and I think there's an article in Tech Policy Press about all the very specific things that were done around Jan. 6.
I mean, the things that I think are most worth learning from are removing some of these engagement incentives—so not just removing a whole class of content, but actually sort of improving the incentives within that class of content. You know, people should be allowed to talk about politics, but they shouldn't be incentivized to talk about it as entertainment, and when you optimize for, like, the thing that gets the most comments, it gets to be more like entertainment.
The other thing that was done around Jan. 6 that I think is worth learning from is rate limits—like, there was a reduction in the number of times you could invite people to a group. You know, if you were to ask yourself, how many times should a person be able to invite people to a group, or how many times should I be able to message strangers, or how many times should I be able to do anything, you will come up with a far lower answer than the rates platforms allow.
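A rate limit like the one Iyer describes is simple to express in code. Here is a generic sliding-window sketch; the class name, limits, and numbers are illustrative assumptions, not Meta's actual implementation:

```python
import time
from collections import defaultdict, deque

class InviteRateLimiter:
    """Allow at most `max_actions` group invites per user per `window_seconds`."""

    def __init__(self, max_actions=5, window_seconds=3600):
        self.max_actions = max_actions
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)  # user_id -> timestamps of recent invites

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        recent = self.history[user_id]
        # Drop invites that have fallen outside the window.
        while recent and now - recent[0] > self.window_seconds:
            recent.popleft()
        if len(recent) >= self.max_actions:
            return False  # over the limit; ask the user to slow down
        recent.append(now)
        return True

limiter = InviteRateLimiter(max_actions=5, window_seconds=3600)
print([limiter.allow("user_a", now=t) for t in range(7)])
# [True, True, True, True, True, False, False]
```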
Renee DiResta: Yeah, no, I remember that. We were always mystified by that, actually, yes.
You would see, a lot of the time, when we would look at even, like, markers of inauthenticity, one person mass-blasting the same post in the same second into 60 different groups, and it was always kind of a remarkable affordance that you had the power to do that.
This was where, actually, funny enough, the old freedom of speech vs. freedom of reach argument that we were making back in 2018 got reframed as content moderation, right? Like, Elon put it on top of his content moderation, you know, page or whatever. But we were talking about it in the context of what you're describing—in the context of curation, actually. Like, what is it that should be curated and amplified, like, in the moment? What is the incentive that you create for particular types of content to be boosted? And I think that's a really interesting question.
And then, I remember—and maybe this takes us into the question of who decides—one of the things is that, you know, break-the-glass measures actually became politically controversial, right—and Jacob, I don't know if you want to pop in on this with your opinion—but this question of why does the platform get to decide? It's a very opaque shift. It obviously has impact, particularly if political content is the thing that gets deprecated in these moments, or people begin to feel that the inability to invite people into groups is somehow limiting the potential growth of a political movement or something along those lines. Like, this is where you start to see that tension come in.
The question around transparency, and to what extent design intersects with the regulatory conversation, is a very interesting one, right? Because in areas where the moderation conversation quite clearly can't intersect with regulation, it's not as clear-cut, I think, that the design conversation doesn't. And I'm curious what you all are seeing and thinking about on that front.
Jacob Mchangama: I think transparency is obviously important because it reduces the speculation and conspiracy theories around this—it probably doesn't eliminate them, but it ideally reduces them, especially if there are also ways to track how platforms actually implement it.
I mean, of course, there is a spectrum where you say, you know, if you distinguish between freedom of speech and freedom of reach, and if you have clearly ideological ways to amplify reach and de-amplify it, then, you know, I, I think we're getting into, to free speech territory.
But the more, I guess, you allow users to have input on this, the better, because that then limits the platform's ability to skew the conversation. But then having full transparency on where the platform decides, and what its design is actually based on, and the amplification effects of that, I think is the optimal solution. How you implement that in practice is something that I would leave to smarter people than myself, like Glen and Ravi.
And I think you would probably never be able to have a system that would satisfy everyone, just because we deeply disagree about these things, and everyone, when you look at a platform and what goes on, everyone will have this tendency to say, well, you know, I have a voice here, but why do I personally have such an extremely pathetic reach on Bluesky, for instance? That must be because Jay Graber has designed it in such a way that people like me don't get to, to-
Renee DiResta: Can't possibly be my post, must be that somebody's putting their thumb on the scale. No, I get it.
Glen Weyl: I do think Jacob's getting at something important. The reason why I spend a lot of time focusing on terms like balancing and bridging, or, like, trying to come up with these big principles in terms of how we communicate, rather than, like, relatively technical tweaks—even though, of course, they have to be implemented technically—is that I think, like, the legitimacy and the way that we talk about these things, and the ability to relate them to sort of democratic principles, is actually, like, central to what it is for them to be good design features. You know what I mean?
Renee DiResta: Yeah, say more about that.
Glen Weyl: I mean, I guess, like, if you think about our democracy, right, like, we have a principle of free speech, but we don't have the principle that anyone can come and speak in front of Congress at any time, right? The people who get to speak in front of Congress have some kind of democratic procedure that ensures that they're representative of the population in some sense, and there's some process of doing that representation that is, like, written down somewhere in a document, and, like, people are concerned about adherence to the rules of that document. And so there's just, like, a huge amount that's put into the allocation of reach, of, like, the effective voice that we have, as well as having free speech.
So I think this is something that's very well established, and I think the more that we can tie however it is that we are organizing things to, like, principles that are kind of meaningful and can be written down and legitimated in this sort of way—that was also very critical to what Audrey did in Taiwan—the more that we're going to be able to, you know, get the legitimacy that's necessary for any of this to work. Because the reality is, if we moderate out X and Y, but no one thinks that was legitimate, they're going to go find it somewhere else anyway, and they're not going to buy into what they're getting on the platform. So that legitimacy, I think, is just as important as the efficacy.
Jacob Mchangama: But there also has to be a very strong element of bottom-up legitimacy, because otherwise you're just getting back to sort of the digital version of the analog public sphere, where you have sort of traditional institutional gatekeepers, and then there's not going to be buy-in from those who didn't have a voice before.
So I think that's incredibly critical, and it goes back also to this ideal of egalitarian free speech underlying this.
Renee DiResta: No, I agree. I think—have you seen the paper Susan Benesch wrote? I'm blanking on her co-author's name, unfortunately, but it was on time, place, and manner restrictions, right? I think some of us have talked about this in the past. I've written about it in the past also.
I was writing for a while about circuit breakers; the dynamic there was about information flows, right? How do we think about design and information flows? When I was on Wall Street, circuit breakers were a thing that were put in place so that people could be put into a more reflective mindset, so that stocks are not constantly whipsawing around when new news comes out—a kind of temporary halt so that people can digest information. And these models that we have of thinking about design tools, and friction in particular, as temporary ways of shifting people's thinking, putting them into a more reflective mindset so that we're not kind of careening around from one information crisis or rage machine to the next—the ways that design can actually do that quite effectively, I think.
I don't know if you all have seen that paper or that research. I think, Ravi, perhaps you have; I wonder what you think about that.
Ravi Iyer: Yeah, yeah, the other author is Brett Frischmann, and that's a great paper. It's about time, place, and manner, friction, and design.
I think it's a great paper, and I'd say that the most important thing—you know, we're talking about who should decide; we don't want these big gatekeepers—I think the best way you do this is no one decides, right? So there's a way that you can reduce the reach of content, which is, like, you've identified kinds of content that you want to demote, and you're kind of making a moderation decision. You're deciding, like, these are things I don't like, I'm going to reduce those things.
But if instead you decide, like, I'm not gonna optimize for what people pay attention to; I'm gonna do surveys and give them things they aspire to consume, which tends to be more, you know, healthier content, more aspirational content—then I'm not deciding, users are deciding, right? That's just a much more legitimate way to do it, and it supports users' agency. It's not taking away from what users want.
And a lot of users, they don't want—you know, they get more sexual content than they want. They get more sensational content than they want. And so if you ask them, you know, aspirationally, what is it you want, you actually get a different answer.
So I think there's a way that you can design systems where no one's deciding. There is no, like, gatekeeper. It's really like designing so that users decide and all the decisions are really content neutral, not about what we do or do not want people to say.
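As a sketch of the ranking change Iyer describes—with hypothetical field names, assuming each candidate post carries both a predicted-engagement score and a model estimate of how much users say in surveys they want to see content like it—optimizing for stated preferences rather than engagement is essentially a change of objective:

```python
def rank_feed(candidates, optimize_for="stated_value"):
    """candidates: list of dicts with hypothetical fields:
      'predicted_engagement'  - model's guess at clicks/comments/reshares
      'predicted_stated_value' - model's guess at the survey answer
        "do you want to see more of this?" (the aspirational preference)
    Returns candidates sorted by the chosen objective, best first.
    """
    key = ("predicted_stated_value" if optimize_for == "stated_value"
           else "predicted_engagement")
    return sorted(candidates, key=lambda post: post[key], reverse=True)

# Illustrative data only.
posts = [
    {"id": "outrage-bait", "predicted_engagement": 0.9, "predicted_stated_value": 0.2},
    {"id": "local-news",   "predicted_engagement": 0.4, "predicted_stated_value": 0.8},
]
print([p["id"] for p in rank_feed(posts, optimize_for="engagement")])    # ['outrage-bait', 'local-news']
print([p["id"] for p in rank_feed(posts, optimize_for="stated_value")])  # ['local-news', 'outrage-bait']
```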
Renee DiResta: I want to talk about that, devolving control to the users, maybe in the context of decentralization. But while we're talking about who decides, and we sort of alluded to the regulatory conversation—the thing that I always thought was interesting, and that ties into the legitimacy piece here, is that I think most people don't like the idea that social media companies decide, right? It is a form of unaccountable private power. It's quite opaque. Nobody really knows what they're doing. There have been efforts to create some transparency. The Platform Accountability and Transparency Act was one such bill—it never managed to pass.
Ravi, I think you have looked at a number of other different types of regulatory interventions that touch more on design. What are you seeing? If we say that user control is one thing, but it's a long way off, right, and it may be that centralized social media incentives don't align—and we can talk about whether that's true in a couple of minutes—decentralized is its own animal; we can talk about the trade-offs there. We don't want the government making moderation decisions.
How should we think about the role of the state, whether that's America or what's happening in Europe with the DSA? How should we think about the regulatory conversation around design?
Ravi Iyer: Yeah. I mean, I think it's analogous. I use the analogy of cars and food. Like once upon a time, we didn't have regulations for how cars were designed. And so you could have a car without seatbelts or you could make food, however you wanted to in your meat factory. And then people got together and said like, look, we need some minimum standards so that you know, people don't get sick and people don't crash and go through their windshield.
And so I think, you know, the physics of social media are increasingly becoming understood, and we need minimum standards for the design of social media products. And, you know, in some ways our First Amendment does some work for us, because you actually can't regulate in the United States, you know, what people can and can't say online, but you can regulate whether a product is safe, and the Supreme Court has weighed in that there's a difference between the expressive components of an algorithm and the functional components. There is no message trying to be conveyed by an algorithm that says, you know, I want to keep you on here as long as possible. And we also know that there are lots of externalities to that.
There's certainly harm to kids. And so you see things like the Kids Online Safety Act or the SAFE for Kids Act, or the Age-Appropriate Design Code. You're seeing a lot of laws really go in this design direction, both because it's more effective and also because it's more legitimate and, you know, less prone to abuse, and it's required by our Constitution.
Renee DiResta: Maybe we should chat about the, the user option then.
So right now, the decentralized option where users have the most control, and where there are the most users, is Bluesky, right? We've seen a pretty big adoption curve for them recently, and I think everybody here is on Bluesky now, right? Y'all are, all three of you? Yep. Jacob's having some problems with it, but the rest of us are doing fine.
I guess, for those who are listening who are not on Bluesky, there are a lot of different ways that users can control their experience. There are interesting ways to have control over what used to be curated through the "people you may know" algorithms, which were Facebook's and Twitter's way of suggesting users for you to follow algorithmically.
Now there are what Bluesky calls starter packs, where you can find one person that you trust, click on their starter pack, and sort of subscribe to and follow all of those people. So it solves the cold start problem, and you have some agency over immediately going and finding people that you like, that you trust, that you find interesting, and then seeing who they like and trust and find interesting. And so you can kind of build your initial social graph that way. So there's the sort of social graph building piece.
There is the ability to create and subscribe to feeds. So for a long time now, you can just pick different types of feeds that you want. There's a gardening feed that I subscribe to as a really crappy gardener; you can find people who will help you, like, figure out why your plant is dying. There is Blacksky for, you know, people in the Black community who want to find, you know, sort of the Black Twitter community on Bluesky. There are so many different types of identity and affinity group feeds that you can find and follow. There are different topic and news feeds; there's one that's really great that's gift links, if you just want to subscribe to all the different gift links that people drop on the platform and just, you know, read news for free, basically. So it's really just a kind of cool way to immediately curate your feed.
And the thing that's really nice is they make it very easy to toggle between feeds. So if one feed is very boring—if your Discover feed or, you know, your friends that you follow are not posting very interesting things, or they're kind of quiet that day—you know, you can pop into one of your other 10 feeds and immediately see, like, what else is happening elsewhere on the platform.
And then finally, the other thing that they have, which sits kind of at the intersection, we can say, of moderation and design, is the labelers. So you can actually choose to have certain content either obscured or kind of hidden in your feed—they'll put up a little interstitial over it, label it—and you can also have shared block lists. So that's, roughly speaking, the different ways in which users have incredibly granular control over very different parts of the Bluesky experience.
I think one of the reasons that we've seen this adoption is really the mainstream platforms swinging the pendulum pretty hard on the moderation front, right? So I think a lot of the migration to Bluesky was in response to Elon buying X, and, I think, the liberal audience feeling that they didn't really like what happened to curation on X. They didn't really like what happened to moderation on X, so they moved to a different platform. I think you saw Zuck do the same thing—he recently had a pretty big shift in what he said Meta was going to moderate—and, again, a little bit of a bump there.
Curious how you all see this shift to decentralized platforms. I have seen it as an opportunity to show users what is possible, but I'm not sure how many users are thinking about it in those terms. I kind of get the sense that more people are there because they think of it as a vibe shift, right? They're fleeing what they see as, like, bad moderation and curation vibes in other places. And so they're coming over to this new place, but they're not necessarily thinking about it in terms of, wow, it's really fantastic that I have more agency.
Jacob Mchangama: My hunch is that you're right. So if you fled X, and now Facebook, there's a good chance you did so because you thought that maybe content moderation was getting too lax, right?
Renee DiResta: Or, or because you saw Elon in your feed constantly, right?
Jacob Mchangama: Yeah, yeah.
Renee DiResta: Or just like, curation on X got really weird.
Jacob Mchangama: Yeah, and also, I mean, Elon—something that I've written about many times—is not exactly your principled civil libertarian free speech defender; he is very much someone who defines free speech as stuff he likes and has all kinds of arguments to limit and moderate things that he doesn't like. But that's sort of the way he marketed it. And I think that turned off some people.
And also the announced changes by Zuckerberg, which, you know—you can be sort of cynical and say that was clear pandering to the new administration coming in, in order to avoid sort of the worst vengeful consequences from a new administration where Trump obviously was not a big Zuckerberg fan. But I think that some of the announcements were—from a free speech perspective—actually pretty good in the way that they were announced. Implementation, obviously, is different.
So I like some of the features that you mentioned on Bluesky. That's fine. I guess, from a free speech perspective, the real difference is how light-touch Bluesky is when it comes to centralized content moderation. If, for instance, you look at the hate speech policies of Bluesky, they're not that different from other platforms'. It's not sort of, you know, "we have all these features, so we're not going to touch a lot of hateful stuff centrally." I don't have any stats on how they implement it, but when you look at the policy, it's not very different from the other platforms.
And we have to remember that with the other platforms there's been—we put out a report a year or two ago where we looked at what we call scope creep in the hate speech policies of platforms.
So we looked at, I think, eight platforms and their hate speech policies, from when they were first sort of publicly iterated up until 2022 or 2023. And you see a huge increase in the number of protected characteristics, you see sort of lower thresholds, and even though most of the platforms say that they are committed to human rights principles, their hate speech policies actually go way beyond the definition of hate speech in the International Covenant on Civil and Political Rights, this UN convention, which on the one hand protects free speech, but then says you have an obligation to prohibit narrow categories of hate speech. And even though these conventions are obviously not legally binding on private platforms, the platforms say they are committed to these principles. But what we found, very clearly, was that the direction was towards more restrictive hate speech policies.
And just by looking at Bluesky's hate speech policy, it doesn't seem to be much of a game changer. And I'd be interested to see data on how they enforce this, because we've also done a number of studies—first in Denmark, but the latest ones we did were in Sweden, Germany, and France—where we looked at some of the most popular politicians and media outlets and at the deleted comments there. And we found that the vast majority—I think between 90 and 98 percent on YouTube and Facebook, respectively—were perfectly legal comments, and most of those that were deleted were not only legal, they were not particularly controversial.
So that suggests that this scope creep has had a consequence and impact not only on lawful speech—you'd expect a fair amount of lawful-but-awful speech to be moderated away—but even on speech that is not particularly controversial.
Ravi Iyer: I mean, I'd just like to echo that—that dovetails well with my experience. You know, on hate speech, I'll agree with both Jacob and, in some ways, with Elon that the concept of moderating on hate speech—you know, the goal is reasonable, but the way it actually gets implemented in practice has a lot of negative effects.
So a lot of the things you end up taking down are things like "men are scum"—you know, not things that we actually think are harmful. And a lot of the things you end up leaving up are what you'd call fear speech. So people talking about a crime committed by an immigrant, just reporting on it, and then you see all the vitriol it generates. And so you're never going to get at that kind of thing with a policy.
And so I think, you know, you're right, Renee; I don't think people are responding to differences in moderation because I don't think those actually make a huge difference in divisiveness. I actually think the thing they're responding to is a vibe change between Twitter or X and Bluesky. Like, people don't want to post something and, you know, get attacked by 300 people. They want to have a reasonable conversation with regular people. And so if you have a platform where it's normative to just attack each other, then, you know, regular people are going to leave.
Renee DiResta: Well, I think design really does so much toward shaping norms, and this is where I think that ties back into what Glen is describing and the work around what do you curate, what do you surface. We talked a little bit about bridging as a means for surfacing disagreement without being disagreeable—I think that's how I've seen it expressed in its simplest form. Glen, I don't know if you want to talk about that.
I want to also mention Masnick's work on overcoming digital helplessness and talk about the agency piece. But give me a little bit about that concept of how we create that sense where users do feel comfortable—where the norms are such that you feel like you can speak without being, you know, barraged by a mob of people because of what is curated and surfaced, and it doesn't create main characters constantly.
Glen Weyl: I mean, I think it's important to understand that this emphasis on design over moderation is both a defender and an attacker thing. It's both good and bad. Like, so, for example, there's wonderful work by some colleagues of yours from when you were at Stanford, Renee—Molly Roberts, Jen Pan, Gary King.
And what they show is that the, you know, most effective stuff that the Chinese do is not actually the Great Firewall. It's–
Renee DiResta: It’s the forum sliding.
Glen Weyl: It's flooding the space with division attacks, with distraction, garbage, basically. You know, you can talk all you want about free speech, but if there's deafening noise playing everywhere in the room, it's not feasible to speak over that, right?
And so I just think, no matter what our goals are, the design of sort of the overall information ecosystem and what gets surfaced is critical. You know, to achieve the goals of making people feel that they can be, you know, part of the conversation, to me means doing exactly what you were saying with the Bluesky feeds, while maintaining some of the ease that you get from a more algorithmic curation—which is, people need to know the context of the conversation. If people don't understand where they're speaking, it's going to be very hard for them to do that. There are completely appropriate times to start ululating or speaking in tongues. It's called church, you know, or mosque, right? But that's probably not something to do when you're in an academic conversation about chemistry. And if we let everything get mixed together and people don't have any sense of that context, then you're going to get a lot of inappropriate behavior, not for any particularly malicious reason, but just because people don't know what conversation they're in.
Like, you know, "men are scum," for example, is a very contextual thing. Like, if you are in a conversation that is, like, meant to be bridging a bunch of different things on controversial issues related to feminism, or abortion, saying "men are scum" could be a pretty problematic remark. If you're having a conversation about, you know, sexual abuse, it might be a very appropriate thing to say.
So not giving a sense of the context or the audience that you're speaking to can really undermine our ability to have civil conversations, and I think that restoring that in ways that are consistent with the ease of an algorithmic feed is really important. That's a lot of what we're trying to do.
Jacob Mchangama: But I think, here, again, it's important that we still have those spaces for those who want the robust, uninhibited discussions. Also because, I mean, human beings are, you know, driven by our emotions a lot of the time, right? We're sitting here, we're having a rational discussion. We're saying, what would an optimal information space look like? And we can have great ideas about that, but the human beings that navigate it are not always motivated by those.
And so you have—let's take, you know, the latest example, the Mahmoud Khalil case. You know, that's something that has upset a lot of people, and they're going to vent their frustrations about it and their fears about government overreach on free speech. And they're not necessarily going to express that in a very polite way, because they think that the government is curtailing First Amendment rights in a way that's really scary.
And you have to have spaces for that, even though it sometimes delves into hyperbole. Then you can have, you know, a feed where First Amendment lawyers have a much more substantive discussion about the niceties of the case. And I want those things to coexist.
Renee DiResta: Do you want them to coexist on the same platform? I go back and forth on this. This is the challenge with decentralization, right? It gives people the opportunity to move in response to the vibes, meaning you don't have to be on Twitter. You can go be on Bluesky, which is currently perceived as Lib Twitter. My hope is that it won't be for very long; my hope is that people recognize the, the technological capacity, the ability to build, much like the Fediverse, right? Run your own server, do your own thing, set your own rules. Reddit, again, the same thing. You've got infrastructure, make your subreddit. You can have r/conservative and r/liberal coexisting in the same place on the same infrastructure.
Are you looking for people to be in dialogue with each other? Because that, I think, is the piece that is struggling. There are a lot of different social experience sites that are coming about, and if you want to find, you know, the saltiest possible world, it's always been there. It's called 8chan. You can go, right?
Nobody's ever been deprived of that experience; the question is, you know, how do you create the spaces where the disagreement manages to come in contact and achieve consensus. Because my big concern is that we've created places that people can go to for the vibes, but we haven't found ways to use design as solutions to do that bridging and create that consensus. And even as we've created more small public squares, which I think are good, we have not yet found the design solution that bridges that consensus space.
Glen Weyl: Renee, the way I think about it is that giving the space for the small conversations might seem like a contrast to doing the bridging, but I would actually argue that it's like a necessary other side of the coin. Because until we understand what those smaller spaces are, we don't even know what to bridge across at some level, right?
So I actually think, you know, my ideal design for this type of situation is one where there is a common platform that has affordances for both of those things and actually uses the data from each to inform the other. Because by having the awareness of the small conversations, we know what the larger conversation—if you choose to tune into it—is going to need to navigate and bridge. Because without the smaller ones, there's just no way to be attuned to that.
Renee DiResta: Mike Masnick wrote a really interesting piece in January of this year called "Empowering Users, Not Overlords: Overcoming Digital Helplessness." It asks users to make a pretty big mental shift, a concept shift, in how they engage with their role, their own agency, on social platforms.
This question of, you know—I remember some controversial people landed on Bluesky, and even though they didn't post very much and they did nothing that was sort of directly, immediately, you know, obnoxious on Bluesky, some members of the community were extremely angry that they were there, right? Because of their past behavior on other platforms, which they found upsetting, offensive, etcetera.
And this question of, you know, you can have a very strong block feature. You can empower users with specific tools. You can even create, again with federation, the ability to defederate from other servers.
What do you think shifts the way that users respond, rather than kind of calling for the mods to take an action? Do you think that that's a reasonable expectation, that people should be rethinking their relationship to their own agency here? Or is that an unrealistic expectation?
Glen Weyl: I mean, I think there's some different categories of users, so I don't think everything needs to be devolved. There's no one type of user, right? Like there's, there's some people who have massive followings who have official positions within certain communities. And I think the notion of having those people take on additional responsibilities, which they already do in the world, is very consistent with the role that they play.
I mean, there are people who are, quote, users of Bluesky who are also literally the editor of the New York Times or, you know, the pastor of a megachurch or whatever. And the notion that one would expect those people to take on roles in the digital space that are commensurate with their roles in the physical space, or that there would be digital-native equivalents of those that also exist, makes a lot of sense to me.
The notion that everyone needs to be acting in such a sophisticated way seems sort of unrealistic to me. And, you know, I think the best designs would allow people to sort into those roles and take on those responsibilities as appropriate to the social role they're playing.
Ravi Iyer: I mean, I'd say we do want people to interact in the same space. And my goal, you know—I think there should be room for everyone, but I would prioritize regular people. And I think a lot of these platforms don't prioritize regular people. They prioritize the hyper-engaged, you know, online warrior, and that's not most people in society.
And so I think we in the world know how to make spaces that prioritize regular people. And if you are, like, an online warrior who just wants to argue about everything, we know how to sort of exclude those people from those spaces, or make them take their turns, or limit how much they dominate a space.
And I think we just need spaces like that online, where, you know—and I think you should be able to argue and say things in strong ways or say things in academic ways, but you should be well intentioned. You shouldn't be there to create an argument. You should be there to have a thoughtful discussion. And unfortunately, our spaces aren't designed for that.
And so it may take a shift. The reason we don't see it happen as much is because we often prioritize, like, a space that's used a lot, and so we're used to, like, I have to refresh my feed constantly and see what's new there—even if there maybe isn't something new to be learned, right? And so maybe we need to check our feeds every two days instead of every 30 minutes. And then maybe the conversation would feel more natural.
Renee DiResta: I liked, you know, Jay's talk at South by—I don't know how many people noticed this, but Zuckerberg had been going around in his, you know, sort of Roman—how often do you think about the Roman Empire meme—shirts, sort of "all Zuck or nothing," comparing himself to Caesar, the "either Caesar or nothing" thing—and then she had a shirt on that said "a world without Caesars."
I would butcher the Latin pronunciation, so I'm not even going to try, but it was just sort of a nice way of wearing a shirt that articulated the ideological distinction between a platform that is run in accordance with the vision of one person, in a very top-down, controlled leadership, versus the world without Caesars, which I think is a really appealing way to phrase the potential of decentralization.
Since I know we only have a couple minutes: platforms, lawmakers, users, everybody has very different visions for this future of speech online. I'm curious what you all see as the most realistic outcome. Where do we see things going over the next five years?
Jacob Mchangama: I think that we're at a moment where lawmakers in a lot of countries, including democracies, would be skeptical—especially when it comes to sort of decentralization—if you were to say, let's allow users to have more control and then minimize our content policies and our centralized moderation.
I think that, you know, if you look at what's going on in Brazil, for instance, look at what's going on in India, and look at the European Union—with the way that the Trump administration is acting, I think there's even more skepticism about American platforms in Europe, and even more of a wish to say we need to have control over what's going on on these platforms because they undermine our democracy. Unfortunately, I think some of those reactions are potentially going to frustrate some of the ideas that we're discussing today.
One of the things that I also liked about the article is that some of these ideas have been implemented in Taiwan, and so I spend a lot of time sort of saying, you know, we don't always have to think, for instance in free speech debates, about the dichotomy between Europe and the U.S.
There are actually really interesting places around the world—a country which faces an existential threat, including state-sponsored disinformation on a scale that no other democracy faces, and that actually tries to navigate this challenge without resorting to some of the solutions that well-established democracies, unfortunately, are flirting with. So I try to point to that as a way forward, and I think that would do well. Unfortunately, it seems to me that a lot of lawmakers don't necessarily know that, and they still think in these very binary terms.
So in the short term, I'm probably pessimistic. But I think—and this goes back to my initial remarks that, especially when you work in the free speech space, you can't just talk about John Stuart Mill and principles—you have to show something concrete, something that works, where people say, okay, I actually see this is something that works, this takes care of some of my concerns, and now I'm no longer so inclined to say, well, I need a platform to implement my free speech or my policies and, you know, do away with the people that I don't like, or I want the government to adopt these rules to protect me from whatever evil forces I see out there.
Glen Weyl: Renee, I don't know if you've been following the financial markets or the newspapers, but it seems like it's a general time of uncertainty and no one knows where it’s going. And that can be a problem, but I actually think it's kind of great. I think predictions are disempowering.
Renee DiResta: Fair enough.
Glen Weyl: Uncertainty is empowering. You know, I think it's a moment for us to steer things. And to together make that change and to focus on it. So I don't, I don't know, I, I, there's a lot of bad outcomes, I'd be happy to talk about, and there's a lot of great ones. And I think it's our chance to seize the reins.
Ravi Iyer: I mean, I, I am actually more optimistic. I think that we all walk around with our phones that we have, you know, complex relationships with, you know. If you ask people, most people, including kids, want to use their phones less, and we have all these apps that are trying to get us to use them more. And I think that there's just too much energy in the system, too many people who are unsatisfied with the status quo for nothing to change.
I do think that the moderation paradigm has somewhat held us back here, where, you know, I think you get into never-ending wars about what people should and should not be allowed to say online, and I think the design paradigm is taking hold. There are more and more people thinking about how these platforms are designed, how do we give people choice. The Digital Choice Act recently passed in Utah to actually, like, force platforms to allow that choice across users.
So I think there's just too much energy in the system. And I talk to policy makers every day who are trying to make that change. And so, you know, maybe it's not going to happen immediately, but people are not happy.
Glen Weyl: And Ravi, you deserve congratulations for the wonderful work you did on that. So, thank you.
Renee DiResta: Well, I know we are at time, and I just want to thank all of you for joining me today to chat about this. I feel like, you know, we could do another entire hour on what's happening in Europe and Brazil, so we will have to actually do that at some point.
But thanks so much for talking about the papers and your work on the regulatory front, the academic and design front, and the free speech front. Really enjoyed the chat; looking forward to having you all back in the future.
Glen Weyl: Thanks to you all.
Ravi Iyer: Thanks for having me.
Renee DiResta: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org slash support. You'll also get access to special events and other content available only to our supporters.
Please look out for our other podcasts, including Rational Security, Allies, The Aftermath and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine. Check out our written work at lawfaremedia.org.
This podcast is edited by Jen Patja and our audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.