
Lawfare Daily: Democratic Backsliding and the Role of Technology

Quinta Jurecic, Joseph Cox, Orly Lobel, Aziz Huq, James Grimmelmann, Jen Patja
Wednesday, June 4, 2025, 7:00 AM
What is the connection between technology and democratic backsliding?

Published by The Lawfare Institute
in Cooperation With
Brookings

Political scientists who study democratic backsliding—the slow erosion of a country’s institutions—have raised alarms about the state of democracy in the United States under the second Trump administration. At the same time, the administration has embraced technology—particularly AI—as a tool for implementing many of its policies, from immigration enforcement to slashing government functions and staffing. And the ties between Washington, D.C. and Silicon Valley appear tighter than ever, with Elon Musk wielding unprecedented control over the executive branch through his quasi-governmental DOGE initiative. 

How should we understand the connection between technology and democratic backsliding? Are they interlinked at this moment in the United States? How has technology played a role in supporting or undermining democracy during other historical moments?

On May 2, Lawfare Senior Editor Quinta Jurecic moderated a panel discussion on these questions at Fordham Law School's Transatlantic AI and Law Institute, featuring panelists Joseph Cox, a journalist and co-founder of 404 Media; Orly Lobel, the Warren Distinguished Professor of Law and founding director of the Center for Employment and Labor Policy (CELP) at the University of San Diego; Aziz Huq, the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago Law School; and James Grimmelmann, the Tessler Family Professor of Digital and Information Law at Cornell Tech and Cornell Law School.

Thanks to Fordham for recording and sharing audio of the panel, and to Chinmayi Sharma and Olivier Sylvain of Fordham Law School for organizing the event.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.


Transcript

[Intro]

James Grimmelmann: One of the things that DOGE has done immediately is dismantle 18F and the United States Digital Service---the places in the government that were most in sync with current Silicon Valley best practices for software development and maintenance.

Quinta Jurecic: It's The Lawfare Podcast, I'm Senior Editor Quinta Jurecic. Today we're sharing audio of a panel discussion on democratic backsliding and the role of technology, which I recently moderated at Fordham Law School's Transatlantic AI and Law Institute.

Aziz Huq: So a system that creates enormous benefits can also be---again, it's the flipping around. The helping hand turns around, becomes the hand bearing the knife.

Quinta Jurecic: The panelists were Joseph Cox, a journalist and co-founder of 404 Media; Orly Lobel of the University of San Diego; Aziz Huq of the University of Chicago Law School; and James Grimmelmann of Cornell Tech and Cornell Law School.

Many thanks to the good people at Fordham for recording and sharing the audio and Chinmayi Sharma and Olivier Sylvain of Fordham Law School for organizing the event.

[Main podcast]

I want to start by digging into, of course, the title of the panel. We're talking about technology and democratic backsliding. Usually democratic backsliding is defined by political scientists as a sort of sustained period of erosion of democratic health in a polity, so rather than, you know, a single moment, a coup attempt, something like that, it's a long, slow decline, and there are plenty of conversations that can be had about whether the United States is experiencing such a thing right now.

But before we get into the meat of this, I want to just start by talking about how we understand these two concepts. What is the relationship between technology and democratic backsliding? I want to start by just going down the line. Orly, let's start with you.

Orly Lobel: Yeah, thanks. Well, thanks to the organizers and our lovely moderator. I think I understand it just like you were describing: we have a baseline, we have some understanding of what democracy and democratic rights and human rights in a polity are, and we experience not one moment but all kinds of chipping away---threats, plans, proposals, haltings of different programs---and you only kind of see the bigger picture, you see the woods, when you step back, or you see that extended period at some moment.

I think right now we are seeing it also in real time and with technology---we'll talk about this a lot more on the panel---but for me it's important to differentiate between the role of tech companies and tech billionaires, like personalities, and the role of technology itself. There are newfound capabilities emerging at the same time that a lot of other things are happening, and how do we discern, or stay more focused on, what really is contributing to the different shifts that we are seeing?

Quinta Jurecic: James?

James Grimmelmann: I think the most significant overlap that I can think of is the way in which the internet and internet media hollowed out news media around the world. While the internet initially seemed to offer access to a much wider variety of voices free of gatekeeping institutions---and in very large measure it has---one of the effects was to undercut the business models on which journalism and many other media are funded, and to do so in a way that effectively hollowed out a huge professional middle class of working journalists, undercutting the kind of democratic political conversations that could take place in a manner that produced shared consensus about the nature of the world and what was happening, and avoided some of the worst pathologies that we've seen. Instead, that's been captured by large tech companies that have used this to fund lots of innovation, but at great cost to the conversations on which democracy depends.

Aziz Huq: If one takes a formal definition of democracy that focuses upon institutions such as free and fair elections, the existence of liberal rights of speech and association, and the existence of bureaucratic bodies---election authorities, prosecutors---that operate free of political influence, what one finds, measuring those institutional traits, is that there has been a global decline in the quality of democracy since about 2010. The decline is not limited to one part of the world. It's observed on almost every continent.

In my own view, technology plays a peripheral as opposed to a central role in that decline. The deterioration, particularly in European and American democracies, was catalyzed by the perceived failure of extant party systems to respond, in particular, to economic crisis. Technology is part of that crisis; for example, in the United States, economists like David Autor have charted the effect of the migration of manufacturing jobs to China, and of automation in the manufacturing sector, on rates of discontent and disaffection with what used to be the dominant two-party system in the United States.

My own view is that the public sphere and the changes to the public sphere have a relatively peripheral role in that process. However, one question I think remains to be answered: once a process of democratic erosion has occurred---power has been centralized, typically in the executive branch---does the existence of certain kinds of technologies facilitate the consolidation and the persistence of non-democratic regimes that are not meaningfully responsive to the public?

Joseph Cox: Yeah, I guess my view is a little bit more narrow than some of the other speakers', even though I agree with everything everybody said. I focus a lot on how even smaller tech companies you've never heard of may undermine, you know, democratic norms, or what we consider parts of a well-functioning democracy.

So, you know, that would be stuff like location data companies selling the location of your mobile phone to law enforcement without a warrant---something that a lot of people would say is probably not really compatible with a fully functioning democracy. And then in the current context, that extends to stuff I'm sure we'll get into, but stuff like Palantir and how they are building basically the technical infrastructure to deport people, potentially with no due process. At least that's what it looks like at the moment.

Quinta Jurecic: When we started talking to prepare for this panel, James, you brought up what I think is a really interesting point about the historical comparisons to draw on here. And I'm going to go ahead and invoke Godwin's Law and say that I think there are some interesting comparisons to be made---not only about the role of technology in potentially allowing the consolidation of backsliding or authoritarian regimes today, but looking backward to the 1930s and 40s---and I know you had some thoughts on that.

James Grimmelmann: I mean, one of the original prompts for this panel, back in more innocent times when words had different meanings, was about the role of collaboration between technology companies and government. And now, as I started to prepare for this panel, that word appeared in a very different cast.

So I thought more about the way in which large technology companies affirmatively collaborated with and were implicated in the Nazi state. And you have things here that are both information technologies: IBM's role in the information processing that led to the Holocaust and was used to carry it out was absolutely part of that.

And we see that again today: in order to carry out actions directed at people, a state needs a strong capacity to surveil and then to assemble and act upon that information. And that is exactly the role that Palantir is affirmatively volunteering for. They are putting up the sign, in effect: You want to commit human rights crimes against people? We are the ones who will help you do it.

I'm also thinking in broader terms about the way in which industrial firms---like today's technology titans---decided that they were going to cast their lot in with the Nazi state in order to take the contracting and rebuilding that it would provide. They saw themselves as having cut off the threat of an attack from the left---that we could live under this regime.

And in many ways, that turned out to be an incredibly bad bet for them. First of all, it was an incredibly corrupt state, so that high Nazi officials were brought in as partners and wound up taking enormous shares in the businesses. The ones who got rich were the insiders, the Görings and the second level. And the regime led them into a destructive war that ultimately destroyed everything they were trying to build---a strong state and a healthy German country---and in fact led to the deaths of many millions and the complete destruction of the industry they'd spent their lives building. And then many of them wound up in the dock at Nuremberg for the crimes that they volunteered their companies to help carry out.

So it seems to me, in many ways, the technology titans who go to the inauguration and ask how they can serve are signing themselves up for something very similar.

Quinta Jurecic: Aziz, I wanted to turn to you here because you've written recently about the concept of the dual state, which I think speaks to some of the dynamics that James is touching on in terms of the corruption of the Nazi state and the kind of difficulty of doing business with it.

Aziz Huq: I think this is a point that is complementary to the one that James made. The idea of a dual state is associated with a German Jewish lawyer called Ernst Fraenkel. He was a remarkable man. He managed to survive and practice as a lawyer in Germany, notwithstanding being Jewish, up to 1938, largely representing people on the left; he was a member of the Socialist Party.

And out of his experience representing clients before the German courts, he developed a theory of how power was exercised in the early stages---it's important to say, the early stages---of the Nazi regime, before he leaves, two weeks before Kristallnacht. And the account that he gives focuses upon the idea of a dual state.

What does he mean by that? What he means is that the state operates with, you might think, two hats. With one hat, it operates in the normal, ordinary forms and processes of law. And with its other hat, it operates in a lawless, caprice-driven, and cruel way.

And the key insight that Fraenkel had was that unlike, say, the Jim Crow state, unlike the apartheid state of South Africa, in the dual state of 1930s Germany there was an official, someone who pulled a switch and decided whether you were in the lawful state or whether you were in the lawless state. Fraenkel indeed called himself, in his legal practice, a switchman, because his job was to make sure that his clients remained within the lawful state no matter how bad the punishment they'd receive---because what was far worse was being in a state where law didn't apply.

To my mind, we are at a moment where something like the dual state is emerging in the practices of the present administration, and I think there are a couple of ways in which you can see a switch being pulled, moving someone---or an entity---from the lawful state to the lawless state.

I think one of those switches has absolutely nothing to do with technology, and the other one is facilitated by technology. The one that's not facilitated by technology is simply the confident assertion that the law does not apply to us. Think about the case of Abrego Garcia.

On the other hand, the other form of switching is when the government has a relationship with you---a relationship which is collaborative and which is suddenly and unexpectedly turned from a form of support into a knife. And the weaponization of cooperation---which universities in particular have experienced in very, very sharp ways, including my own, and I'm sure Fordham has too---is facilitated by technology. It is enabled by mechanization, automation, and the kind of surveillance and inference tools that technology provides. It may not be the most important form of exercise of control or power; it may not be that you couldn't have that kind of switching mechanism occur without it; but the technology dramatically lowers the price of using that switch.

Quinta Jurecic: And what kind of technology do you have in mind there? Because I can envision, you know, technology in the broader sense of the ecosystem through which universities collaborate with the government---collaborate in one sense rather than the other---to develop scientific research and technological progress, but also technology in the sense of the ability to flip the switch and prevent grants from being disbursed.

Aziz Huq: Well, I know that James is going to talk about Palantir, so I'm going to be very careful not to tread on your, tread on--

James Grimmelmann: My entire thing,

Aziz Huq: Tread on your toes. Your toes have Palantir written all over them.

But I think that the cancellation of student visas is a rich example. We don't exactly know the technological tools that are being used to single out students who are present in the United States on F visas or J visas for termination. The termination is not being done through any kind of agency adjudicative process; there's not been any judicial process. There have been ex post challenges, and the ex post challenges have recently been successful.

But, for example, the University of Chicago has a few thousand non-citizen grad students. I have lots of non-citizen LLM students in my classes. And out of those thousands, seven people were identified and had their visas terminated without notice to them. That is the exercise of some kind of automated discretion through which individuals are identified, in all likelihood through the assimilation or aggregation of multiple databases, potentially including social media.

So that is an example of the kind of technology. And I would just flag---this is a sort of simple law and econ point---that if you lower the cost of any activity, you dramatically increase the amount of that activity that will occur. So you lower the cost of exercising that kind of discretion, and you're gonna see a lot more of it occurring.

James Grimmelmann: The flip side of that is that if you raise the cost of implementing something, you can make it much harder---which is exactly one of the reasons why DOGE has gone for the databases in every agency. Some of it is for surveillance, but a lot of it is so that they can pull individual records or destroy the records, so that it becomes impossible to reboot the agency's function once they've gone.

Quinta Jurecic: Now that we've had a bit of foreshadowing, Joseph, I want to go to you to talk about your work on Palantir.

Joseph Cox: Sure. And first, just on the point you made about how something can switch and become a knife---I think that is basically the key takeaway from all of these individual stories that you are reading about. There's the DOGE stuff, or a master database; there's the Palantir stuff; there's some other ICE stuff I'll talk about as well. But the idea is that immigrants or citizens or really anybody have given data to various government agencies, and now the context has shifted under their feet, and that data can be weaponized against them in a different way.

Probably the clearest example is that the IRS is now going to provide data basically to ICE, severing that very important firewall. I obtained a document describing a tool currently in use by ICE called Alien Tracker, or A-Track, and that has FBI and DEA data---normal law enforcement agencies---but they are already saying they plan to integrate Department of Labor data, Housing and Urban Development, Health and Human Services as well---agencies that traditionally have nothing to do with immigration enforcement.

So, to get to the previous question: the technology there is linking all of these databases together, but almost more important than the technology itself is the underlying data. You know, 'cause Palantir, to move onto that, doesn't generate its own data; it's not really getting data of its own, it's just joining a bunch of other stuff together.

And, you know, sources I have in the banking industry and various other ones say they think it's okay. They're not particularly blown away by it---it's a resourceful tool---but here it's so much more powerful because it's gonna have real world consequences.

So just to very briefly say where we are with Palantir right now---and again, as you said, we don't know everything about how people are being targeted, and that's basically still true even with all this reporting that's coming out---but, you know, I reported that Palantir got another $30 million to improve targeting for ICE, and it turned out that was a tool called ImmigrationOS.

I then got some internal Palantir documents where they sort of lay out the justification for it. And I think the parallel with, you know, the Holocaust and what IBM did is absolutely apt. I've been rereading that book, "IBM and the Holocaust," as I continue to report on Palantir, and it is very, very fascinating when you read the 2,000 words of their justification, where they say: we believe we are doing the right thing, 'cause we are actually going to make immigration enforcement fairer and more efficient and more accountable and more transparent. Like, we are the good guys here, they're basically saying, by facilitating this.

And I guess you can't deny, when you're reading the thousands of words of justification, that they have thought about it---but they've still reached this conclusion where they're going to provide the technical infrastructure for these things. And of course, people are still being deported without due process as well.

So the Palantir involvement is sort of very, very new, as in it really started in March and April, at least this latest iteration. I don't think we've even seen the impact of it yet. All of these other deportations, all of these people being picked up off the street---I don't necessarily think that's to do with Palantir yet, but I think we're going to see that pretty soon; you know, we're going to see the fruits of what they've been developing.

Quinta Jurecic: One thing that I think this gets to is how effective this technology is. Often when we think about technological dystopias---you know, Skynet, things like that---the anxiety is about a system that is extremely effective, efficient, accurate, and the terror is that it's able to pick out specific people accurately.

On the other hand, I think a lot of what we have seen in recent months reflects a very different kind of anxiety---an anxiety about technology that works poorly, and a state, in fact, that is in many ways functioning poorly. So I think the examples of student visas and of Kilmar Abrego Garcia---although that's less technologically focused---are good ones. Abrego Garcia has become such an important case precisely because he is in El Salvador by mistake.

In the cases of these student visas that have been deleted from the SEVIS system, as you said, it's often not clear why people's information has been deleted. Sometimes it can be traced to a minor interaction---you know, a traffic ticket, something like that---but sometimes it's completely unclear. And so there's an element of randomness that actually makes me think more than anything of the Terry Gilliam film "Brazil," where the inciting incident is a typo that leads to this cascade in an authoritarian state.

And so I'm curious for all of your thoughts on how conceptualizing technology used for democratic backsliding as efficient and effective, versus inefficient and sloppy, should affect our understanding of the relationship between technology and consolidating authoritarianism.

Orly Lobel: Yeah, I think that with all our debates about technology, and specifically with the rise of AI, we have both of those conversations going on, and sometimes we conflate them. There is the fear of technology being too good, too perfect---so powerful and quick and accurate that we're losing control over it and it knows everything---and at the same time, you know, so much of the talk about AI bias and algorithmic harms is about the inaccuracies, the mistakes, the kind of not being ready yet for deployment.

And for me---and this goes to a lot of it; we just heard a lot of really scary and concerning stuff---I keep thinking, you know, what is the best that we can do? We being the people in this room: attorneys, law professors, future lawyers. First of all, there's that metaphor of a switch. We heard a lot about that lever or switch moving us to lawlessness, but it can also move us to the law side---and we now have these powerful tools that are becoming more available to the public, to journalists, to activists to use.

And for me it's always been very important, when I think about technology---both wrongs and, you know, potential and opportunities---to ask it in a comparative way: compared to what? So, you know, we heard James talk about this really low-level technology of IBM, basically, you know, digitized lists. It wasn't the powerful technologies that we have today; it was just kind of recording things. Now we have much more capable technology, and yet some of the descriptions of what Palantir is doing and what Aziz was describing seem to me to be just really using data, not using technology to improve decision making.

So for me, the question is, you know, what are we comparing? What is the alternative? How do we stay informed about what is happening? And we should be thinking all the time about the potential of the kind of digital trail that technology is enabling to bring us back to that law part---to the oversight, to the transparency, to really exposing things that are happening.

James Grimmelmann: So I would want to point out that in many ways today, the irrationality and the chaos are the point. The ways in which the Trump administration is using technology actually deliberately turn their back on the capability to use it with effective controls that achieve results that are reliable and predictable.

One of the things that DOGE did immediately was dismantle 18F and the United States Digital Service, the places in the government that were most in sync with current Silicon Valley best practices for software development and maintenance. Instead, they've implemented a largely chaotic process of individual people parachuting in and making changes to production servers, largely arbitrarily, doing things without any of the version control, without any of the code reviews or development processes that we associate with high-quality software and data science in the technology companies at the bleeding edge.

And that's in fact because this is a movement that sees rationality and organization and bureaucracy as very much the enemy. And since those are the tools of the state that they're trying to root out, those are the things they're attempting to destroy---and to replace with what would be a more personalist regime, one in which that switch gets flipped and somebody can make the database entry by themselves, directly, without going through checks and balances or controls. And if that means you deploy straight to the production server, that's what you do.

Aziz Huq: I think that Quinta's question is terrific, and I would point to particular effects that are achieved through automation, even in the absence of accuracy, that I think are salient in moments of democratic backsliding or transition. The first, which I've already alluded to, is scale, and the second is internalization.

So scale is simply being able to perform an activity at an exponentially higher rate than you would otherwise have been able to with the resources that you have. So regardless of your error rate with respect to deportations, if you can deport millions as opposed to thousands or tens of thousands---from a certain perspective, that is a policy success. Without regard to the error rates, you're deporting far more people, accurately or inaccurately, than you otherwise would be. So scale can be an objective in and of itself.

Scale I think is also important because scale---which is famously the first principle of Silicon Valley economics, right; the aim is to identify the technology, the killer app, that will scale up and push other affordances from the market---is conducive to a certain kind of political economy, a certain relationship between the state and the private sector that is well fitted to the companies that have developed out of Silicon Valley in particular. And so that emphasis on scale is gonna lead to certain kinds of relationships between the government and private actors. So that's the first thing that I would point out.

The second thing that I would point out is internalization. If you're able to scale something up, if you're able to induce in people a belief that you are able to know whether they are, for example, thinking about getting reproductive care, or whether they are a parent whose child has asked for medical care with respect to the possibility of a transition---you will behave differently if you believe that your behavior can be inferred from other things, whether or not that inference is accurate. So the fact of scaling up, regardless of whether it is done well or badly, can induce what we lawyers like to call a chill, far beyond what the state would otherwise be capable of reaching.

There's an interesting parallel to China's social credit system, which I think is not well understood---and I will concede that I do not fully understand it. But the best reporting and the best academic coverage that I've seen is by Shazeda Ahmed, who is affiliated with Stanford, and Ahmed's work suggests that the social credit scheme, which is demonized in the West---for example, it's one of the verboten uses of AI under the AI Act in Europe, I think because of China---is just a bunch of cobbled-together databases that work well sometimes and don't work well at other times. She might be right about that, but it might also be the kind of system that generates compliance and anticipation even if there are very, very high error rates. So scale and internalization, I think, are effects that you might see even with really high error rates.

Joseph Cox: Just very, very briefly, and I hope I'm not repeating myself too much, but on that: on one side, let's say the data's really bad and it leads to, you know, more people being mistakenly deported. That's awful, that's bad. And let's say---I don't know, I'm just going to say---it's hundreds or whatever. And then Palantir or whoever comes in and they say, we're going to improve the data quality, we're going to clean it, we're going to match it together; all of a sudden the data's a lot better.

And now you are deporting hundreds of thousands, millions. It's that: oh, well, I thought you were going to fix it. There isn't a good solution either way. Essentially, when the data's bad, they're going to make mistakes; when the tools are really good, it's going to have, as you say, impacts of scale, which are going to be much, much larger, you know.

And the private sector is crucial to that, 'cause when you read some of these documents, they say that HSI and ICE were unable to build something like this by themselves, because there is a massive sense of urgency inside ICE right now to execute on these executive orders. They can't cobble this together. For worse, obviously, they need a DOGE, they need a Palantir, to be able to execute on these visions.

Quinta Jurecic: That goes to something that I think is really important, again, which is this issue of state capacity. The visions of government activity enabled by this sort of imagined enormous data set---which is in fact cobbled together from many different sources---are, especially in the area of immigration, actually quite difficult for the federal government to carry out, because there simply aren't enough people and there's also not enough data. And so there is a need to engage with the private sector, as you say, at the same time as DOGE within the federal government is actively destroying government capacity.

And I think this goes, Aziz, to your point about the political economy of all of this. One thing that I have been really struck by is that during the first Trump administration, many arguments were had---I was part of many of them---about to what extent it could fairly be characterized as authoritarian. I think it is fair to say that whatever the answer was to those questions, the second administration is certainly rocketing in that direction with much greater speed than the first one ever did, and part of that is enabled by and driven by these interactions with technology companies.

And it is not obvious to me which is the chicken and which is the egg. There is an extent, as we've been discussing, to which the engagement with technology companies---the idea of technology, AI in particular, as a force that will enable these projects---is certainly allowing some attempts at carrying out this vision. But there's also been plenty of reporting on the deepening ties between the Trump camp and certain figures in Silicon Valley, most notably, of course, Elon Musk, who themselves are increasingly moving in an authoritarian direction.

And so I am curious for all of your thoughts on how we should understand that relationship. Is it purely economic? To what extent is it a sort of ideological, anti-democratic commitment, and which is the chicken and which is the egg?

Joseph Cox: I think, just briefly on the Silicon Valley stuff, it's sort of a mix, right, where the Trump administration will have a vision for what they want to do, and then companies will come out of the woodwork---or maybe they'll already be very prominent---and they will try to execute on it.

But then you also just have the reality that these people are billionaire, multi-billionaire CEOs. They give a million dollars each to the inauguration fund, right; they all turn up to the inauguration, presumably to save their own skin in some sort of capacity. You know, Meta facing various actions against it; Amazon trying to maintain its business; Elon Musk trying, it seems, to be loved---I'm not exactly sure what all that's about, that side.

And now that's all blowing up spectacularly in their faces, especially when it comes to Mark Zuckerberg---and then of course Amazon, for a brief moment, potentially putting the cost of tariffs on display to the consumer. I know it's a little bit hazy exactly what they were going to do with it, but the reporting, it seems in Puck, is that Trump then called up Bezos and was really, really mad about it.

To answer your question, I think it's a mix. A lot of it is the administration wants to execute on something, and then companies deliver it; but Silicon Valley also comes out 'cause they have their own motivations. And of course they're businesses, you know; they want to maintain whatever will keep themselves, their shareholders, and their companies able to do this Silicon Valley growth. They want to keep growing during this administration.

And a lot of these CEOs and other companies were hoping for an accelerationist president, which is very much a term used in AI. And we published a piece recently saying they voted for that with their probably literal votes and their money, and what they got was a "decel" president instead---a derogatory term in the AI community for somebody who's driving tech backwards. And Trump is absolutely that. He's actually awful for the tech companies, at least in some ways.

James Grimmelmann: Yeah. I think I'm gonna say two things, or two slogans, that can help in understanding this relationship. The first is that the Trump administration---and Trump---is a they, not an it. There's not a singular, unified agenda being pursued here. There are a bunch of individual goals that align in some ways and are in deep tension in others.

So Trump himself deeply wants tariffs, and he's under the influence of Peter Navarro, who makes the case for very high tariffs; there's a part of the base that really wants this and many others who do not. Trump wants very strong immigration enforcement; many, many people in the Trump camp want this; the tech industry is much more split on it. You can go down the list of policies---the size of government, government spending, the role of the military, tax policy---and there is not a master list, drawn up and negotiated over, that has been conceived as: these are our policy goals. It is a bunch of things being done simultaneously and chaotically.

And so tech companies are trying to sign up for parts of this even as they're horrified by others. There's an absolute authoritarian slant to many of them, and that's driven by both political and ideological goals. But I suspect that many in the tech world have been blindsided to realize that they're not negotiating or trying to partner with a government that has a coherent, thought-through policy; it is instead a bunch of conflicting influences---it's cats in a bag fighting---and they've been surprised at which ones had the sharpest claws.

On top of that, I would also say John Ganz's theory of bossism is very helpful for understanding the relationship of the people in Silicon Valley, like Musk and Andreessen, who are very strong Trump supporters. Essentially, they want a world in which their power as titans of industry is recognized and is unchecked---by government regulation, by activists claiming what they're doing is bad, by employee unions, or by other things that deny them the cultural centrality and the power that they feel their success in making money entitles them to. And so they want the strong state to help them crush their opposition, and they will sign up for that. And in fact, there's an enormous indifference as to what the government does beyond crushing competing centers of power.

Aziz Huq: I might add something that complements what James just said by pointing to the diversity of ideological positions in Silicon Valley itself.

So, for example, very much consistent with what Joe said: I had Karen Hao, who's a journalist for The Atlantic and for MIT Tech Review, in my class earlier this week, and I asked her about Sam Altman---she's got a book coming out about OpenAI---and about what his incentives are at this moment. And her view was very much that Sam Altman is just a very, very wealthy venture capitalist who wants to do well under whatever political circumstances obtain. So that's one possibility.

And that's a very familiar type, going back to James's identification of the relationship between industrialists in 1930s Germany and the Nazi regime. Indeed, I was preparing for this discussion and read an essay by Hannah Arendt on personal responsibility under dictatorship, and she identifies this go-along-to-get-along attitude.

But that's not to say that there are not genuine ideologues within Silicon Valley. Musk, for example---I think it's now reasonably well documented---was shaped by the fact that he grew up in apartheid-era South Africa. He's the child of a man who was deeply committed to the apartheid project. Quinn Slobodian, the historian, has a book that's forthcoming---maybe it's already out, I haven't read it---on the influence of ideas about race science in Silicon Valley and on the neoliberal right in particular. In general, that's one constellation of ideas that I think is influential, and you do see it playing out, for example, through DOGE.

A different constellation of ideas is the one around Peter Thiel, who has a set of ideas about the trajectory of the United States as a superpower and its inevitable decline as it hits certain limits. And at least one theory of Thiel's beliefs is that he wants to accelerate---not accelerate the technology, but accelerate the process of national combustion, such that something better arises from the ash. This is the kind of Leninist theory of political development: you accelerate the contradictions of the moment, the moment explodes, and then you're able to build something new.

And so, for example, one might explain the fact that the tariff policy adopted at the beginning of April not only led to---I think at one point---a 17 or 18% decline in equities, but also to a sharp increase in yields. That was a quite astonishing---and if not historically unprecedented, then certainly surprising in light of recent history---challenge to the dollar's status as the global reserve currency and the currency of account used for transnational transactions, right? That came under pressure. And there is at least a theory that collapsing the dollar, driving the dollar out from its status as reserve currency, is just the quickest and most expedient way to destroy the United States as a superpower---which is probably true---and build something better in its place.

And there was a moment in early April where it did seem possible that that would happen, which is quite extraordinary. I think that is one of the most extraordinary things that people have not quite gotten their heads around. We almost drove the United States---not we personally; I disclaim responsibility, I was not betting against the dollar or anything---but there was a moment where the United States almost got driven into the ground as a global superpower by losing its principal instrument of financial and monetary control, right? How on earth does that happen? Well, the tech industry might be one explanation, at least if you start to disaggregate it by ideology.

James Grimmelmann: This reminds me also of the attitude that many evangelical Christians have towards Trump, which is that, well, of course he is the sinful secular ruler. He's not himself one of us, but his rise is necessary as part of what's foretold in Revelation.

Orly Lobel: Well, I'll just add to the conversation the role of the kind of lifelong public servants who are in government. And I think the fact that we have so much chaos and randomness, and no real policy, is a source of strength to the agencies themselves.

I think we saw some evidence of that from the first Trump administration, where different workflows and compliance and enforcement paths and processes were really kind of kept in place because there wasn't enough of that clear guidance.

And there again, I think that we can think about technology as both a source of fear and an opportunity. When we have such scarcity and cutting of the budget, inevitably we need to think about scaling---in the way Aziz was talking about scale. Well, scale is also an enabler when you have environmental enforcement that has to be done, when you have welfare and benefits that need to be delivered. We're cutting so much from research and FDA approval processes and all of these different things that we care about; it actually will have to be rebuilt.

We talked earlier this morning in our workshops about procurement, about the role of introducing AI and inevitably partnering with the private sector to move forward. And we should really think about those heroes who stay in government---because, you know, so many are quitting, but it's so important to actually point to those who are staying, despite all odds, despite all these challenges and discontent.

Quinta Jurecic: Let me try to draw on that and maybe think about what it would take to move us in a slightly more hopeful direction. I think the discussion we're having now is particularly striking when you compare it to the discussions that we might have been having, you know, 10, 15 years ago about democracy and technology. If you said those two words together in 2005, 2010, I think they would conjure a very different set of images. Twitter in particular, of course, is often credited with a role in the rise and influence of the Arab Spring. There's an idea of technology as something that can be a democratizing force, something that takes down dictators and enables collective action in a democratic space.

So my two questions are, first off, what changed? And second off, is there any way that we can think about the role of technology and democracy together that might allow us to kind of reconfigure things into like a more optimistic path? Is there, is there somewhere else that we can go?

James Grimmelmann: Let me give an optimistic take---or at least not a completely bleak one. In the same way that public opinion drifts back and forth between extremes---what political scientists call the thermostatic model, in which, as one party gains dominance and does unpopular things, people start to say, hey, we liked the old way better; we didn't appreciate it at the time, but this is even worse---there may be something somewhat similar in the realm of technology and its relationship to politics. In the same way that technologies will often leapfrog---a country that develops extensive wired internet infrastructure will be slower to develop wireless than one that never had that strong infrastructure and went mobile-first---when something is successful, it's established and is much harder to displace.

You could see that maybe there's something like that in the realm of politics and social media: you have the 2010s association between the social media technologies of the day, and how they were used, and the left and popular movements there. And it may simply be that the natural cycle of new networks and new technologies means they're taken advantage of by different political actors because they are untaken territory---spaces into which they can expand that aren't yet associated with their adversaries.

And so something like blogs are left-coded, and Twitter becomes, over time, right-coded. Podcasts today are seen as more right-coded, but maybe TikTok will be left-coded. It's simply the natural tendency of people to follow the fads: new things become associated with different political tendencies, and sitting here 10 years from now, we'll be asking what changed from 2025.

Joseph Cox: I don't have an optimistic thing at, at the end.

Quinta Jurecic: I was trying so hard.

Joseph Cox: Yeah, definitely not the right person for that.

But just from the journalist's perspective on what changed: back then, in the mid-2000s, late 2000s, tech coverage was all very positive about these tech companies---all these startups coming up, all this money being thrown around.

And then, you know, soon that coverage starts to change when it turns out a lot of these companies are sort of full of it. Oh, they are actually doing some bad things. Some of these startups are failing. And a lot of the people at the head of these companies were not used to that sort of coverage, and they had no idea what to do or how to react when coverage did start to be more adversarial.

And then you fast forward to now, or a year or two ago, and you have those same sorts of people---especially Mark Zuckerberg---where they're kind of like, well, screw you. And they do a 180 and say: I'm drawing away from that and I'm just going to be myself, whatever that is; I'm gonna go rogue and do this. I'm going to just not engage as much with legacy or traditional media, because they don't get me and they don't respect what I'm doing, or whatever.

So it wasn't just a one-sided flip---the media turned more adversarial and skeptical, and then the leaders also turned and leaned into their positions as well.

Aziz Huq: I'd like to pick up on something that Orly said about the possibilities for beneficial uses. I think that the wave of coverage of technology in the early 2000s was focused upon technologies in the communication sector, but I would point to technologies in the banking sector as an area in which, in fact, enormous public-facing goods have been created.

So, for example, Brazil, which is a country with a reasonably high degree of still quite extreme poverty, in the last five years has rolled out a central bank-run digital payment system called Pix, and Pix is now responsible for 45% of transactions conducted in Brazil. And you don't need a bank account, which many, many people, especially the impoverished, lack in Brazil; all you need is a mobile phone. The rollout of Pix eliminates the transaction fees charged by intermediaries such as commercial banks; it eliminates the physical and security risks that come from storing value in the form of hard currency in your home or carrying it around with you; it reduces the incentives to conduct certain kinds of assaults and robberies; and it surely, massively increases human welfare. That is a story of technology being scaled up in exactly the way that Orly described, so as to facilitate the flourishing of human beings.

And you have similar systems in other, what used to be called developing, countries. I believe Mexico and India have parallel systems; I don't know if they're as successful as Brazil's. But the point that I would make about that possibility---which I think is real and important---is that the fact of raising people out of poverty through the provision of these technologies strikes me as just an unqualified human good.

But the fact is that the Pix system means that the Brazilian government---the central bank---has an enormous amount of data about individuals. To see how that data might be misused, look at the Indian electronic identification system called Aadhaar. Aadhaar has been rolled out over the last five or 10 years under the BJP government in India. It facilitates access to welfare benefits, and it enables people to confirm their identity in order to engage in transactions like family law matters---marriage, divorce, etc. It surely has, in the aggregate, increased human welfare and eliminated forms of human suffering in really important ways.

But it's also been the platform for the BJP's effort, particularly in the northeast of India, to strip non-Hindus---Christians and Muslims---of their citizenship, to exclude them from the polity, and to render life impossible for them by cutting them off from the economic system. So a system that creates enormous benefits can also be---again, it's the flipping around: the helping hand turns around, becomes the hand bearing the knife, right? It's the same thing.

And I think that a question that I don't have an answer to, but that I think is really important for the lawyers---I'm sure the technologists might say, yes, we need to talk to them too---but it's really a law question: what's the legal framework that prevents that flipping of the helping hand into the knife?

Orly Lobel: Just to pick up on that: I just got back from Bhutan, and, you know, that's a tiny kingdom with very few resources. I met with the minister of health there. They were talking about how they don't get even close to the World Health Organization's benchmark for physicians per capita, and it would just be really wildly problematic to ignore the fact that technology---and in particular what we see now with generative AI---can really tackle these challenges of access to health, access to education, access to finance and independence, to gender equality, and access to information and knowledge. And then, yes, we have this big fear that that also comes with great power of data collection.

So my take on that---as you were asking, what's the role for lawyers---is that we've been focused too much, in the policy proposals and reforms that I've seen in Europe, in California, in proposals before Congress, on these one-off ideas or frameworks of either collect or not collect: data minimization, you know, single-purpose limits. What I think will be more effective is really to recognize that it's about the uses, about the ways that we frame our purposes---what can be done, what is done by government and by the private sector as well.

So, for example, with reproductive rights: I really believe that we can figure out how to resist what's happening, and we see all these things that are coming. A lot of it comes out of the states, because we have such stagnation in the federal administration. We see how technology can connect, can create---can just lower the cost of creating---communication and access across state lines. There are also advancements in how we do abortion these days; actually, the majority of abortions now are shifting to pills rather than having to take place in hospitals. So those really give me hope. It's not a technology problem; rather, technology has to be part of the solution, and the problems are with, you know, the really bad intentions of certain people in power.

And the one other thing that I'll say about hope: I'm a dual citizen, and having that historical and comparative perspective, I do think that the system here still has a lot of checks and balances---federalism, the continuity of the agencies and the administration, the kind of lifetime civil servants I mentioned before, and just four years of a presidency---which also really gives me a lot of hope.

Quinta Jurecic: I want to make sure we leave time for audience questions, if anyone has any that they'd like to ask.

Audience member 1: Thank you for this wonderful panel. So, law schools welcomed Lexis and Westlaw with open arms when they were owned by legal publishers, but now that they're owned by multinational media companies---RELX Group and Thomson Reuters---that have data brokerage arms with contracts with ICE and other government agencies, should law schools alter our relationship with these legal research tools?

James Grimmelmann: Well, yes, but the law schools could be doing this anyway. The entire purpose is to lock people into becoming addicted to, and used to, their particular offerings, so that they will pay large amounts of money for access to legal information---the vast majority of which is in the public domain to start with and should be readily accessible to all, freely, online.

Audience member 2: Hi, Courtney Cox, Fordham Law School. So this has been really fantastic.

I want to pick up on a couple of themes that maybe began with Orly's comment about, when you say tech, what do we mean? Do we mean the faces of the tech companies? Do we mean the technology companies themselves? Or do we mean the actual products?

And I guess one of the things I find myself continuously wondering is: to what extent does using even this kind of phrase, technology, obscure not just those distinctions but meaningful distinctions between the tools themselves, in a way that enables some of the obfuscation about what's happening? Maybe we can't understand what it is when really it's just basic database scraping, or it's something where even those at the forefront of it know that the thing works, but not why the thing works.

And so I guess I was interested in hearing the panel's view as to whether lumping it all together that way obscures what's really going on democratically, which I think Aziz was starting to pick up on.

Joseph Cox: I agree that lumping them all together is not useful, and it's very easy to do---I think journalists will do that, academics as well, and we all do it colloquially in conversation.

And that's sort of why I admittedly have a more narrow perspective than the rest of the panelists, who have this amazing historical and broad perspective. Whereas I'm just like: literally, what database is going into this ICE tool? 'Cause I just want to know those specifics, right. And I think that's why a panel like this is so useful, because you can combine those together. I think the specifics are just as important as the broader perspective.

So I agree. I think people should definitely be more particular when they talk about this issue, so these things can actually be identified, you know.

Aziz Huq: I guess I would push back just a little bit, in the sense that to frame and to understand the world, we need categories, and categories are often simplifying and blur or miss things. And so I would be, I think, forgiving both of ourselves and of others if we use a category like tech. Perhaps the way to think about it, considering your critique, is as a portal that opens up on a suite of issues. Maybe one way of productively working with what you've thought through is to ask: once one has that framing device, how does it cast light on our present moment?

And I'll give you one example. There's a line of political science that looks at what happens when states obtain new capabilities, and what's thought of as a capability is what's thought of at that moment as a technology. The most important of these technologies historically is writing. We don't think of writing as a technology---this is, I think, your point---but when newly sedentary societies developed the tool of writing, what was writing used for? Well, it was used for the tracking of commercial transactions, but it was also used by the state: to aggregate information, share information, and transmit information over time, which had previously not been feasible. I think you can learn a lot by thinking about how writing changed the state and comparing it to how, let's say, generative AI tools change the state, right?

And I think you're really correct, Courtney, that there's a way in which that comparison is obscured or lost if we let the term technology occlude our vision. But on the other hand, I think once you start to think flexibly with the word technology, perhaps it opens up that question, and you can start to think about comparisons that otherwise wouldn't have come to mind.

Orly Lobel: I'll just thank you for that because I've been very, very worried about that kind of lumping.

I think that it has this other effect that I want to make sure I'm describing, or kind of warning against, because I see it all the time. When we have a panel like this, when we're talking about all these really horrific, scary collusions between Silicon Valley tech moguls, tech conglomerates, and the administration that we are very concerned about, I believe that we're creating a vicious cycle where we are really creating these insiders and outsiders---the critics, all of us---and then especially the next generation, now going into the labor market and into the different industries, will just be dissuaded from being part of it.

And I think that it's really on all of us to have a very balanced conversation, to talk about all the benefits and potential and good, and to celebrate the really inventive and innovative things being developed right now that are lifesaving, that are creating welfare, so that people will have skin in the game and want to be in there. Because otherwise we're just amplifying, and we're really writing the result: a continuation on a very dark path.

And then one other distinction that I would add to this mix is that a lot of times---and I think it's purposeful, to the benefit of people like Sam Altman and others who are leading the conversations about this---we conflate the technology that we have with the technology that we think we may have in the future. And it's to the benefit of those who are in power and profiting to talk about that unknown technology---AI, artificial general intelligence, something we will lose control over---rather than talking about the sectoral automation algorithms that exist.

And I've become more and more convinced, from a legal perspective, that on the regulatory path we need to regulate sectorally, and not have these AI acts that talk so abstractly about risks that it really becomes meaningless, or legal scholarship that just says we need more transparency, we need more fairness, we need more accountability. Every article that's been published in the past five years calls for that. But if we're not talking about what the technology actually is, what the sector is, what this does, then we're not doing much.

Audience member 3: Paolo from Georgetown. Thank you for this cheerful and uplifting conversation.

So I'm gonna ask this in a way that tries to resist you just fighting the premise, although I think I'm gonna lose. I have long thought, and recently written, about part of the problem being the stories we tell about the word efficiency, and how supposedly efficiency is this one overarching goal that makes everything work, when in reality, every time there's efficiency for someone, it's by applying friction somewhere else.

And so I wonder if the association of Elon Musk with that word might finally give some space for a counter-narrative about deciding where the efficiency goes and where the friction goes, or the counter-efficiency. I want you to focus-group how I can sell this without fighting the premise about whether or not this is the solution---although feel free to fight the premise.

James Grimmelmann: When you meet somebody who says, but efficiency is good, then you can say: look at the Department of Government Efficiency; that is what efficiency is actually like in the world. It is a very conscious trade-off. It bears very little resemblance to this idealized, abstract notion of efficiency-that-is-good that you have in your head. Efficiency everywhere and all the time is what DOGE is delivering; we're actually fighting over the what and the how, not the whether.

Joseph Cox: I would just point to DOGE's catastrophic failure, even by its own metrics. What were they saying? They were gonna save $2 trillion, then $1 trillion, then $500 billion or something. I think it may have even gone down further at some point. Even by the way that they measure their own actions of being more efficient and cost-saving, they're clowns, essentially. So I would just point to the evidence of that, you know.

Aziz Huq: So Paolo, I think you've got two different things going on in your question. The first is that there's a trade happening on the word efficiency, which in most people's minds connotes the absence of unnecessary waste, being swapped for a much more economic idea of efficiency. And I think that we have to be careful when powerful people use words in ways that trade on such ambiguities. So I think that's one element of what you are identifying.

The other element, I think, is that you're pushing on an old debate between the---I have to say Posnerian; I'm contractually obligated, I lose my job if I don't say Posnerian. Sorry, this is such inside baseball; just ignore us. It's the debate between Posnerian efficiency and accounts of the social good that are redistributive in character.

And my own view is that the Posner version of efficiency, which goes back to Bentham, is a really good critique. It's a really good way of pouring acid on bad ideas, but it doesn't actually tell you what's good about the world. It doesn't actually tell you anything about what you ought to strive for. And even a moment's reflection upon the technical inputs into any claim about efficiency, understood in its welfarist form, tells you that, because a claim of efficiency without some account of what human or non-human goods are being pursued is utterly empty, right? Unless you have an account that tells you whether to count or not count, for example, the diminishing marginal utility of wealth---now I'm really in the weeds---it's just vacuous, right?

So I'm sympathetic in part, but I wouldn't lose sight of the fact that there is this use of the word efficiency, going back to Bentham, which is a stick, a vial of acid you pour on the nonsense that previous generations have encumbered you with. Remember, for example, that Bentham uses the stick of efficiency against the English criminal justice system in very, very effective ways, and I don't think we should lose track of that.

Audience member 4: Thanks. Catherine Powell, Fordham Law School, part-time with the Council on Foreign Relations digital and cyberspace program. I think Orly's remarks about sector-specific regulations in part anticipate my question, whether we're talking about old technology, like the vacuum cleaner, which both freed women up even while it shackled us with housework, or the Arab Spring, asserting rights in Tiananmen Square while governments surveilled and censored, or today the leapfrog technology examples that Aziz gave around digital currency. I'm just curious what the bright spots are---whether countries or regions or even states within the United States---that do a good job of highlighting and enhancing the positive face while restricting and regulating the downsides.

Orly Lobel: Yeah, Aziz, I think you mentioned financial regulation. There have been regulatory fields where we've had the introduction of algorithmic decision making and automation for decades now, like tax and the SEC. So I think this is a good time to respond that we just need more expertise in government to introduce those good systems. And there is a lot to that---I mean, I don't think of efficiency as a bad word, for sure. And bureaucracy is always strapped for resources, can always do more, and has always historically been in the race to catch up vis-a-vis the private market. So I think there are good examples.

The FDA has been doing much more. I think it was really slow in approving AI-based medical devices, diagnostics, and other systems, and just in the past three or four years it has accelerated that and developed more expertise in it. I truly believe that there are opportunities in every sector. Looking at education, I've documented really good systems of social robots coming out of MIT that have been specifically deployed in difficult schools or underfunded communities and have really been proven to help level the playing field.

Audience member 5: Hi, I'm Jane. I'm a student at Fordham Law. I have a question regarding previous conversations about the optimism of the early 2010s and social media back then, when it felt like a public forum and we were less aware of surveillance of our commentary on those platforms---or at least I was back then. And I think now we're acutely aware of the private ownership of all of our statements on those social media platforms as data that can be used and, in fact, given to the government.

I'd like to use the example of student protestors, say, who have been deported or otherwise taken into custody, and also the idea that these platforms are no longer necessarily places where free speech can take place, as we can see from the recent change of ownership at X. There are different rules for content moderation now than there were in the 2010s, I think.

And I'm just curious whether you all have thoughts about whether that's one of the reasons we see democratic backsliding today---the realization that this content is not necessarily ours and not necessarily just floating around on the internet with no context---and whether or not that's a huge factor in this backsliding and silencing of voices.

Orly Lobel: I'll just say that that's not some natural state of existence---you know, who owns the content. That's something that we have to continue to fight about. And the EU just introduced quite important regulations on content moderation and on user control over what you see, how content is prioritized on your feed, how data is preserved. So there are legal reforms that can be done. Just not accepting that story of the loss of control, the loss of ownership, is what we all need to be doing.

Joseph Cox: I agree with what you say, and it almost works the other way around as well, where you have all these very rich, powerful people---you know, Andreessen Horowitz, a16z, and all of the VC community, that sort of thing---and there was an amazing piece in Semafor recently from Ben Smith about how a lot of these people have migrated to these massive group chats where they can talk basically without criticism, except from one another.

So in the formulation of your question, yes, it's students on social media: maybe they'll police their own speech, maybe they'll be surveilled, maybe they'll be detained, or whatever. And then on the flip side, these powerful people don't feel comfortable expressing their own opinions in public now, for whatever reason---because they think they'll be canceled, or whatever perceived enemy they seem to have. And they are building these ideologies and these theories inside small echo chambers that are completely shut off to us. We can't see them either, because they're not on social media now; they're in these private groups.

Aziz Huq: I just want to pick up, Jane, on one piece of your question, which was the empirical connection with democratic backsliding. There are a number of studies that look at swings in vote share in elections where you've gone from a reasonably robust democracy to one that's palpably backsliding. Places like Venezuela, Hungary, and Poland, I think, are the leading examples; there's arguably an example about five years earlier in Turkey.

And the evidence from those studies does not support the inference that it is heavy social media users who are switching their votes to authoritarian parties. It tends to be older people; it tends to be people who are not heavily online, not using social media. And so at least the large body of evidence available through political science studies does not support the proposition that, at those pivot moments, social media is playing a role.

James Grimmelmann: Yeah, I think the two mechanisms you outlined are self-censorship through high visibility and censorship through the platform deprioritizing voices. I tend to think that those are probably not the most significant ones driving the way our political discourse plays out through social media. I suspect that a lot of the mechanisms look much more like what Joseph was describing, replicated across millions of WhatsApp groups. And so it's not just the billionaires; it is also the people in a neighborhood who are in some ways invisible to the rest of us, in a little social media bubble that is walled off, and they radicalize each other.

Quinta Jurecic: All right. This is our last question.

Audience member 6: I'm Sam Aler. I'm a student at Fordham Law.

Privatization seems to underlie a lot of the panel discussion here---the rise of privatized government. And so I'm curious: how do we go about deciding what we want to delegate to private entities? Is this something that, from a legal perspective, is done by focusing on due process concerns, or on some aspect of non-delegation? Is it something more normative, like the OMB circular that says government agencies shouldn't contract out core or inherently governmental functions, which is a bit of a looser guideline? How should we think about what we really want to be sweeping into a privatized administrative state?

Aziz Huq: I think Fordham gets immense kudos for having a student who mentions an OMB circular at this time on a Friday afternoon. Well done.

I think that I would, in some measure, resist the question in the following way. As constitutional lawyers in the United States, we tend to posit a strict and clear distinction between the state and private actors, right? And I know that there's a plethora of tests for state action, that nobody knows which test to apply when, and that the case law is a train wreck---I understand that---but in the back of our minds we have this clear and sharp distinction. It's different when you read the political scientists.

One of the points that comes out very, very early on in studies of the state, and of the way the state has developed over the 20th century, is that the state has never developed in some sort of splendid isolation from society. The state has always emerged through building relationships, through the exercise of what the political scientist and sociologist Michael Mann calls infrastructural power. And to my mind, it is much more useful to ask what the relationship is---the nexus, and the particular form that state-private interaction takes---in a given context.

And as an example of how that is generative, or at least opens the door to inquiry, I would suggest doing the thought experiment of thinking about, well, Brock and Courtney's point about whether the word tech means anything, right?

Think about the relationship between the private sector and the public sector with respect to technology in the United States, in Europe, and in China. In each of those places there is a very different relationship, but in each of those places you can't understand either national policymaking or the way that technology shapes society without looking at both private and public actors. And those public and private actors are in very different relationships, right? If you are Jack Ma, you're in a very different place from Elon Musk, right?

So to my mind, it's much more helpful to think about this embrace or entanglement between the public and the private, and then to ask: what exactly is the dynamic here? Who has the upper hand? Why do they have the upper hand? And how is that relationship going to unfold, given that sometimes it's the party, sometimes it's the state, sometimes it's a private actor that dominates? It's just going to vary, right? But thinking about those gestalts or complexes---I don't quite know what the right word would be---is, I think, more helpful than our traditional legal way of slicing the world into the public and the private.

Orly Lobel: Yes, but sometimes it is straightforward. I mean, sometimes it's a question about whether we have private hospitals or public hospitals, private prisons or public prisons. And there I would just remind us, again, that what we need to be doing is debating these questions not as a thought experiment, but based on fact and empirical studies. And again, to inject some optimism: AI is now really helping us do empirical studies better, more deeply, and more accurately.

James Grimmelmann: And I would slice through this in a slightly different way, which is that I think if the choice is between a very large state institution doing something at huge scale and a very large private company doing the same thing in the same way, you may already have lost. It's very important to have a diversity of institutions and places: not just one national communications network, but lots of ways to connect; not just one company building something, but many of them; not just one agency, but a whole huge variety of them, with different expertise, different sources of authority, and the ability to address specific things with local knowledge. That's a vibrant society, one that has all of these diversities of communities and sites of power.

Quinta Jurecic: Thank you all for your excellent questions, and thank you to our panelists for such an invigorating discussion.

The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter at our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts, and look out for our other podcasts, including Rational Security, Allies, The Aftermath, and Escalation, our latest Lawfare Presents podcast series about the war in Ukraine.  And check out our written work at lawfaremedia.org.

The podcast is edited by Jen Patja. Our theme song is from Alibi Music. As always, thanks for listening.


Quinta Jurecic is a fellow in Governance Studies at the Brookings Institution and a senior editor at Lawfare. She previously served as Lawfare's managing editor and as an editorial writer for the Washington Post.
Joseph Cox is an investigative journalist, cofounder of 404 Media, a digital media company, and author of "Dark Wire: The Incredible True Story of the Largest Sting Operation Ever."
Orly Lobel is a law professor at the University of San Diego and the author of "The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future."
Aziz Huq is the Frank and Bernice J. Greenberg Professor of Law at the University of Chicago; his book "The Collapse of Constitutional Remedies" will be published in December.
James Grimmelmann is the Tessler Family Professor of Digital and Information Law at Cornell Tech and Cornell Law School. He studies how laws regulating software affect freedom, wealth, and power. He helps lawyers and technologists understand each other, applying ideas from computer science to problems in law and vice versa.
Jen Patja is the editor of the Lawfare Podcast and Rational Security, and serves as Lawfare’s Director of Audience Engagement. Previously, she was Co-Executive Director of Virginia Civics and Deputy Director of the Center for the Constitution at James Madison's Montpelier, where she worked to deepen public understanding of constitutional democracy and inspire meaningful civic participation.
