Lawfare Daily: Former FCC Chair Tom Wheeler on AI Regulation
Published by The Lawfare Institute in Cooperation With the Brookings Institution
Former FCC Chair Tom Wheeler joins Lawfare Tarbell Fellow Kevin Frazier and Lawfare Senior Editor Alan Rozenshtein to discuss the latest developments in AI governance. Building off his book, “Techlash: Who Makes the Rules in the Digital Gilded Age?” Wheeler makes the case for a more agile approach to regulating AI and other emerging technology. This approach would likely require the creation of a new agency. Wheeler points out that current agencies lack the culture, structure, and personnel required to move at the speed of new technologies. He also explores the pros and cons of the Bipartisan Senate AI Working Group’s roadmap for AI policy. While Wheeler praises the collaboration that went into the roadmap, he acknowledges that it may lack sufficient focus on the spillover effects of more AI development and deployment.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Introduction]
Tom Wheeler: We don't know what's coming next with AI, but we do know that it's coming. And we need to be in a situation where we've got the tools to deal with it.
Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, a 2024 Tarbell Fellow with Lawfare and assistant professor at St. Thomas University College of Law, with my Lawfare colleague and University of Minnesota Law Professor, Alan Rozenshtein, and our guest, Tom Wheeler, visiting fellow at Brookings and the former chair of the Federal Communications Commission.
Tom Wheeler: The web, which created the ability to go out there and scrape all this data in order to feed large language models and create AI, is now being eaten by its offspring.
Kevin Frazier: Today we're talking about Tom's book, “TechLash,” and ongoing efforts by Congress to regulate AI.
[Main Podcast]
Kevin Frazier: Tom, your book, “TechLash,” came out in October of 2023, and in that book you argue that today is not the fourth industrial revolution. Instead, you call for a new era of public interest oversight that embraces a more agile regulatory ecosystem. Can you tell us a little bit more about this agile approach to regulation, and whether advances in AI have changed your thinking at all?
Tom Wheeler: Well, let me answer the last part of the question first: AI has not changed my thinking about the need for agility in regulatory oversight, but it has heightened the urgency of us getting there. What I posit in “TechLash,” the subtitle of which is “Who Makes the Rules in the Digital Gilded Age,” is an analogy between the Industrial Revolution and the Digital Revolution, and how in the Industrial Revolution it was the industrial barons who made the rules until the government decided to step in and become a countervailing force on behalf of the public interest.
And in the Digital Era, it's the digital barons who are making the rules. The difference being that the government has yet to really step in and say, no, this is what we need to do to counterbalance that power in the public interest. And to do that, we need to think outside of the box that was created during the Industrial Era. The assets of the Digital Era are different from the assets in the Industrial Era. The assets of the Industrial Era were things you could stub your toe on, right? I mean, a lump of coal or a barrel of oil or whatever the case may be. The assets of the Digital Era are all soft assets. They all behave differently and as a result, the marketplace behaves differently. The industrial economy was all about scope and scale production. It was a production pipeline, if you will, that relied on scope and scale at all steps along the way. The digital economy is a pairing economy, which behaves quite differently.
What is AI? AI is pairing a query with a whole lot of data that has been collected on the topic of that query to give you an answer. One day I had the privilege of participating with former Secretary Madeleine Albright in a session talking about new technology and its impact on diplomacy. And she said to me, the problem is that we take 21st century challenges and define them in 20th century terms and propose 19th century solutions. And I said, Madam Secretary, I'm stealing that, because that's exactly the situation that we are facing in technology. And here we have AI, a shape-shifting change for the economy and society, and we're still trying to figure out how to put that square digital peg into the round hole of the processes that we had in place in the industrial era.
One last point. The Senate, as you know, had an AI working group that Senator Chuck Schumer put together. And when they came out with their recommendation at the press conference, Senator Schumer said that, you know, regulating AI is so hard because it's changing so fast. And yes, it is. And it's particularly hard if you're trying to keep up with that kind of change with the sclerotic and rigid structures of 19th and 20th century marketplace oversight, rather than saying, okay, how do I build a new agile structure that is capable of dealing with these new challenges?
Alan Rozenshtein: I'd love for you to express this contrast in a bit more detail. What, in your view, is different about how we used to do regulation in the past and how we need to do it in a more digital age? Is it just a matter of the assets being non-corporeal today, where they used to be corporeal? Is it that technology is moving faster now than it did in the past? You just talked about a sclerotic regulatory environment, but I'm curious why you think that regulatory environment was sclerotic when there was an incredible technological change between the late 19th century and, let's say, 1960, perhaps in some sense even greater than we're experiencing today. So just say more about what you mean by this contrast.
Tom Wheeler: So regulation tends to look like your pet. Okay? When the government leaders got together and decided how we were going to regulate the industrial economy, they looked to the management techniques of the companies that they were seeking to oversee. And what were those management techniques? A rigid, top-down, rules-based structure. It was called scientific management. The guru of the time was Frederick W. Taylor, who wrote a book called “Scientific Management,” arguing that this is the way that all corporations should be overseen, and the core of it was to wring out all initiative. And I don't care whether you were the guy, and it was a guy, on the shop floor, or the supervisor making sure that those guys were following the rules, or the manager making sure that the supervisor was doing his job. The goal was to wring out initiative and dictate the way things happened. And so we ended up with a structure of government in which we dictated operations with the goal of achieving behavioral effects in the marketplace.
And I call that sclerotic because of a couple of effects. Number one, it tended to wait until there was serious abuse before it took action. Secondly, the process was itself a drawn-out process. And I agree with you, Alan, that an awful lot of change occurred in that period, but it did not occur at the pace at which it's occurring today. You know, there was a classic study done at one point by a researcher at Stanford finding a 40-year gap between the development of electricity and when electricity really started to hit the shop floor. I'm a guy who comes out of the telecom business. The statistic that I always found fascinating was that it took a hundred and twenty-five years after Alexander Graham Bell invented the telephone for there to be a billion people in the world connected to telephones. It took less than six years for a billion people to use the Android mobile operating system.
And so the challenge becomes: how do we create governmental structures that are agile enough to deal with these changes that we are experiencing? And the answer is to go back to looking like your pet. We need to have governmental structures that look like the digital companies they need to oversee. And what are the characteristics of those companies? One, they are risk-based; they're constantly looking around and saying, what are the new technology challenges? What are the new marketplace challenges? Two, they are agile in how they respond to those challenges quickly. And I think that we need to say, how do we build oversight that mirrors those kinds of concepts?
Alan Rozenshtein: Let me just ask you one more question about your diagnosis, and then we'll get into what we do about it. We're focusing today on AI, but I'm curious whether you think that the struggle regulators are having today with AI is part of a broader struggle that they've had with digital technology, with social media, or whether it's different in kind. In other words, should we view AI as a fundamentally new challenge to regulation, or another chapter in a challenge that we've really had ever since Congress passed Section 230 in the 1990s and, in some sense, washed its hands of a lot of the internet?
Tom Wheeler: I think Alan, that I tend to look at this situation in three parts. There are those things that we know are wrong and are clearly illegal, whether they are done by analog means, digital means, AI, whatever. Fraud is illegal. Discrimination is illegal. And we probably have sufficient statutory authority to deal with that, no matter how it is perpetrated. Lina Khan, the chair of the Federal Trade Commission, has a wonderful expression. She said that there is no AI exception in existing law.
The second level is the set of issues that we've been sweeping under the rug for the last 25 years, the issues that the digital revolution has brought us and that we have failed to deal with: privacy, competition, truth and trust. What are the societal expectations in those areas? We have, as a nation, failed to step up and deal with them. AI only makes those realities worse.
And then there's the third category, which is the unknown unknowns. We don't know what's coming next with AI, but we do know that it's coming. And we need to be in a situation where we've got the tools to deal with it. And I think that those tools are less about managing the technology per se; it's managing the effects of the technology, and what we ought to be concerned about is what those effects are.
Kevin Frazier: So Tom, when we get into the nitty gritty of thinking about which regulator can actually take on the shape of the pet we are trying to govern, I think you and I would both agree that if we look at the Administrative Procedure Act, it's going to render a lot of our agencies more like a pet sloth than a pet cheetah with the speed to catch up to AI, right? So if we think about the menu of regulators out there, we have states, we have federal agencies, or we have Sam Altman's proposal leaning more toward an international regulator. Which combination of these actors, or which specific regulator, do you think has the best odds of taking on the requisite shape, of becoming the pet we need?
Tom Wheeler: Well, you know, there's a piece of legislation that's been introduced in the Senate by Senators Michael Bennet of Colorado and Peter Welch of Vermont that attempts to create this new iteration of administrative procedure. Again, it goes to the point that we need to be worrying about what the effects are and managing to those effects. One of the things that's fascinating to me is that they have started to do this in Europe. The Digital Markets Act and the Digital Services Act set out and say, here are our behavioral expectations in the marketplace, and you companies have the responsibility to show us how you're meeting those. Just the inverse of how it used to be done, which was, we'll tell you what you have to do so that we get to these results. And that allows for the kind of agility that you need to keep up with things.
And so I think we need not only to have that kind of a new structure, but also to think about, okay, how do we determine what those desirable effects are? And again, I think we've got to look like the pet: technological standards have been the backbone of the technology revolution. I mean, why is it that your cell phone works everywhere? Because there are standards that everybody signed up for, that were cooperatively developed. Why is it that you can go from 1G to 2G to 3G to 4G to 5G, and now 6G is being developed? Because standards are agile and able to keep up with technology and marketplace changes. And so how do we take that kind of concept, use those standards as the behavioral expectations in the marketplace, and expect, and enforce, companies to achieve them? That is not the Administrative Procedure Act, but it is the kind of risk-based, agile structure that I believe we need going forward, and that, again, AI exacerbates the need for.
Kevin Frazier: And so, drawing on your experience having helmed the FCC, having been in the belly of the administrative beast, what specific efforts would you like to see from a federal agency to keep up with the latest developments in AI? Beyond amending the APA, what specific changes would you like administrative agencies to take on to better keep pace in enforcing and developing these new standards?
Tom Wheeler: The challenge that we faced at the FCC is that we were dealing with an act that was written in 1934, when television broadcasting did not exist, and yet we still had to regulate broadcasting writ large. The act was updated in 1996, when the internet was AOL, and the difficulty was that there were assumptions made then that hamstrung decisions that we could make. And so I'm not sure that it is a fair expectation to turn around and say to an existing government agency, you have to become a 21st century agency now. They still have legacy responsibilities.
And the muscle memory of the agency, of those in Congress who oversee it, of the judiciary that has to review its decisions, and of the staff is still rooted in analog assumptions. So I think we need to say, okay, we're going to have a digital agency staffed with digital expertise using digital technology. Imagine how we could harness AI to track what's going on in the marketplace and to identify where actions are needed. And imagine what could happen if suddenly we could act ex ante rather than ex post, which is what antitrust law is and what regulation has traditionally been, by saying, again, here is the deliverable that we believe is in the public interest; establish with us that that's what you're delivering.
Alan Rozenshtein: So let me ask about this new digital agency, and specifically about the idea of having one agency, and we could call it the Department of Digital Affairs or Department of AI or the Department of Technology. Why have one new agency rather than try to embed this expertise in multiple agencies?
And to think about the reasons for that: one might worry, for example, that setting up a new agency, just bureaucratically and given gridlock in Congress, might be much more difficult than having the agencies that currently exist beef up their own capabilities. Another, more fundamental, argument one might make is that AI and digital technology in general, and I think this is true, is such a general purpose technology that it's going to be hard for a single agency to have expertise not just in the underlying technology itself, but in all of its applications across everything the federal government does. In a similar way, we don't, for example, have a department of computers through which every computer-related thing the government wants to do must go, right? You have IT people in every agency. So I'm curious about your thoughts on that trade-off.
Tom Wheeler: Well, Alan, I think the first thing is that this agency staffed by digital experts needs to be a resource for all the other agencies dealing with these things. Second of all, it needs to focus on identified goals; this is not a boiling-the-ocean kind of situation. But my concern, having led an Industrial Era agency, is that while such agencies are populated by incredibly dedicated people, as I said, the congressional oversight is structured in such a way as to think about yesterday rather than tomorrow.
One of the challenges I used to have when I was chairman of the FCC was constantly being called, either privately or publicly, before Congress, concerned about what I was doing about, quote, the internet. And yes, the National Highway Traffic Safety Administration needs to have the ability to deal with the impact of AI on road safety, and I think it has that already. And the Food and Drug Administration needs to have the ability to deal with the effect of AI on implantable devices, foods, drugs, et cetera, which I think it already has. But there are other, broader AI issues that I think require focused attention, and that focused attention can also work in partnership with the other agencies and bring them the skill sets that they're going to need to address their challenges.
Alan Rozenshtein: One additional question: I'm curious, both generally and, honestly, given your experience as head of the FCC, where I'm sure you dealt with these issues about staffing and expertise. This has always been a problem-
Tom Wheeler: Right
Alan Rozenshtein: -for the federal government. I think it's especially an issue with something like AI, where there's not that much expertise-
Tom Wheeler: Right
Alan Rozenshtein: -out there in the world. It all gets dragged to Silicon Valley, where the current salaries, I mean, they're literally millions of dollars that are being paid to these machine learning engineers. The costs of doing this research in terms of compute are enormous. We're talking sometimes tens of billions of dollars, soon could be hundreds of billions of dollars, might be trillions of dollars, to do this kind of research. And in an era where, honestly, compared to 50 years ago, the government just does much less basic foundational research. And so I'm curious how you think about how an agency in charge of this would be able to keep up with the technological challenges.
Tom Wheeler: So the first reality is that it has to try, okay? We can't just sit back here and say, oh, it's just too big of a problem. We've got to make the effort. Secondly, I think you've got to have a specific effort to attract and compensate talent. We were able to sequence the human genome through a government program, right? We ought to be able to figure out how to deal with AI.
Kevin Frazier: So, getting back to the AI roadmap that Senator Schumer and three other senators released earlier in May: the focus there was really on innovation, and in some comments on the roadmap, Senator Schumer stressed that he regarded innovation as the north star of AI governance and AI regulation. Are you concerned that, so far, our conversations around AI governance from some of our most influential senators have been so focused on innovation, and that more general concerns about the public interest, about what AI in the public interest would be, have been left off the agenda?
Tom Wheeler: Let me start by saying hooray for Senator Schumer and the other senators on the AI working group, who were able to come together with a bipartisan product. What an interesting concept in this Congress. So we ought to take our hats off to them. Hooray for them and their recommendation of $32 billion in federal support for AI research. I'm less excited about the fact, as you say, that they tended to focus more on the how rather than the what.
There are really two things, I think, that are at the heart of everybody's concern about AI. One is losing control of the algorithms so that they will do bad things. The other is losing control of humans' use of the algorithms so that they will do bad things. And I think that the Schumer report tended to focus on the former more than the latter. But before I wrote “TechLash,” I wrote a book called “From Gutenberg to Google: The History of Our Future,” which, while we're doing shameless self-promotion, next month will come back out in a revised paperback edition as “From Gutenberg to Google and On to AI.”
And in “From Gutenberg to Google,” I made the observation that it is never the primary technology that is transformational; it's always the secondary impacts. The internet per se was not transformational. The uses to which the internet was put were transformational, including enabling AI. And so the challenge for policymakers, I think, is that we need to move from where the working group was focused, on the technology per se, to the question of what the effects of the technology are. Not that the first one isn't important. We've got to deal with both of them. But now let's be complete in our roadmap.
Alan Rozenshtein: Let me ask you another question on the innovation theme. One might be concerned with AI regulation in the kind of classic way one might be concerned with regulation in that it stifles innovation. One might be concerned in a different way that too much regulation would harm innovation by helping the large incumbents at the expense of the insurgents in a kind of capture way. And I'm very curious, especially since you both ran the FCC, but before then, you had a long career in the communications industry.
So you've seen these dynamics from both ends. When Sam Altman famously went to, I think it was the Senate, and said, yes, I would like to be licensed, that was easy for Sam Altman to say. And a lot of people, myself included, were pretty skeptical about that, because although that would be regulation, it would be regulation that really entrenched OpenAI and the Microsofts and Amazons of the world. So how do you think about this question of capture in a market that, just to say one more thing, because of the very high compute costs, naturally tends toward a kind of oligopolistic concentration?
Tom Wheeler: Yeah. And the question we haven't really discussed here yet is that last point, Alan: this is really a hardware issue. It is the concentration on the hardware side of the equation that is important. But to your question, I always love how the big companies are always arguing, you know, we can do this regulation, but it'll be harmful to the small guys, the same small guys they're spending every hour of every day trying to figure out how to destroy in the marketplace. Okay? So I always love that concern. I think that industrial regulation does have an impact on innovation and investment; it discourages them. You need agile innovation in order to move forward, and if agility is squashed by tight prescriptive regulations, you've got problems. I used to love, when I was chairman, that various CEOs would come in and say, oh, the regulation is so tight, you're stifling our innovation and you're taking away our reason to invest.
And so, three times I tried to structure rules that were effects-based, and they came in and said, oh, this is terrible, this is regulatory uncertainty. Well, if one day you're arguing that regulation is too tight, and the next day you're arguing that regulation needs to be tight and can't be effects-based, really what you're saying is, we don't want any regulation at all. And we've seen the consequences of that. So again, I come back to the concept, which was my a-ha experience that led to this belief, that we need to focus on behavioral effects rather than on micromanagement. And that would encourage innovation by providing a flexibility that isn't possible under micromanaging regulatory structures.
Alan Rozenshtein: So let's talk about some of those effects in substantive areas. You've published work analyzing what we might think of as some neglected areas of AI regulation, at least so far: things like the metaverse, things like the digital divide. I'd love for you to address either or both of those. Why are they not receiving, in your view, adequate attention? And what, concretely, would you want regulators or lawmakers to do to address the problems in those areas?
Tom Wheeler: Remember when, about a year and a half ago, every time you turned on the news or picked up the newspaper, it was all about the metaverse? The metaverse this, the metaverse that. Mark Zuckerberg changed the name of his company to Meta, and then all of a sudden ChatGPT drops and the metaverse gets left behind in the headlines. But the point is that the metaverse is run by AI. It is one of those derivative effects of AI, and so we need to be dealing with that issue.
You talk about the digital divide. Let me just posit a new divide. The web, which has defined our lives for the last, when did Tim Berners-Lee do it? 20 years ago. The web is on a gradual decline because of AI. You know, Gartner had a study that said that within two years, search on Google will be down 25 percent, because it'll be done with AI. It is a fascinating reality that the web, which created the ability to go out there and scrape all this data in order to feed large language models and create AI, is now being eaten by its offspring.
But the challenge there, let's go back to what I was saying before, is the secondary effects. What's that going to mean for publishing? What's that going to mean for the people on Etsy? What's that going to mean for, pick a topic, any topic, as we have had this whole economy build up around the web? What's it going to mean for those who have relied on the web and suddenly find that there is a new structure that has put the web into a managed decline? Again, that's the unknown unknown, and it's one thing for the three of us to sit here and talk about it, but how are we going to prepare for it? And who do we look to to say, okay, what's the public interest balancing in this?
Kevin Frazier: So one known derivative and spillover effect of AI's proliferation has been what we could call an amplification, or at least an easing, of the spread of disinformation, misinformation, malinformation, however you want to define it. I know this has been a topic of concern for you in terms of our information ecosystem and how we make sure folks still have access to verifiable, actionable information, what I call the right to reality, as I penned in Lawfare.
Tom Wheeler: I like that, the right to reality. Excuse me, I've got to take notes here.
Kevin Frazier: There you go, title of your next book. You're welcome.
Alan Rozenshtein: We're always closing and always promoting on the Lawfare Podcast. It's very important.
Kevin Frazier: Always be closing, always be coming up with bad new rights and punny names. With that in mind, I'm curious: what sort of intervention, if you were philosopher king, and I think Alan and I have agreed you already are philosopher king, so using those powers-
Tom Wheeler: Careful!
Kevin Frazier: How would you use them to try to mitigate the effects of AI on the quality of our information ecosystem?
Tom Wheeler: I think we have to go back to basic truths, by design, if you will. I have been a serial entrepreneur as well as chairman, as well as running some industry associations. And what I found when I was a software executive was that the thrill was in getting something to work, and consideration of the effects of that never surfaced. It's, wow, let's see if we can do this. Oh my god, we did it! How do we build into the process the expectation that the effects will be beneficial? How you do that through a government structure is arduous, challenging, particularly in a First Amendment environment.
But let me tell you a story. A couple of weeks ago, I was in London with Dame Melanie Dawes, who is the head of Ofcom, which is, for want of a better description, the FCC of the UK. She had been handed the ungodly assignment by the British Parliament of implementing the Online Safety Act, which calls for dealing with the kinds of problems that you were just talking about, Kevin. And she had a choice between saying, am I going to make specific speech decisions, or am I going to stipulate a process that, if followed by people making their own speech decisions, should be beneficial? And it's a work in progress.
And her first exercise had to do with kids. How do you protect the online safety of kids? How do you have an expectation that, in fact, when Kevin Frazier signs in and says that he's 18, he really is 18 and not 13? How do you establish the expectation that the people who have built the delivery system that delivers harmful information to young girls, shaming them and in many cases leaving them suicidal, have a responsibility for that action? And I haven't got the answer here, Kevin. But I think that the kind of approach where you're saying, here is the expectation of what responsible behavior is, is a huge step forward from the attitude of, hey, look what we can build. Let's go do it. Damn the consequences. Full programming ahead.
Kevin Frazier: I think you said it best when you said these are just serial works in progress and it's going to be step by step. So, thank you very much, Tom, for sharing your thoughts. I think we'll end it there and really appreciate you coming on.
Tom Wheeler: Hey guys, this has been a really stimulating discussion. Thank you very much.
Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare Material Supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.