Lawfare Daily: Matt Perault on the Little Tech Agenda

Published by The Lawfare Institute in Cooperation With Brookings
Matt Perault, Head of AI Policy at Andreessen Horowitz, joins Kevin Frazier, Contributing Editor at Lawfare and Adjunct Professor at Delaware Law, to define the Little Tech Agenda and explore how adoption of the Agenda may shape AI development across the country. The duo also discuss the current AI policy landscape.
We value your feedback! Help us improve by sharing your thoughts at lawfaremedia.org/survey. Your input ensures that we deliver what matters most to you. Thank you for your support—and, as always, for listening!
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute.
Please note that the transcript below was auto-generated and may contain errors.
Transcript
[Intro]
Matt Perault: Doing things like requiring impact assessments, requiring companies to assess the potential use of their products in harmful ways way downstream, way after they've gotten a product out into the world—those are things that require more sizable teams, or significant resources so that a company without a sizable team can outsource it. I think that's really challenging for Little Tech.
Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, contributing editor at Lawfare and an adjunct professor at Delaware Law joined by Matt Perault, head of AI policy at Andreessen Horowitz.
Matt Perault: I think it's certainly a Sputnik moment. I think this is a moment where we see the importance of American competitiveness, and the policy agenda I think has shifted toward that. So we don't need to throw safety overboard. It's not about sacrificing safety, but it is about rooting our policy agenda in how we can create competitive products.
Kevin Frazier: Today we're talking about the Little Tech agenda and its intersection with AI.
[Main podcast]
Let's start with a common refrain: economic security is national security. America is safer when important technology and essential products are produced domestically. That's straight from the Department of Commerce circa 2017.
Economic security itself, though, evades easy definition. What qualifies as a robust economy is often highly dependent on your perspective. One perspective that's caught substantial attention from Silicon Valley to D.C. is that our economy has become weaker due to excess concentration, particularly in the tech sector.
The Little Tech agenda outlines a vision for introducing dynamism into a sector that some argue has become excessively concentrated in the hands of just a few. Matt, thank you so much for coming on and helping us explore this Little Tech agenda, and in particular, its nexus with the burgeoning field of AI.
The Little Tech agenda—folks may have heard a different podcast on it. Folks may have seen it on X, may have seen it appear in random bits and pieces. Can you give us your definition of this agenda, perhaps in an elevator pitch?
Matt Perault: Yeah, so this is the startup ecosystem. So the question is, what's the right policy agenda to allow startups in America to thrive? Our co-founders wrote a blog on this, which you can link to maybe in the show notes if people want to check out how they framed it.
But the focus for us is on ensuring that there's a level playing field. So this isn't about giving special favors to Little Tech companies or handouts to Little Tech companies. This is about ensuring that if you are a small company, you're able to compete aggressively. My focus is on the policy components of that, so ensuring that there's a regulatory agenda that works for Little Tech.
And from our standpoint, one of the fundamental components of it is understanding that when compliance costs are imposed on development of technology, those costs aren't borne in the same way by larger companies and smaller companies.
So I started working on technology policy at Meta. I was there for a little more than eight years and had a very positive experience at the company. The size of the company, though, is dramatically different from the kinds of companies that I'm working with now at Andreessen Horowitz. So at Meta, when I joined, which was earlier in the life of the policy team, it was still, I think, a dozen people or a couple of dozen people, and then it grew to several hundred.
The companies that we work with sometimes might not have any public policy team. They might not have a general counsel; if they do have a general counsel or head of policy, typically it's a very small legal team and a very small policy team, sometimes meaning a one-person policy team.
And so for companies like that, when there are complex regulations that they need to figure out how to manage and deal with, that is challenging, and that takes cost and energy away from the core things that startups are focused on, which is usually figuring out how to build a product and then get that product to market and compete, often against better situated companies with deeper pockets.
And so in a lot of areas of tech, as you sort of alluded to in your opening, larger companies are more established. They have more resources to put into building products and navigating legal complexities, and that puts startups at a disadvantage. And so from our standpoint—again, it's not about special handouts or favors for Little Tech companies, but it's about ensuring that there's a regulatory landscape that enables them to compete.
Kevin Frazier: And something that I point out to my students frequently is that when you look at some of these state laws in particular that are imposing pretty substantial regulatory expectations on different companies, it's not always the case that there's a very high threshold for a company that suddenly may have to comply with these regulations.
So for example, the My Health My Data Act in Washington state imposes some serious regulatory obligations on any company doing business with a resident of Washington, whereas the CCPA in California does have some pretty high monetary thresholds and thresholds for interactions with a certain number of Californians.
But even then you can see a sort of perverse incentive—if you're a startup, suddenly you don't want to grow too quickly or too fast and suddenly hit that threshold and find yourself in regulatory hell, for lack of a better phrase.
Matt Perault: Yeah, that's exactly right. Coming up with the thresholds can be really complicated, particularly in the area of the tech ecosystem that I'm now focused on, which is artificial intelligence design and development, because for AI companies the barriers to entry are really high: high compute costs, and it's really challenging to access talent.
And so some of those companies might get significant investments right out of the gate, in part because that's what you need to build AI tools. And so, you know, with things like the potential market cap of a company or the size of investments they've received, there are small companies that can look larger than their footprint when you're looking at them through certain metrics.
And I think the macro point that you're getting at, which is that the potential compliance burden as a result of state laws is significant, is really something that we're very focused on right now, because it's not just the particular compliance burden of dealing with a law in a state like California or a state like Texas—it's navigating that compliance burden across 50 states. And again, for a startup that has a small policy team and a small legal team, it's that patchwork that can be particularly challenging.
I want to be really clear: our point of view is not that states should sit on the sidelines and do nothing to regulate, in various different areas of tech policy, including AI policy, the conduct within their borders. That is the traditional role of states, to police their own jurisdictions. And there are robust areas of state law that do that across lots of different domains. There's obviously state criminal law, there's state consumer protection law, there's state civil rights law, there's state antitrust law.
Our point of view is that those areas are important and should be enforced, and that it might be that there is room for new lawmaking in those areas, but it should be in accordance with the way that states have traditionally gone about policymaking, which is not burdening interstate commerce.
And what we're seeing in some of the laws, some of the AI laws that states have developed, is that they would make it harder to offer a product that traverses state lines, and in lots of areas of technology, that's really important. Like if you and I are messaging with each other, we don't want that experience to be different if you're sitting in Florida and I'm sitting in North Carolina, or if you move from Florida to another state, if you were to move to Virginia, for instance, we don't want that experience to all of a sudden be different. You want that to be a seamless experience, and that's what you expect as a user of technology.
So in terms of creating a national market, and creating the products and market that you would expect for products that are traversing state lines, that are really engaged in interstate commerce, that is traditionally the domain of the federal government. And so we think there's an important role for each to play, and if they—the federal government and state governments—do that effectively, that from our standpoint will be good for Little Tech.
Kevin Frazier: I appreciate you kind of making clear that this isn't necessarily a laissez-faire embrace of economics or just let the market do whatever it wants and consumers be damned, we'll figure it out eventually, but it really is just emphasizing that sort of growth mentality and leveling the playing field, as you said earlier.
But with that in mind, thinking about some of the folks who have embraced Little Tech, I'm guessing some listeners are already thinking, okay, is this a blue idea or is this a red idea? We have folks like Garry Tan, the CEO of Y Combinator, located in San Francisco, who may not be the most left person in San Francisco, but is able to walk around and swim in those political waters. Then we have Marc Andreessen, who came out in support of Trump. Folks wouldn't say, oh, he's definitely an AOC liberal or anything like that.
So is this a political ideology or political agenda? How do you all frame this in the larger political ecosystem?
Matt Perault: It's not at all. So—we say this explicitly, or our co-founders say this explicitly, in the Little Tech agenda blog that I alluded to earlier—we'll work with anyone on either side of the aisle who supports Little Tech. That's really the focus for us.
And there are Democrats who I think fit in that category, and there are Republicans who fit in that category. From our standpoint, the focus is less on blue, red, left, right, and more on what's an agenda that works for Little Tech companies.
And the focus for us, just to be clear, is not exclusively on growth and economic value. I think those are really important components of it, of course. And I think after the release of DeepSeek, there has been a strong shift, and I think an appropriate shift, toward thinking about American competitiveness.
The policy conversation, not monolithically, but many parts of it, has shifted toward how we can build tech products that compete globally and effectively, compete against products that are developed in Europe and compete against products that are developed in China. I think policymakers are seized with that idea, and that means not having regulation in place that burdens innovation and burdens development without significant benefits, where the costs of the regulation really outweigh the benefits.
And our view is that the focus of regulation should not be on model development, which is essentially just a tax on math. It's just focused on taxing the process of building AI models, which is just algebra: complicated algebra, algebra at scale, but it's just algebra. And putting regulatory burdens on that process really just slows it down; that's what it does. It makes it harder to develop those tools, makes it particularly harder for startups, benefits incumbents, and so you get less development.
It's possible that you get more safety, and it's possible you get more consumer protection because if you don't create anything, then you wouldn't create anything that has some potential to harm, but you also don't get all the benefits, and it's not necessarily clear that just slowing down development does anything on the consumer protection or safety side, right?
So regulating model development is just regulating model development. It's not consumer protection. It's not strengthening how civil rights law is applied in cases where AI is used to harm people, to deprive people of their civil rights. And so our focus is that if you want to protect consumers—if you're motivated by that—you should focus on regulating harmful use. So focus on when AI is used in problematic ways to harm people.
Our view is that existing law covers many of those cases. So there's no AI exception in existing law. Section 5 of the FTC Act, which focuses on unfair and deceptive trade practices, no AI exception to that. That applies in a world of AI. Similar for federal civil rights law, similar for state civil rights law, similar for antitrust law; there aren't AI exceptions.
And so at least as a starting point, figuring out how to enforce those areas of law robustly, we think, is a better way to protect consumers than regulating model development, slowing innovation, and hoping that in some way, maybe through that bank shot, consumers are protected.
Kevin Frazier: To help give listeners a better sense of what policies align with the Little Tech agenda, I think it'd be helpful to run through a couple different case studies just to understand what the policy lane looks like for realizing this agenda.
So we're recording in mid-February; the Paris AI Summit just happened, and Vice President Vance took Europe to task for its regulatory agenda. So just to call a spade a spade, would we say the EU AI Act, something that really is focused on governing frontier models—can we affirmatively state that's most definitely not in the sort of Little Tech agenda paradigm?
Matt Perault: If you're focused on regulating models, rather than use, then we think you're making it harder for Little Tech to build models. I think that's hard to contest.
I mean, tell me if you think that's wrong, but you're making it more difficult for anyone to build models. And some people would say that's a positive thing. But if you're doing things like requiring impact assessments, and you're requiring companies to assess the potential use of their products in harmful ways way downstream—way after they've gotten a product out into the world—and you're trying to figure out how to classify various different components of your model based on how it might be used in certain situations, those are things that require, I think, more sizable teams or significant resources so that a company without a sizable team can outsource it. I think that's really challenging for Little Tech.
And again, from our standpoint, it's not about like one option is safety, and the other option is economic growth, or one option is financial gain, the other option is consumer protection. In our view, when you're regulating model development, you're doing a harmful thing on the innovation side without really doing much on the consumer protection side.
And I think there are options for moving forward in a consumer protection direction that are probably better for that set of outcomes, for making technologies work better for consumers, without all the costs of burdening model development.
Kevin Frazier: I think what's really important to focus on as well, as you're pointing out, Matt, is really diving into what it looks like for a company to actually comply with these regulations and doing a close analysis of those costs.
So if you are an AI lab and you have to go find a quote-unquote independent third-party auditor, well, there's not that many independent parties who have the level of sophistication and expertise and availability to just show up whenever you need them to conduct these various tests—that's a cost. Working with them is a cost. The time delay is obviously a huge cost.
And so encouraging legislators and folks in this space to think about the actual operational burdens of any of these regulations when they get compounded is huge, especially if you're in a place where we have 50 different states with a patchwork of, oh, this group qualifies as an independent auditor here, but it doesn't in New York, though it could work in Washington. And all of a sudden it just gets wild.
And with that in mind, I'd love to get your sense of the state regulatory landscape with respect to AI and whether there are any states you would point out as perhaps being models to follow from embracing the Little Tech agenda and maybe some states who appear to have not gotten the message.
Matt Perault: One that I think is worth looking at is Utah, which passed a framework that is a little bit more oriented around inventory: starting from an inventory of existing law, figuring out how that could be applied to AI and potentially where the gaps are, and then developing policy recommendations for where those gaps are and how you might fill them.
And then Utah has also been a leader in regulatory sandbox models, which I've been interested in for a long time and wrote a bunch about as an academic, and they are creating one for AI products. And I think that's a productive step; I don't think that's the only possible mechanism for governing AI, just to be clear, but I think that is one helpful approach.
The frameworks that, in our view, are more challenging are the ones that are closer to the EU AI Act, that are focused very squarely on model development, which again, I think, clearly is the kind of thing that could result in states dictating what a national market in AI looks like. When you're regulating AI in that way, it just seems very clear on its surface that you're not just regulating how an AI product will function within your borders.
And that model is, again, a fairly burdensome one for Little Tech, because it requires, at the threshold level, certain legal assessments about where your products fit in, which I think just on its own is challenging for the kinds of companies that we see in our portfolio.
And then, if you have determined that you are offering a product that's closer to a high-risk model, there are significant compliance things that you need to do, like an impact assessment or audits. And your listenership probably has a very high, a very disproportionate percentage of people who have actually run audits at some point in their careers.
I wouldn't, I guess it would be unfair to say that I ran an audit when I was at Facebook, but I was part of the team that worked on our Global Network Initiative assessment. The Global Network Initiative is a multi-stakeholder organization that focuses on best practices around business and human rights issues and is a very productive forum for dialogue and self-assessment and inquiry and learning more about how to integrate human rights into the company's work.
And so that framework I think is a very good one. It also is not cheap. The audit process is financially expensive. And even if you just set that to one side, it requires time, not just on the auditor's side, but from someone who's internal in the organization to help the auditor understand the organization. You have to set up meetings for them with the right people and give them access to the right materials and walk them through a company's procedures. It takes a lot of time in terms of the actual duration of the audit. It takes a lot of time in terms of the company's internal processes.
And my guess, without knowing more about the specifics of how it's implemented now, is that for most companies there probably is someone who spends a sizable percentage of their job just managing the audit. It might even be that there are multiple people who spend some meaningful percentage of their job managing the audit, and so those kinds of procedures I think are really challenging for Little Tech.
Kevin Frazier: Yeah, and just to really echo something you said there about Utah being a leader in an innovative regulatory approach—for folks who haven't read up on the Utah Office of AI Policy, I've spoken with Zach Boyd there, the director.
The work that Utah is doing deserves a ton of attention from stakeholders. Their regulatory sandbox, as you referred to it—I think their formal program term is regulatory mitigation. And so they actively work one-on-one with companies who want to come to Utah and develop products specific for Utah residents and ask, what steps are you going to take to be aware of consumers' needs and potential privacy concerns and related interests?
And here's the system we're going to set up so that if you go beyond those bounds, those expectations, we're going to give you a cure period, for example, a chance to change your practices, but carry on without some sort of fine that's going to knock you out and take you out of the picture.
So really cultivating a chance for companies to prove themselves without fear of onerous or unexpected fines seems like a really promising model that we'll see how it develops in Utah. It's still very early stages.
Matt Perault: Yeah, I agree. I've been very interested in the sandbox model for a long time. When I was at UNC Chapel Hill, Scott Babwah Brennen and I—and Scott is now the director of the Center on Technology Policy at NYU, where I'm a fellow—wrote a bunch about this.
One of the things that sandboxes have been criticized for is the idea, I don't think it's always the case, but the idea that they are just deregulatory in nature. And so sandboxes that are structured as you describe, where you test a product and the only lever is regulatory mitigation, regulatory forbearance—that is obviously a deregulatory framing.
And I think it makes lots of sense, and there's a lot that's desirable about that in areas that sandboxes have been traditionally used in, like fintech, and then also in AI, where I think you really want to see a lot of product experimentation.
But it's not the only way to have an experimental approach to emerging areas of tech policy. So one of the ideas that Scott and I proposed was a model similar to a sandbox (we had a different name for it, and you could call it lots of different things) where you're trialing a new policy regime at the same time as you're trialing a product.
And if you set that up in a way where there's transparency and you're actually generating evidence in the trial, then you'll learn something about the products, and you'll also learn something about the policy regime. And I think there could be models like that for states that maybe have more of a pro-regulatory approach.
And the advantage of that, I think, is that we have informed views about what the impact on Little Tech will be from various different regulatory models, but it's hard to know for certain; there's a lot of uncertainty. And I think the experimentation and trial approach means that, if you set it up well, you'll learn about those impacts.
Any experiment that you conduct could have a sort of reflexive auditing, introspection component to it, where you're actually producing data in the experiment. And then at some period of time you look holistically at that data and you do a report on it, or you provide some feedback to the lawmakers on the progress to date.
And so you learn about the policy regime that you've tried to establish, and that kind of a model I think really works in AI, where we really don't want to be in a situation where we put a bad law on the books that's just on the books for 40 years. And again, I think the stakes of that have always been high; I think after the release of DeepSeek, there's an appreciation by lawmakers of exactly how high those stakes are. And in that universe, having different tests of different approaches, I think, makes tons of sense.
Kevin Frazier: So thinking about a sort of affirmative Little Tech agenda—more so than just saying, please don't adopt the EU AI Act in your state or at the federal level. What are some of the things you're hearing from AI startups that they're really hungry for? What sort of assistance or regulatory framework might actually make it easier for them to compete that they're not seeing on the ground right now?
Matt Perault: So for Little Tech, I do think the focus primarily is that they want to build their products and they don't want new complicated regulatory regimes to make it harder for them to do that.
I've never talked to a company that says we want to be able to violate existing law. So they understand, obviously, you can't engage in criminal activity, you can't engage in unfair and deceptive trade practices, you can't do things that are contrary to existing law. But a development-oriented compliance regime that all of a sudden they have to manage, just as they're trying to build a product, is really challenging.
I think the thing to remember about venture capital is like the batting average is low. Most companies don't get to market successfully. And so people, people see the successes; you don't see the companies that weren't able to build a successful product. And that is a reflection of the competition really being fierce and hard.
And again, you know, larger companies typically are advantaged in lots of ways, particularly in an area like AI, where the barriers to entry, I think, are more significant than they are in certain other areas of tech development, just because of the compute needs and the data needs and that type of thing. So I do think the emphasis is on, we just want to be able to compete.
There are certainly lots of things you could think of as an affirmative regulatory agenda for Little Tech. I mean, one is: what are the things that we need to have this technology function safely in the world and be good for consumers?
And I think if most people experience it as harmful or scary or unsafe, that won't be good for the future of AI. And so that's why, in what we've spoken about and written about, we've put a focus on ensuring that there is regulation of harmful use, right? So our position is not that there should be no regulation. It's that there should be regulation focused on harmful use. And so I think that's an important component.
The government has also talked about this; there have been different ideas batted around about ways to lower barriers to entry, and that might be through things like the government providing access to data sets that startups and academics and researchers can train off of, or giving better access to compute resources, which help to mitigate some of the high costs of development.
And so I think there are things like that that are certainly valuable to experiment with. Obviously there's a lot of discussion around energy, reducing energy costs, particularly for Little Tech, and increasing energy access—those are things that are positive on the affirmative side.
Kevin Frazier: Yeah. And I like that point because—to get up on my own soapbox for a second—I think that this AI literacy component, both from a regulator's point of view as well as from a public point of view, is not being discussed enough.
And to your point, the consumer base isn't going to be interested in new AI products if they're only primed to hear that AI is dangerous, it's going to steal your information, it's going to take over your life, or what have you.
So I think that AI literacy component is really fascinating, and I just want to celebrate the fact that the state of Oklahoma is working with Google right now to make it possible for 10,000 residents to receive some training on AI—very fascinating to see how that goes. I also want to celebrate the California State University system becoming the largest university system to have an enterprise-wide ChatGPT account for all of its students, faculty, and staff—a really interesting model to follow.
But also emphasizing the need for regulators to be aware of the actual risks of these different products: for folks who haven't seen it already, perhaps the first major UDAP settlement was reached by AG Paxton in Texas against a company called Pieces, where there was this dispute about the rate of hallucination for this Pieces tool that was for medical uses.
So you can imagine that if you're underselling or understating the rate of hallucination for a tool that's going to be used in medical settings, that's a huge issue. But that's contingent upon the state AG's office having the resources, having the staff, and being aware of these things.
So it's fun to see how this entire ecosystem is contingent on kind of raising the bar of everyone's awareness of these issues.
Matt Perault: Yeah, that's, that seems exactly right to me.
The last point that you made, about having the resources, I think is one that can easily get glossed over. And again, I don't want to prosecutor-splain to your audience; you have a large listenership that knows these issues much, much better than I do, but prosecutors don't just get perfect cases handed to them.
You know, you have to do work to figure out what's the violation that occurred. How was a person harmed? You need to have some understanding of the nature of the injury. You need to have some understanding of the mechanism of how a person was injured. And those things aren't just obvious, intuitive, handed to you.
You need, A, to have someone who has the time to investigate. And then you need to know what you're looking for, or what you might be looking for. You need to understand how the technology might be used in harmful ways.
So I think there's this whole set of things that people think of as sort of unsexy, do-nothing policy solutions. And I think that digital literacy, worker retraining, resource and capacity building, technical capabilities, coordination between enforcement agencies—all that stuff, there are a lot of people who would bucket that as do nothing; they hear that as do nothing.
From my standpoint, each of those things has a robust version of it, and I think the piece on enforcer resources, which is not just financial, it's also technical understanding, technical ability—it's pretty important. So, you know, our focus is to use existing law to regulate harmful use and to enforce against cases where AI is used to harm consumers. And that will take work. That is not a do-nothing agenda; understanding how to use existing law to prosecute AI violations takes work.
And my hope is that those policy approaches that understand it as work and understand it as needing funding and support and training—that those will be viewed less as do-nothing approaches and more as a very key component of a pro-consumer, pro-innovation agenda.
Kevin Frazier: And before we close with some of the things you're particularly paying attention to for the future of the Little Tech agenda, we've talked a bit about DeepSeek.
If we could focus there for just a second: DeepSeek announced this impressive training run at, regardless of what the actual number was, far lower cost than anyone was expecting.
To what extent do you see this as a sort of Sputnik moment, and do you think it should, or will, lead to a greater embrace of open source here in the States? Or is that just something that is kind of off the table, given that we've invested so much in kind of a handful of companies?
Matt Perault: Yeah, I think it's certainly a Sputnik moment. I think this is a moment where we see the importance of American competitiveness, and the policy agenda, I think, has shifted toward that. So we don't need to throw safety overboard. It's not about sacrificing safety, but it is about rooting our policy agenda in how we can create competitive products.
And I think that means, again, to go back to the thing I've said several times: not burdening model development, focusing on regulating harmful use, not on model development. Because if we want to compete with DeepSeek, if we want to compete with China, we can't do that by creating significant headwinds to model development.
I also think it seems clear that a lot of the innovation will happen in open source, and so kind of regardless of America's position on it now, China is going to build it, and developers all over the world will have access to Chinese tools. And so I think that means that if you want American products to be competitive globally, and you want the backbone of many AI products to be an American backbone, then open source technology development has to be permitted.
I do think there's generally pretty good news on that front—not conclusively good news, but directionally good news—which is that maybe two years ago, or even a year ago, I think there was more conversation around restricting the development of open-source AI. And I think now there seems to be more of a recognition of the importance of open source, certainly for innovation and competitiveness.
And I think people also understand that even though sometimes, intuitively, open source doesn't seem safe, actually the history of software development is that open-source development is safer: your vulnerabilities are clear, and more people can identify the vulnerabilities.
And so I think now, post-DeepSeek, my hope is that that becomes even more evident. And again, you know, we had the NTIA report on open-source tools in the last administration, which said, you know, this is an area to look at closely and continue to monitor, but given the benefits of open source for competitiveness, it shouldn't be prohibited at this stage. And I think that's really important now and into the future for Little Tech.
Kevin Frazier: Yeah, it was a very fascinating response, too, to see across the aisle folks from former FTC Chair Khan to Senator Hawley all banging the pots together, saying we need to really lean into innovation if we're going to respond to this DeepSeek moment.
And so we'll see how this all evolves in the next couple of months, because as things stand right now in February, we're about, I don't know, 150 days away from when the AI action plan is due pursuant to the Trump executive order on AI.
So from your perspective as head of AI policy, what are the things you're tracking right now? What's, what's taking up most of your time? What legislation or just developments are you really keeping an eye on?
Matt Perault: Yeah, so it's in line with the discussion that we've had in this podcast. I mean, one is continuing to push this shift of moving the focus away from regulating AI development and toward regulating AI use.
And then second is figuring out an agenda for state policy development that is not going to cause undue burden to interstate commerce. What are the things that states can do that are really principally focused on regulating conduct within their jurisdictions and not regulating the development of products across state lines?
And I think that sort of feeds into the third thing, because if states are engaged in that, American companies will be less competitive. They will be less competitive if there's a version of ChatGPT in Texas that's different from the version in California, that's different from the version in North Carolina. And if there's a different ChatGPT version in all those states, the likelihood that there are new upstart competitor versions of that product is going to be significantly attenuated, I think.
So shifting away from that state by state patchwork, at least for the things that are really related to a national AI market, I think is going to be important for American competitiveness.
And then I think the question is, in addition to that—in addition to a focus on federal and state governments doing what is in each of their traditional domains—what are the other elements of that competitiveness agenda? How do we make American technology development as competitive globally as it possibly can be?
Kevin Frazier: One final question to hit on, which is a bit of a toughie admittedly. So here's, here's my cold call question for you, Matt, thinking about the Little Tech agenda from a competition law standpoint, we have on the one hand, the reality that a lot of startups are hoping to one day be acquired by a company. That's a great exit, right? You're going to do well for yourself and for your other founders. But on the other hand, as we've talked about, there've long been concerns about outsized concentration in the tech sector.
So what's the Little Tech approach to this competition question? Do we want to make mergers easier? Acquisitions easier? How do we balance that tension between the incentive to get into the startup game, knowing that you have a good exit opportunity, versus avoiding this concentration dilemma?
Matt Perault: Even you just saying the word cold call made my heart flutter, even though I'm like now several years after my law school experience.
Kevin Frazier: It's my greatest superpower. I can put the fear of God into folks anytime I say the words cold call.
Matt Perault: It is.
So I don't see a conflict there. I mean, I think the view that I have of that is that pro-competitive mergers should be allowed to be consummated and anti-competitive mergers shouldn't. And same on the Section 2 side, you know, pro-competitive activity shouldn't be an issue under antitrust law.
And we have robust antitrust law and increasing antitrust enforcement to ensure that companies aren't permitted to engage in anti-competitive conduct that harms consumers.
And if each side of antitrust law is enforced robustly—meaning pro-competitive mergers are fine, anti-competitive mergers are problematic and are challenged, and then on the Section 2 side, pro-competitive conduct is not enforced against and anti-competitive conduct is seen and enforced against by enforcement agencies—then that seems to me to be a good outcome for Little Tech.
It is actually interesting—we talked a few minutes ago about enforcement resources for agencies, and a few years ago, there was a robust set of antitrust reforms that were introduced in the House and Senate. I think there were six bills. One of them was around resources for the FTC and the DOJ, the two primary antitrust enforcement agencies. That was, of the six, the only bill that passed.
So the bills that were aiming for kind of more, more extensive reform and changes to existing antitrust law weren't able to get enough political consensus to pass, but the idea that we should at least have resources in place to enforce existing law got enough consensus to move through the legislative process and to become law.
I think there's something there that seems appealing, you know, which is that it's not clear necessarily that we need a radical revamp of existing law, but we don't want existing law to be under-enforced. We want enforcement agencies to have the tools they need.
And I think that's true, you know, across the board in legacy industries, but again, as I was emphasizing earlier, I think it's particularly important in AI, because there are going to be new ways, I think, that AI is used to engage in anti-competitive activity, and we want enforcers to understand what those are. We don't want them to mistake pro-competitive conduct for anti-competitive conduct. And we want them to be able to identify when there is a violation of law, and that just is not something that automatically occurs.
It's not a given that we will end up in that world. There actually needs to be energy. We need to put energy into ensuring that we get to that world.
Kevin Frazier: Don't get me started on the need to bring back the Office of Technology Policy. We'll leave that for another conversation. You survived your cold call. Thank you so much for coming on, Matt. We'll have to leave it there.
Matt Perault: Thanks a lot, Kevin.
Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.
Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to Jan. 6. Check out our written work at lawfaremedia.org.
The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.