

Lawfare Daily: Alissa Starzak on Keeping the Internet Running in the Age of AI

Kevin Frazier, Alissa Starzak, Jen Patja
Wednesday, July 24, 2024, 8:00 AM
What threats does AI pose to the integrity of the Internet?

Published by The Lawfare Institute in Cooperation With Brookings

Alissa Starzak, head of public policy at Cloudflare, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, to discuss the promises and perils of AI in the cybersecurity context. Frazier, who interned with Cloudflare while in law school, and Starzak cover the novel threats posed by AI to the integrity of the Internet. The two also discuss privacy laws, AI governance, and recent Supreme Court decisions.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Introduction]

Alissa Starzak: There's a whole world of technical cooperation that has to happen for things like the internet to work, and that's often where I think that set of rules and norms plays. So it's not the public policy set of rules, actually. It's more the international cooperation around technical protocols and standards that I think is actually the most appropriate for that set of issues.

Kevin Frazier: It's the Lawfare Podcast. I'm Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, with Alissa Starzak, head of public policy at Cloudflare.

Alissa Starzak: People have different motivations putting out legislation. I think the real question is, does the legislation do something positive in the end? Does it actually address a set of concerns that everybody wants to address?

Kevin Frazier: Today we're talking about the promises and perils of AI in the cybersecurity context.

[Main Podcast]

The pros and cons of AI are easily observable in cybersecurity. On one hand, AI can bolster efforts to identify and respond to cyberattacks. Cloudflare, for instance, relies on machine learning to identify AI bots. On the other hand, AI may give bad actors new capacities to wreak havoc. In Cloudflare's recent State of Application Security 2024 report, the company concluded that bad bots left unchecked can cause massive disruption.

One third of all internet traffic stems from bots, the majority of which --- 93 percent --- are unverified and potentially malicious. Well, Alissa, you're no stranger to the dual-use aspects of novel technologies. You spent time as the general counsel of the Army, and now you're the head of public policy at Cloudflare.

And I'm keen to start off with just your high-level observation: are we freaking out too much about AI, or not enough, or somewhere in the middle? Where are you on this whole AI question?

Alissa Starzak: It's an interesting question, Kevin. I think one of the things that's happened over the course of the past year or maybe two years now is that everyone has focused on AI.

And for good reason. I think one of the things that happened is that it sprang onto the scene in really incredible ways, where all of a sudden people could see it touching their lives. And I think realistically, that had an effect, and we shouldn't be surprised by that. I think the question of how it impacts cybersecurity is a slightly different one.

It's not new; I think that's the big piece. I think people think about AI and they think about ChatGPT. That's not AI or machine learning for cybersecurity. They're very different things. And the reality is that we have to disaggregate the different issues that we're facing.

Short answer to your question, I guess: in many ways, AI will be revolutionary. But should we be terrified that something is going to take over? No, I don't think that's going to be the immediate action.

Kevin Frazier: Okay, so I can sleep at night. That's good to know. But let's go to another kind of baseline-setting question. Back when I interned at Cloudflare in law school, I remember that people commonly referred to it as the internet's plumbers.

I don't think that's the official motto, and it's certainly an evocative phrase, but it doesn't fully get across all that Cloudflare does to keep the internet up and running. So in your own words, what's your elevator pitch? Or I guess, as the head of public policy, what's your Senate subway speech on what Cloudflare does to help us all address these new cybersecurity threats or ongoing cybersecurity threats?

Alissa Starzak: So, I think the important thing is that people don't really understand how the internet works, and Cloudflare is foundational to how the internet works in many ways. So, what we are is a big global network. We have equipment around the world and we use that network to look for cyber threats, but also just to make internet traffic more efficient. We operate in more than 300 cities in more than 120 countries around the world from an equipment standpoint. From a practical standpoint, what that means is that we can be close to users, so we can look for threats that are close to users.

We can look for threats as they come in, very close to where they come in, and we can also, again, just make traffic more efficient. So, we reduce the overall bandwidth of things like bots. We can reduce the total amount of traffic that has to go over submarine cables or other infrastructure systems.

So that's the plumber idea, right? We can reduce the overall bad traffic, just make sure that the good traffic gets through.

Kevin Frazier: Right, well I'm all for good traffic, especially here in Miami where we have far too much bad traffic, but that's another bad joke and a whole other story.

But to get back on track, I think a lot of listeners probably assume that these bots are a novel threat, like I initially shared, but they've been around for a while. Are you seeing any change though with the advances in AI? Are we seeing a higher magnitude of bots? Are they becoming more pernicious and effective or has the status quo kind of just stayed the same?

Alissa Starzak: So I think it's worth understanding what a bot is. A bot is just traffic that is not human; it's automated traffic. And sometimes that's used for good things. How does Google work? Again, there are things we don't think about. Google sends out a crawler, which is essentially a bot. It announces what it's doing and it looks for things on the web. That's a bot. But usually that's what people would think of as a good bot, right? You want your thing to be crawled by Google. It means that it comes up in search. That's a positive.

But there are lots of reasons you might send out crawlers for all sorts of other, bad things. You might use automated traffic, for example, to try to get into a login: you just bombard a site with traffic that is essentially trying to log in. That could also be a bot. So, it's a huge range of different kinds of activity, but the important thing is that a bot is just traffic that is not human.

So, everybody has gotten the CAPTCHA: are you human? Verify you're not a robot. That's essentially the kind of thing we're looking for. We're trying to find the things that are malicious: not only are they not human, but they're trying to do something that you don't want them to do.

One of the interesting things about the AI world, and you had asked whether AI has contributed to bots, is that there are two pieces of that. Again, if you think about what you're using a bot for, it may be that you're using a crawler just to identify things for AI systems. So, AI systems actually will send out crawlers.

Those are bots. If you're a creator and you don't want your content to actually go into AI systems, it may be that you want to block that bot. So having the ability to understand what the bot is, and to understand what you want to allow and what you want to prohibit, is actually an important component.
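To make that allow-or-prohibit idea concrete, here is a minimal sketch of sorting requests by the crawler they claim to be. This is not Cloudflare's bot-management logic, which relies on many more signals than the self-reported user-agent header; the handler shape here is an illustrative assumption, though Googlebot and GPTBot are real published crawler tokens.

```typescript
// Toy classifier: sort incoming requests by the crawler they claim to be.
// Real bot management uses many signals (IP verification, ML scoring,
// behavior), not just the self-reported User-Agent header shown here.

// Crawlers the site operator has chosen to welcome (e.g., search indexing).
const ALLOWED_CRAWLERS = ["Googlebot", "Bingbot"];
// Crawlers the operator has chosen to turn away (e.g., AI training scrapers).
const BLOCKED_CRAWLERS = ["GPTBot", "CCBot"];

function classifyRequest(userAgent: string): "allow" | "block" | "inspect" {
  if (BLOCKED_CRAWLERS.some((bot) => userAgent.includes(bot))) {
    return "block"; // declared AI scraper: refused per the operator's policy
  }
  if (ALLOWED_CRAWLERS.some((bot) => userAgent.includes(bot))) {
    return "allow"; // declared search crawler: let it index the site
  }
  return "inspect"; // everything else: pass along to further bot scoring
}

// Example: a self-identified OpenAI crawler is turned away.
console.log(classifyRequest("Mozilla/5.0 (compatible; GPTBot/1.0)")); // "block"
```

The point of the sketch is the policy split Starzak describes: the same mechanism lets an operator welcome one bot and refuse another, rather than treating all automated traffic alike.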

Kevin Frazier: Well, this is obviously top of mind for Cloudflare, given that you all just recently --- we're talking in mid-July --- provided customers with a way to stop AI bots from scraping content and using data without permission. I think a lot of folks would be excited by your stated purpose: to help preserve a safe internet for content creators, really emphasizing that whatever content you create should be your own for as long as possible. What's been the initial response so far from content creators, from AI labs, and potentially from policymakers? Are you getting rounds of applause, or are folks targeting those bot armies right back at you now?

Alissa Starzak: I think people are cautiously optimistic about those kinds of ideas. And I say cautious because I don't think we know what the impact will be yet. But I think that the idea that a content creator can decide how their creation is governed, whether it can be crawled, whether it can be put in Google, right? Can they dictate the terms of when somebody takes it and puts it into an AI system or not?

I think that feels intuitive to a lot of people. And it's one of those issues where it seems like it should have happened all along. So, I think the response that we've gotten has been largely positive. But again, people are waiting and seeing how it all works in practice.

But again, it's not a given that every creator wants that. Some creators may actually want their content to go into AI systems. It just depends on the individual creator. So the key for us is to give tools that enable the creator, or the person who owns a website, to make those decisions.

Kevin Frazier: So would you all, wearing your public policy hat, support some sort of legislative mandate to give this option to all content creators who perhaps aren't Cloudflare customers? Would you want to see this become a norm in law, perhaps, to make sure that content creators have this right or this opportunity?

Alissa Starzak: So I think that gets into the question of what public policy can do and what is better done elsewhere. I think the reality is that in technology, public policy, or legislation, often isn't the right answer. Technology moves really quickly and changes really fast, and it's hard to figure out exactly how you write something that anticipates those changes.

I think that is true with something like AI as well. And I think the reality is that what you want is a system that enables people to implement controls, where that is broadly accepted, and where the entities that don't follow that set of rules are punished, essentially.

And I don't mean punished in the public policy way or the criminal way. I mean that you actually can get blocked for not following the rules. There's a whole world of technical cooperation that has to happen for things like the internet to work, and that's often where I think that set of rules and norms plays.

So it's not the public policy set of rules, actually. It's more the international cooperation around technical protocols and standards that I think is actually the most appropriate for that set of issues.

Kevin Frazier: Fascinating. And so we've had quite a few conversations on the podcast about the values of open-source models versus closed models, and the larger idea that things being open source means we can have this sort of democratization and spreading of opportunity and new technology.

How would you respond to the pushback that letting just a couple of content creators deny these bots the ability to scrape data deprives the rest of us of better models, better training data, that perhaps would unleash the next generation of even better AI models? Do you see that as a potential counterargument that you've been hit with?

And when you have been, what's been your strongest response to those critiques?

Alissa Starzak: Look, I think with a lot of these issues, as with many things, it's a balance. We have to figure out how much weight to give a person's rights and how much to give things that lead to innovation. And you want to draw the line somewhere in between.

I think we've had interesting developments over the past 10 years or so, where certainly the line has changed in areas like privacy. We are now in a very different world of understanding that there are regulatory controls on your own personal data. So you could have made that same argument; in fact, I imagine companies did. Not that it was me, but I imagine companies made that same set of arguments about access to any data on the web that might be personal: why shouldn't you have access to it? It might lead to innovation. You can do all sorts of interesting things.

So I think there's a balance between those two. I think one of the big pieces is to try to figure out where the line is between things that could harm people, or could deprive them of things that they should have a right to, and things that will lead to innovation. So think about AI in the context of medical discovery.

On the one hand, a lot of the information that might help power an AI system in that world is actually very personal health information. On the other hand, if you aggregate all of that information, you might be able to make incredible medical discoveries. And so we're seeing something similar, I think, in many ways.

In AI, there's got to be a balance between those two. We have to figure out what rights we're trying to protect on the one hand, while not stifling innovation on the other. I think from a regulatory standpoint, the EU has started to make some interesting choices around those kinds of lines. That's what the EU AI Act is really about: trying to think about what the uses of AI should look like, what information should be protected, what uses should be prohibited. And that's really thinking about those lines.

Kevin Frazier: Well, balance and seeing things in a nuanced fashion perhaps aren't the typical way to describe policymakers, especially in D.C.

So one other attack on policymakers that commonly gets brought up would be a perceived lack of technical sophistication. I think everyone listening to this podcast is probably familiar with some of the questions that were asked of Mark Zuckerberg, for example, back during the initial hearings on Facebook. You know, 'How do I find a friend? What is a social network?' These sorts of questions. From your experience, do you feel as though maybe the Hill has turned the corner? Are folks getting AI a little bit better, or is it just this sort of binary thinking: good bot, bad bot, protect content creators or destroy the entire internet? Are we seeing more nuance this time around, a little more sophistication?

Alissa Starzak: So I think there have been a lot of changes since those initial hearings many years ago. I think the thing that has happened with AI is that policymakers have recognized the significance of it. So what you saw was this sort of wake-up: we don't want that to happen again, where we missed the world of social media, for example, and never even thought about it from a public policy standpoint.

And so here's this new big thing, AI, let's start to learn about it. Let's really think about what the implications are. And I actually give policymakers a lot of credit for that. I think they have recognized that they missed the ball in some ways on technology. And now they are trying to make up for that with AI.

I think, on the question of whether they've figured it out: it's a hard problem set, and they haven't even quite figured out what they want to regulate --- going back to the other piece --- so I don't know that it's all solved and tied up in a bow yet. But I actually give them some credit for trying to think about the issues more in advance, really trying to dig into the underlying technology in a way that they didn't in previous iterations of technology.

Kevin Frazier: Well, and Cloudflare is at such an interesting point in the internet stack, as it's known. Obviously, we've talked about the plumbing metaphor, the plumbers getting into the weeds of things. Do you all view it as a sort of responsibility to be that neutral provider of reliable information about the internet and cybersecurity, where perhaps other companies might be less transparent?

We won't name names or anything like that, but to what extent is just providing information a key service that some of these private companies can offer policymakers?

Alissa Starzak: I think it's huge. I think it's huge. And actually, the less involved you are in a particular issue, in some ways, the easier it is to give neutral information. I think one of the things that's interesting about Cloudflare is that we are a very technical company. So, what we have found over time is that a lot of people just don't know us, even though they probably use us hundreds of times a day, or they go through our network hundreds of times a day.

I think the benefit that gives us is that we have an ability to talk about how all of those things work in a relatable way. So we can go in and say: hey, you may not understand how the internet works. Let's give you a little bit of a primer. Let's describe how the pieces fit together, why they matter, where regulation might be appropriate, where it isn't, and where regulation will have unintended consequences, for example, on something you actually care about.

You can start to have those conversations if you go through the technology and really just explain it. They're busy people. They know a ton of different things, but there's a good chance they don't know how the Internet works. Not through any fault of theirs, right?

They're doing a lot of other things. I'm a lawyer; I had to learn it too, right? It's not always obvious to people. And so being able to go in and explain it in a way that both makes sense to them and doesn't talk down to them is really important, and I think is really appreciated.

Kevin Frazier: Would you be an advocate for more formal exchanges of that sort of information and expertise? I think one big critique about the current approach to AI legislation on the Hill has been, we had these AI insight forums, but they were behind closed doors. And oftentimes they invited experts who had some pretty dang close ties to the labs themselves.

Is there enough opportunity for exchange between companies like Cloudflare and policymakers to have that increase in overall awareness, or should we have more formal mechanisms to ensure that exchange of information?

Alissa Starzak: I think sometimes informal mechanisms are more informative in the end.

I think formal mechanisms tend to look a little bit like theater. Often when you have a formal hearing, you have something that's set up in advance, someone giving a speech, essentially. Maybe there's a rhetorical question at the end that somebody may or may not answer.

It doesn't always feel like a learning environment, necessarily. I think the key, though, is to make sure that you have exchanges that are both robust and comprehensive in terms of who is involved. So, the concern that you raised is really about how you make sure everybody gets in, that all of the different kinds of voices are heard.

And that's just a sort of general policymaking critique, right? Those are always things that are hard to do.

Kevin Frazier: And we've seen a lot of these unresolved policymaking questions move their way up to One First Street, to the Supreme Court, where we recently had Moody v. NetChoice, for example, and Murthy come down, where, depending on who you ask, they were either excellent opinions, horrible opinions, or somewhere in the middle.

What's your sense of the judicial capacity to take on these really difficult technical questions? If we said maybe we're seeing some improvement on the Hill, on the other side of One First Street we've got the Supreme Court, where Justice Kagan admitted that the justices weren't the nine greatest experts on the internet.

So, should we be concerned about more of these policy questions perhaps finding their way into a judicial setting, which doesn't really have any sort of informal mechanisms for increasing understanding?

Alissa Starzak: I think the interesting thing about the Supreme Court piece is that the Supreme Court isn't making judgments on its own, without other public policy mechanisms, right? All of those cases came because legislation was passed: different policymakers tried to assess something and made choices along the way. So the Supreme Court didn't get involved without anything in front of it; it really got involved after policymakers had gotten involved. And that sort of does make you think that all of these things stack up on one another.

So if you've done a good job with legislative history or background, maybe that will help the courts in the end. Maybe they will actually make better judgments because there's a legislative record about why you came to certain judgments. And hopefully it's accurate from a technology standpoint. I'm not sure that's quite where we are right now, but in an ideal world, that's kind of what you would want. So not that they're making novel, untested arguments about technology, but that they're looking at things based on the record in front of them, a record that is a bit more comprehensive and actually accurate, which, again, goes back to the policymaker problem.

Kevin Frazier: It's a big problem. It's a tough problem. And it's a problem we've unfortunately seen before. I think if you look back to privacy law, for example, including cybersecurity law, a lot of that has been at the state level where we've had this patchwork approach. Are you getting the sense that's going to be the same route we go for AI regulation? Or do you have some optimism that the Hill's going to step up and get into this AI fray?

Alissa Starzak: Well, that is a loaded question.

Kevin Frazier: That's what we do. That's the second podcast name. “Loaded Questions.”

Alissa Starzak: So I think states have gotten involved in regulation for two different reasons. One, they've gotten involved because the federal government hasn't acted.

And two, they've gotten involved because they have views on things that may differ from the federal government or from Congress in general. And I think those are different kinds of problems. I think the challenge with where we are in politics in Congress right now is that you don't see a lot of legislation.

We see it in spurts. But I think the challenge with very complicated legislation is that Congress just can't come to agreement on it right now. And so states are filling the void. I don't see that ending anytime soon. And I don't think we're going to magically solve those problems with AI, unless AI starts writing and passing its own legislation.

So I think we're probably in that world for at least some time.

Kevin Frazier: Looking retrospectively, I think that there was a lot of concern, especially right when California passed its landmark privacy act, the CCPA, that, oh gosh, this is just going to destroy companies. It's going to destroy innovation.

Using hindsight, have you seen this sort of patchwork approach to privacy actually work out okay? Or are we still in desperate need of a federal privacy bill?

Alissa Starzak: We need a federal privacy law. It's embarrassing that we don't have a federal privacy law. Like we should all be embarrassed that we don't have a federal privacy law.

I think almost everybody, including members of Congress, will tell you that it's embarrassing that we don't have a federal privacy law. But I think those are different questions. On the question of whether a patchwork works or not, I think what we've seen is that companies look for consistent practices, so they're not trying to dictate all of their practices based on the narrowest letter of any particular law.

They are looking for the commonality between different requirements. So we as a company --- certainly when GDPR first went into effect, our view was: apply it globally. That will help us address privacy regulations. And certainly from a California perspective, the fact that we were applying GDPR globally helped with our own implementation. I think you're seeing that with the patchwork as well, right? As long as you've met those sorts of fundamental standards, you can often get a long way on consistency. But would it be better to have one piece of federal privacy law that really protected people across all sorts of different issues? Absolutely.

Actually, one of the things that we're seeing right now is that the lack of a federal privacy law is affecting how the executive branch acts as well. We've seen rulemaking over the course of this year that is really, in some ways, a response to not having a federal privacy law.

So there are all of these rules about how we deal with sensitive data. If we had a federal privacy law, we wouldn't need that rulemaking. So again, in that same vein, how do we think about those issues? We really do need to think about what kinds of information we're protecting. The right entity to do that is Congress.

It should be federal. It should apply countrywide. That's not to denigrate any state that's passing a phenomenal privacy law. It's really just to say that's where we should be as a country.

Kevin Frazier: It certainly is now an area that anyone with a regulatory sword seems to be entering.

The FTC may promulgate a massive commercial surveillance rule sooner rather than later. We've seen, as you mentioned, more and more states jump into the fray. So amid this uncertainty, I wonder if you're seeing more reliance on informal mechanisms to adopt these improved or heightened standards.

So one tangible question I'm keen to get your two cents on would be Murthy, where we saw the Supreme Court generally punt, deciding the issue on standing and not diving into the merits of jawboning, the practice of the federal government putting pressure on private companies. Do you think that the case, despite being resolved on standing, will change federal behavior?

Or do you think this is just going to be a little speed bump that we forget about, and the status quo goes back to normal, and angry emails get sent to platforms and all that jazz? Or are folks finally reading the tea leaves correctly, seeing that we need a change in practices?

Alissa Starzak: So I think what we saw is less about the Murthy case and more about the interest in the jawboning question in general, right?

So we saw congressional investigations into jawboning. We saw suggestions that it might be inappropriate, and a lot of pressure put on entities that engage in looking at platforms, for example. I think that has changed the landscape in some pretty significant ways.

I don't think that's the legal question. That's what's so interesting about it, right? I think the reality of what it looks like either to litigate a case or to defend yourself in a congressional investigation has changed the landscape. That pressure, both the optics concern and the costs, has fundamentally reshaped some of the landscape. It's hard to know exactly what will happen from a government perspective, how they will treat next steps --- there is more caution now than there was previously about those kinds of topics, and it certainly brought the issue to the fore.

Kevin Frazier: The other really fascinating development, I think, has been this notion that we need some sort of a Sputnik moment to get meaningful legislation passed on really any question.

Do you buy into that narrative that we just need one good disaster, as silly and crazy as that sounds, to get something across the finish line on the Hill?

Alissa Starzak: That's depressing. I think there's an underlying question about how we get people to rise to the challenge that we have in front of us.

And I think that's a very real question. I really hope it's not a disaster. I don't want that; I think that is not ideal. I think what you want is recognition that we actually have common interests, that people may come from different sides of the aisle, they may have different policy goals in the end, but there are things we should do together that make us stronger as a country, for everyone. That's a little pie in the sky. There's my optimism. Yes, it's going to come back. We're going to have great governance, but that---

Kevin Frazier: It is right after the 4th of July. So some apple pie, great American ideals. We're for it.

Alissa Starzak: I'd need to break out into the Star-Spangled Banner, but nobody wants to hear me sing. So I do think there's something to that, though, right? I think privacy is an interesting example. I actually think there is a lot of consensus around most goals in privacy.

The challenge is that there are a few components that are controversial, where people don't agree. And so how do you make that compromise? We had some interesting developments over the course of the year where people did try to move those forward, recognizing that there are trade-offs, essentially.

Those obviously haven't gone anywhere right now, but the recognition that it is something that could be done, I think, is positive.

Kevin Frazier: And I wonder, wearing your public policy hat, is it hard to disentangle supporting legislation that is being pushed for very different reasons by the two sides?

So I'm thinking of KOSA, for example, the Kids Online Safety Act. Some folks are advocating for it mainly from a protective point of view, of not exposing children to certain content. Other folks are really focusing on the idea of making sure that companies are cleaning up their acts and enforcing their own standards.

How do you disentangle those means and ends arguments when you're thinking about whether to put Cloudflare's public policy support behind a piece of legislation?

Alissa Starzak: So, one, we don't do a lot of formal endorsement of any kind of legislation. I think, from a practical standpoint, from a company standpoint, what we try to do is talk about what effects legislation will have, to just make it real for people.

So, thinking about a piece of legislation: what will the effects be, technically? Do you actually understand what the impact will look like? Have you thought about infrastructure companies? That's often a piece that we raise, right? I think that is all sort of the background of how we think about public policy from a company standpoint.

That's not quite the same as saying, is this good public policy? To be fair, it's more: let's just talk about what it looks like. What would happen in practice? Are you aware of all the implications? How do you think about it? Is this what you're trying to do?

All of those can be real conversations. And as for motivation, look, that's always been the case. People have different motivations putting out legislation. I think the real question is, does the legislation do something positive in the end? Does it actually address a set of concerns that everybody wants to address?

And people may have other motivations as well, but the standard should be the effect of the legislation.

Kevin Frazier: And continuing this idea of finding some sort of common ground, I wonder, are there any cybersecurity or privacy policy areas where you think the narrative is perhaps overly partisan, where in actuality you've found a lot more common support for ideas than the public narrative would suggest?

Alissa Starzak: I think one of the things about cybersecurity is that it hasn't traditionally been partisan, and that is positive, and I'd like to keep it that way. Everybody recognizes that it's important to protect critical infrastructure, for example, and that it's important to protect consumers. Both the business or enterprise side, from a critical infrastructure standpoint, and the individual side need cybersecurity.

And so I think you end up with areas that really just aren't about politics, which is positive. And the more you can keep it in that discussion, where it isn't about an individual piece of content, for example, but really is about securing systems, making sure there aren't unauthorized access points, breaches, or compromises, the better.

It's hard to argue that the compromise of some sort of network, for example, is a partisan issue. It certainly shouldn't be.

Kevin Frazier: And getting to another potential area of consensus, everyone started 2024 talking about the year of elections and how this was going to be just a critical year to make sure that elections were run smoothly and in a secure fashion.

We're now into July. So far, have we made it through? Cloudflare has been very much involved in making sure elections are run smoothly and reducing interference during these elections. How have we done so far?

Alissa Starzak: You know, it's been a huge year for elections. I think people don't fully appreciate how many elections there have been globally.

I think we had something like four billion people voting. I can't remember; maybe I've got my stats wrong. I should probably look that up. But it's an incredible number of people voting; a huge percentage of the world voted this year or is voting this year. I think the reason Cloudflare has gotten involved in elections is that we think we have a role to play in making sure that basic cyber interference isn't a problem, right?

So that looks like cyberattacks. Imagine, for example, an attack on a government website that is telling people where to go vote. We want to make sure that website stays up. It's not very hard to put out an attack that might take it down. If we can help protect it, we think that's an important role we can play for the democratic process.

And we actually have a bunch of programs that help support it. We have a set of free services we provide to government websites that have election information, which includes voter registration websites and a whole bunch of things like that. We also provide a set of free services to nonprofits, some of whom are involved in the voting and democracy space.

So, we see it that way. That might be organizations, for example, that either explain the issues or encourage people to register to vote. Those all might be organizations that fit into that category. We also help protect a bunch of independent journalists who report on elections. So, all of those are part of the broader election ecosystem.

And again, going back to the idea that cybersecurity isn't a partisan issue, all of those need protection. Even on the candidate side, we actually have a program called Cloudflare for Campaigns, where we work with a nonprofit that provides free services to political campaigns, because they also need to be protected.

So, we really try to think about how we can protect the ecosystem from a set of attacks that might influence it, to the extent that we can. We believe we play an important role in that process. As for the question of how we've done so far: it's been pretty good, actually, from a cybersecurity standpoint.

Some of the issues that we've seen have not related to cybersecurity. We had Pakistan turn off the internet for a short period of time during the vote. That wasn't great. We've seen some attacks along the way, also not great. But for the most part, cyber has not been the biggest topic in the elections that we've seen. There have been lots of other big topics, lots of interesting public policy topics, but cyber has not been the primary set of concerns, which is generally a good day for cybersecurity companies.

Kevin Frazier: I think there are some people who would say, oh, it's great that Cloudflare is providing this basic service, providing a sort of equal opportunity to everyone across the entirety of the political spectrum. And then I'm sure you've also received some comments from folks who say, how dare you provide that service to that party, or to that other party? How do you all decide when you can remain neutral?

And when do you feel, given your immense power as the infrastructure of the internet, that you need to go in and say, no, these are our values, or these are the policies we stand behind, and this is where we put our foot down? How do you go about drawing those red lines?

Alissa Starzak: So, in the election space, it's actually not that hard, and this is because we're not actually doing things that are political.

And that's the important piece. I said we work with a nonprofit. The nonprofit has a certain set of published standards for when candidates are eligible, for example. All candidates that meet those standards can be part of its program, and any of the entities or political candidates in the program can sign up for service with us for free.

So there is a predetermined set. It's not us making the choices of who we protect, and it's not us making political donations. It's us providing a set of services to a nonprofit that is then reporting to the Federal Election Commission exactly what those donations to the nonprofit look like.

So that actually doesn't raise those kinds of choices, which is good, I think, because you really do want this set of issues to not be about whether you support a particular candidate. You want it to be the basic infrastructure that everyone has. Kind of like how you wouldn't say to the cable company that a person shouldn't have internet access because you don't like their views.

You want everybody to have a baseline of cybersecurity. And so that's how we think about the election space. The goal really is for it to be consistent, for everyone to have a set of protections. We can fight about policies all we want, and often we should, but that's not the same as depriving someone of baseline cybersecurity protections.

Kevin Frazier: Cloudflare's also got some other schemes related to AI going on. Can you tell us a little bit more about the good uses of AI that we should be attentive to?

Alissa Starzak: So, I think the thing people have to consider is that we often think about closed AI systems. You think about ChatGPT: a company owns the ChatGPT model, and you just have access to it.

If we're really going to think about the potential of AI in the long run, we want an innovation system that enables people to build with AI. So one of the things Cloudflare has done is think about how we could use our very large network to help developers work with AI.

So you can take an open model and actually build AI, using our large network to do what we call inference at the edge, right? The idea is getting information into an AI system really close to a user and sending information back, without having to go all the way back to where the model was trained.

That's something you can do really close to a user, and so we can use our large network for that. We have a whole platform designed to help a developer who wants to build with AI. They can pull an open-source AI model and build on top of it.

And then they can deploy it globally, on all of our equipment around the world. I think, long term, that will be how AI systems end up working. It's the ability to do inference from everywhere, as opposed to just where the model was trained, that will have a pretty significant impact on AI in the world.

And hopefully that sort of innovation network will expand to lots of different new, interesting products.
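For readers curious what inference at the edge looks like in practice, here is a minimal sketch in the style of a Cloudflare Worker with a Workers AI binding. The binding name, model identifier, and request shape are illustrative assumptions rather than a verbatim reproduction of Cloudflare's API; consult the platform documentation for current details.

```typescript
// Minimal sketch: an edge function that runs an open model close to the user.
// The "AI" binding and model name below are illustrative assumptions.

export interface Env {
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { prompt } = (await request.json()) as { prompt: string };

    // The model runs in the data center nearest the user, so the response
    // never has to travel back to wherever the model was originally trained.
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt });

    return Response.json(result);
  },
};
```

The design point Starzak describes is visible in the shape of the code: the developer writes one handler, and the platform deploys it to every location on the network, so inference happens wherever the request arrives.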

Kevin Frazier: Yeah, this is really encouraging because I think one of the hangups I've had with AI is that we often talk about AI in the ideal sense that, oh, it'll cure cancer one day, or it's going to provide tutors to folks in low-income communities.

But this to me seems like a really tangible, positive use case, where we can see folks who might otherwise not have access to this sort of tool benefit from Cloudflare making this unique resource available. Can you speak at all to the use cases you've seen, or where, geographically, those use cases have been located?

Is it just North America or perhaps have we seen uses around the world?

Alissa Starzak: No, uses around the world. When we think about those use cases, that inference piece, the idea is that you want an answer that comes back really quickly, right? Whatever you're doing, you want the AI model making that sort of quick snap decision that AI can make, producing output in a way that's real time for a user.

If you have to go all the way back to wherever that original training model is located, it's going to be slow. It's not going to feel real time. Imagine a chatbot, right? A chatbot is talking back and forth. You want that to feel seamless. You don't want it to feel like there's a significant amount of latency.

So think about how you build models like that, where the user is interacting with something that's actually relatively close to them but still has the benefits of AI. That's really what long-term AI inference will look like. It's the idea that you can do something on the edge of a network like ours, all around the world, that will make it work in practice.

Kevin Frazier: You know better than I do, but I think it's so important to stress that the difference in latency between being at the edge and having to go somewhere else is huge and actually matters, even though we always think about the internet moving somewhat close to the speed of light. There are small differences in the amount of time it takes to retrieve an answer or retrieve information.

And that can be all the difference in whether or not someone wants to use a certain product or go back to a certain web page or use that bot again. So this is really interesting and I'm definitely keen to follow it.

Alissa Starzak: Yeah, no, it's a huge issue, and you're absolutely right. I think people think of the speed of light as the fastest thing there is, but it's limiting: you can only travel at the speed of light. And when you're limited to the speed of light, the way you actually reduce the time is to bring content closer, or to bring a decision closer.

And so that is how you ultimately speed things up. And you're absolutely right, it does make a difference. It makes the interaction fundamentally different if there's no latency. And everyone experiences it, I think, talking to a chatbot, where a long delay or even a pause feels weird.
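A back-of-the-envelope calculation makes the point. Light in optical fiber covers roughly 200 kilometers per millisecond (about two-thirds of its vacuum speed); the distances below are illustrative round numbers, not measurements.

```typescript
// Propagation delay only: light in fiber covers ~200,000 km per second,
// i.e., roughly 200 km per millisecond.
const FIBER_KM_PER_MS = 200;

function roundTripMs(distanceKm: number): number {
  return (2 * distanceKm) / FIBER_KM_PER_MS;
}

// A distant origin ~10,000 km away vs. an edge node ~100 km away.
console.log(roundTripMs(10_000)); // 100 ms per round trip, before any compute
console.log(roundTripMs(100));    // 1 ms: effectively imperceptible
```

A chat interaction involves many round trips, so that 100-to-1 gap compounds quickly, which is why moving the decision to the edge changes how the interaction feels.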

Kevin Frazier: Well, and I'd be remiss if I had you on this podcast and we didn't talk about undersea cables. Listeners unfortunately know that I am a bit of an undersea cable nerd. So tell me, what's the state of the undersea cable network right now? What, is there one thing keeping you up at night about the threats to undersea cables?

Alissa Starzak: So I think the thing that people often miss on undersea cables, and the thing that we often focus on at Cloudflare, is the idea of resilience, right? In a lot of ways, what you care about with undersea cables is having more of them. You want a lot of different paths, and this is true of the internet in general, right?

Again, this is my point about not talking enough about how the internet works, but the internet is a network of networks, right? That's what we say. And that had no meaning to me as a lawyer. It was like, what are you talking about? It's crazy. But it's the idea that you can find millions of different paths.

It's like a road network across the world. You can find all of these different paths that will get you to the same place, and if you have a good, functioning internet, they will get you to the right place in the fastest way. That's the goal you want for the internet: lots and lots of different paths across all sorts of different places, so you can get to different kinds of content and it can all happen at the speed of light, right?

That's the concept behind the internet. And we have this amazing system, which we don't talk about enough, of how that actually happens, right? We have these cooperative systems where everybody has agreed upon a set of protocols, and they exchange traffic with each other, and it just works.

People think they're going to turn on their WiFi and it's going to work, and magically it does. But we don't often think about how and why it might break. And undersea cables are an important part of that, because the more undersea cables you have, the more chances there are that there are paths to other places in the world.

And so a lot of what we think about is the resilience side of that. How do we ensure that we have enough cables, and resilient enough systems, that they're not going to get disrupted? That really is the piece of the undersea cable question that I end up focusing on quite a bit.
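The "many paths" idea can be sketched as a toy model. This is purely illustrative: real internet routing relies on protocols like BGP and on reachability and policy, not a single latency number, but the sketch captures why path diversity turns a cable cut into a slowdown rather than an outage.

```typescript
// Toy model of path diversity: more independent paths means a cable cut
// degrades latency instead of severing connectivity entirely.

interface Path {
  name: string;
  latencyMs: number;
  up: boolean;
}

// Of the paths still up, prefer the one with the lowest latency.
function bestPath(paths: Path[]): Path | undefined {
  return paths.filter((p) => p.up).sort((a, b) => a.latencyMs - b.latencyMs)[0];
}

const routes: Path[] = [
  { name: "cable-A (direct)", latencyMs: 80, up: true },
  { name: "cable-B (via hub)", latencyMs: 110, up: true },
  { name: "cable-C (long way around)", latencyMs: 190, up: true },
];

console.log(bestPath(routes)?.name); // "cable-A (direct)"

// Simulate a cable cut: traffic still flows, just a bit slower.
routes[0].up = false;
console.log(bestPath(routes)?.name); // "cable-B (via hub)"
```

With only one route in the list, the same cut would leave `bestPath` returning nothing at all, which is the resilience argument for laying more cables.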

Kevin Frazier: The resilience question is a huge one. For those who don't know, undersea cables are about the width of a garden hose and even tiny little fishing nets or little anchors that get dropped on them can, boom, snap a couple of them in one fell swoop. So this resilience issue is huge and I'm so glad I'm not the only one talking about it. And one day we will just nerd out exclusively about undersea cables. But I think we'll go ahead and end it there.

Alissa Starzak: Great, well thank you. Thank you for having me on.

Kevin Frazier: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Kevin Frazier is an Assistant Professor at St. Thomas University College of Law and Senior Research Fellow in the Constitutional Studies Program at the University of Texas at Austin. He is writing for Lawfare as a Tarbell Fellow.
Alissa Starzak is the head of public policy at Cloudflare, a company that provides key components of the infrastructure that helps websites stay online.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.