
Lawfare Daily: Securing Open Source Software, with John Speed Meyers & Paul Gibert

Eugenia Lostri, John Speed Meyers, Paul Gibert, Jen Patja
Monday, September 9, 2024, 8:00 AM
What can open-source software compromises result in? 

Published by The Lawfare Institute
in Cooperation With
Brookings

Lawfare Fellow in Technology Policy and Law Eugenia Lostri sits down with John Speed Meyers, head of Chainguard Labs, and Paul Gibert, a research scientist at Chainguard Labs, to talk about the distinct challenges of securing open source software (OSS). They discuss what sorts of harms OSS compromises can lead to, how Log4j opened a political window for action on OSS security, and how the software liability debate affects OSS developers.

Meyers and Gibert authored a Lawfare article questioning the conventional wisdom on how software liability could deal with OSS.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

Please note that the transcript below was auto-generated and may contain errors.

 

Transcript

[Introduction]

John Speed Meyers: It's not because open source software maintainers are doing anything quote, wrong. When software developers write software, the only question is how often you produce software vulnerabilities, not if you do.

Eugenia Lostri: It's the Lawfare Podcast. I'm Eugenia Lostri, Lawfare's Fellow in Technology Policy and Law, with John Speed Meyers, head of Chainguard Labs, and Paul Gibert, a research scientist also at Chainguard Labs.

Paul Gibert: Maybe it's as simple as some more people have to get got, and some more executives have to reach into their pockets to respond to incidents of them getting hacked, before we start seeing change.

Eugenia Lostri: Today, we're talking about the distinct challenges of securing open source software.

[Main Podcast]

So, let's start with the basics: why is open source software security something that we need to be talking about?

John Speed Meyers: I think what has happened is that open source software has become extremely prevalent. When a software developer, any software developer, not an open source developer specifically, but just one at a company, sits down to write new code, more than likely the first thing they reach for is the open source software components that they're going to build on top of, put inside their application. There's industry research that often suggests that 80 percent of a modern software application is actually open source code created by volunteers, not at your company. I did some research with a couple of others that suggested that 80 percent is actually conservative. I think for small software applications, it's probably more like 99 percent. And what that means is that all the problems that can show up in any piece of code, be that malicious code added to it or unintentional vulnerabilities, show up in modern software applications and modern software, because open source code is in there. This is not a statement against open source software developers. All code has flaws. All code can be corrupted. So that is the root of the problem. It's extremely prevalent now. And like all software, there are flaws, unintentional and sometimes malicious.
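
To make that prevalence concrete, here is a minimal sketch in Python, one rough way to eyeball the point rather than a rigorous measurement: it lists the third-party distributions installed in the current environment, a loose proxy for how much of an application is code its developers did not write.

```python
# Rough illustration: enumerate the installed third-party (open source)
# distributions in the current Python environment. The open source share of
# a real application is usually far larger once transitive dependencies
# and the interpreter itself are counted.
from importlib.metadata import distributions

names = sorted({d.metadata["Name"] for d in distributions() if d.metadata["Name"]})
print(f"{len(names)} installed distributions, for example:")
for name in names[:10]:
    print(" -", name)
```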

Paul Gibert: I think all I would add there, John Speed, is that it's a problem very distinct from what we've seen in classical supply chains for physical products, for example. It's managed by an unbelievable number of people, these communities are growing more and more every day, and we really don't have a lot of visibility into the problem. We don't know if people have been exploiting vulnerabilities in the supply chain for decades, simply because there's just so much code we would have to go through to understand what's in there. And like John Speed said, companies are bringing more and more of this stuff into their production pipelines and, quite frankly, we don't understand it all. We can't see it all.

Eugenia Lostri: So, you're hinting at what I'm about to ask, but let's try and maybe make it a little bit more concrete. In recent years, we've been talking a lot about how to secure software, right? We have security by design. There's a conversation about software liability that we're going to get into later. But what are some of the unique challenges that thinking about security for open source software presents, especially when we compare it to how we talk about proprietary software?

John Speed Meyers: Yeah, I think an important part is this legal dimension: it's about the liability. When you, as a legal entity, use another thing in your supply chain, often there's some sort of contractual relationship. If you're a car manufacturer building a car and you purchase tires and the tires are faulty, at this point you can hold the manufacturer of the tires liable. Well, the open source licenses, by design, really disclaim liability. They say, hey, you use this at your own risk. We, the developers of the software, make no claim, absolutely none.

And so what has happened is that lots of companies are now using open source software and realizing that this relationship they have is not a contractual one. It's really not a supplier in the traditional manufacturing sense; it's true, they have simply found a component to use. And so this lack of a contractual relationship has been creating a number of problems. There are technical computer security problems that are special too, but for me, that's really where I start.

Paul Gibert: Yeah, the barrier to entry to producing artifacts in the supply chain is also pretty low, compared to other supply chains we've seen. For example, I don't think there are a lot of people who have the technical ability or the resources to create the screws that hold bridges up. But, I mean, anyone can open their laptop and contribute to an open source project. Not to mention the skill sets associated with that have become very valuable recently, and we're seeing more people go after data science and computer science degrees in college, and some people skipping college altogether just to do a coding boot camp so they can learn how to write software. A lot of people can contribute to the open source community. That's why it's so powerful, but at the same time, it's also why it's so dangerous.

Eugenia Lostri: So the ecosystem for open source software is quite vast, right? So maybe just go through who are the actors that are most relevant here? What are the categories that we should keep in mind before we dive into some of these more technical and legal challenges?

John Speed Meyers: Oh, gosh, this question is terrifying. I'm going to mention a couple and I'll pass it to Paul. There's no way, on the spot, that we could even come up with the appropriate categories, but I'll mention a couple that stand out in my head. First, there's some home where source code lives, where the actual code is, and this is a place like GitHub or GitLab. "Repositories" is the term you often hear: some set of files, usually of code. And there's often documentation with it, and then some sort of community that forms around it. But if you go to github.com slash and then fill in the blanks, you'll find all these open source repositories, and this is really a key site of open source software, where the code actually is.

But I'll mention one other thing that's interesting: a lot of the code that is now on our computers or on servers somewhere doesn't necessarily come directly from the source code. It needs to be turned into something, you often hear the term binary, some sort of executable thing, and packaged somewhere. So there are all these, quote, package ecosystems, too. For instance, if you are a Python programmer, you might actually get your code from the so-called Python Package Index, or PyPI, or if you use a Linux distribution, there might be a set of Linux packages associated with, for instance, Debian or Ubuntu. So at least two places I thought I'd call out are the source code management systems like GitHub and these package ecosystems.
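
For a feel of what a package ecosystem exposes, here is a hedged sketch in Python: it queries PyPI's public JSON API for a package's published metadata, including the dependencies it declares, the layer that sits between a source repository and the code that actually lands on a machine. It assumes the open source requests library is installed.

```python
import requests  # assumes the open source `requests` library is available

# PyPI publishes metadata for every package at /pypi/<name>/json.
resp = requests.get("https://pypi.org/pypi/requests/json", timeout=10)
resp.raise_for_status()
info = resp.json()["info"]
print(info["name"], info["version"])
# requires_dist lists the dependencies this package declares; it may be null.
for dep in info.get("requires_dist") or []:
    print("  declares dependency:", dep)
```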

Paul Gibert: I guess the way I've always framed this in my mind is you have the people writing source code. You have the people who convert source code into various usable forms that companies can use. You have companies who are converting it into a product. And then you have the end user. And so that's the chain that I've always visualized in my head when it came to this ecosystem.

Eugenia Lostri: Yeah, I think that makes sense. Now let's look at some of the maybe most noteworthy examples of open source software incidents. I'm thinking Log4j, I'm thinking the XZ backdoor. There was the recent Stargazers Ghost Network. What are the types of harms that we're talking about here? Like, when we're talking about making sure that this software is secure, what are we trying to prevent? What can happen when the software is not secure and it just gets incorporated into these packages?

John Speed Meyers: Yeah, I'll mention one broad type, and I'll pass it to Paul for another. One broad type I'll talk about is when there are unintentional vulnerabilities introduced by developers. It's not a malicious action, it's simply an accident or a series of accidents. And Paul, who's actually been doing some research on malicious software detection recently, can talk about malicious open source software. There might be more types too, but in my head, that's where I start. And Log4j, really the vulnerability is called Log4Shell. In the winter of 2021, December 2021, there was a widespread and relatively easily exploitable vulnerability discovered and announced in a popular Java logging library. It's really just a little component you put in your Java application that helps you log information; it's a useful developer tool. Turns out there was a very serious vulnerability that could be exploited relatively easily, and it was in many, many, many software applications all around the world. This wasn't malicious. That flaw had been there for a little while and was honestly a series of mistakes, oversights really, by well-intentioned software developers. When it was discovered and announced, they fixed it quite quickly, in fact.

So there was no malicious action, but it did mean that users who had these vulnerable versions of Log4j in their applications were potentially, depending on how it was configured, vulnerable to a severe compromise. And Log4j is in some sense special because it's so severe, so easily exploited, so widespread. But that's just the very top; you go down a little and there's a whole ton of these that happen every day. Literally every year there are thousands of new vulnerabilities, I'm not even sure of the exact number, in open source software projects. And it's not because open source software maintainers are doing anything, quote, wrong. When software developers write software, the only question is how often you produce software vulnerabilities, not if you do. So that's really what's going on. There's so much open source code, so many projects, some of them evolving rapidly, that inevitably there are unintentional vulnerabilities. But there's a whole other category of malicious open source software.
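
As an aside on scale: known open source vulnerabilities like Log4Shell are catalogued in public databases such as OSV.dev. A hedged Python sketch of one way to query it, assuming the requests library is installed and using the query endpoint as documented by the OSV project:

```python
import requests  # assumes `requests` is installed

# Ask OSV which known vulnerabilities affect Log4j 2.14.1, one of the
# versions hit by Log4Shell (CVE-2021-44228).
query = {
    "package": {"name": "org.apache.logging.log4j:log4j-core", "ecosystem": "Maven"},
    "version": "2.14.1",
}
resp = requests.post("https://api.osv.dev/v1/query", json=query, timeout=10)
resp.raise_for_status()
for vuln in resp.json().get("vulns", []):
    print(vuln["id"], "-", vuln.get("summary", "no summary"))
```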

Paul Gibert: There are other examples that are really interesting in addition to Log4j. The one that caught my attention most recently was the XZ Utils backdoor. But before I dive into that specific example, there is a paper, the title of which escapes me right now, that discusses the fundamental risks, the things that can happen to you if you do not secure an application. We call it the CIA triad: C for confidentiality, I for integrity, A for availability, the three bad things that can happen to you if you get got.

I would say confidentiality is pretty much: I have sensitive information that is no longer sensitive because of the security flaw. So medical records, for example, or my Social Security number. Integrity: I think our society relies a lot on data being accurate, anything from what I'm going to decide to wear based off the weather report I read on my phone, or traffic updates, or more important things that have to do with safety. The last one is availability, and that is that the systems we depend upon every day, that have to be up all of the time, are up and running and not disrupted. And so when we talk about what can happen when you compromise open source software, we're talking about something in one of those three categories getting impacted.

Eugenia Lostri: Okay, so, but you did mention an incident that you wanted to look more into. So maybe talk us through that a little bit, and explain to us how one of these categories in the triad was affected.

Paul Gibert: So the XZ Utils attack targeted a very fundamental open source project that exists in virtually everything, honestly. And it affects the first category, the C, the confidentiality. XZ Utils exists in one of the most common communication libraries used on the internet: OpenSSH. When it comes to talking with a server anywhere in the world, chances are that communication is via OpenSSH.

And what these attackers did was inject an ability for them to bypass the security of that communication, so that they did not have to identify themselves in order to start talking to the system. And that means anything using OpenSSH that would only want certain people to be able to talk to the system could now allow these attackers in. And it would fail to keep any data that server had private, which could have a range of implications, everything from getting into how a company manages its data to classified information behind a government server. Really bad. And it was caught early, luckily, because someone just happened to be looking in the right place and spotted the backdoor. The question is, how often do people try to sneak these backdoors into open source software, and is information we think is private really not?
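
For context, the releases publicly reported as backdoored were XZ Utils 5.6.0 and 5.6.1. A hedged, illustration-only sketch in Python of the simplest possible check; the real backdoor lived in the liblzma library and activated only under specific conditions, so a version check like this is a starting point, not an audit. It assumes the xz binary is on the PATH.

```python
import re
import subprocess

# `xz --version` prints something like "xz (XZ Utils) 5.4.5"; parse the version.
out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
match = re.search(r"(\d+\.\d+\.\d+)", out)
version = match.group(1) if match else "unknown"

# 5.6.0 and 5.6.1 are the release versions publicly reported to carry the backdoor.
if version in {"5.6.0", "5.6.1"}:
    print(f"xz {version}: one of the releases reported to contain the backdoor")
else:
    print(f"xz {version}: not one of the reported backdoored releases")
```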

Eugenia Lostri: Since we're talking about malicious activity on open source, I would really be interested in hearing your thoughts on open source software as a vehicle for foreign malicious activity. You know, how many of these situations can we blame on foreign actors who are trying to, I don't know, undermine the U.S., or whatever the claim is nowadays? Is that an actual concern for the open source software community? Is that more of a national security concern that just spans all of it? Do you see it as a problem?

John Speed Meyers: Yeah, I have an opinion on this, some of it written in Lawfare, in fact. A couple of years ago, I wrote a piece with a few co-authors called “Should Uncle Sam Worry About Foreign Open Source Software?” I'll give you the too-long-didn't-read version, which is: we're skeptical, but it's hard to say. And that's because at the time, a few of us in that group of authors had built a data set of known software supply chain compromises, including open source software supply chain compromises. There's no indication that knowing the nationality of the developers associated with those, really the GitHub profiles or other sorts of accounts associated with open source software, would have given critical information that would have allowed one to detect, oh, that's a malicious North Korean software developer.

But I think a more neutral answer, slightly less opinionated, is that open source software isn't, as a system, currently designed to provide information about the nationality of the persons involved. Like Paul alluded to, it's not a system where there's strong identity. There's not a tradesman or -woman who signed off on a particular bolt, with some record somewhere that says it met these criteria. It's really much more free-flowing than that. It's not, quote, high identity. So I do think it is certainly possible that North Korea, or you-name-it threat actor, is using open source software to distribute code and cause harm. To the best of my knowledge, it's unclear still, at least in public reporting, who or what set of people or institutions was behind XZ Utils. So it is a concern. When I think about the security of open source software, though, I still put this foreign angle as a tertiary concern at best. But there are some people who have strongly disagreed with me on this before, and so I'll at least concede that.

Paul Gibert: I can say that defense contractors to the United States government definitely care about this. I used to work for one right after college. And when they were training me on how to develop for the government, they would not let me use open source code, and it was miserable, because they'd say, go build this. And then I'd build it and be like, hey, look at this awesome thing I built, and they'd say, oh, well, you used the requests library in Python. A fundamental, fundamental library to all Python developers out there. Imagine having to develop an internet-facing Python application without being able to use requests. I wouldn't even know how to do that; I didn't, until they told me how to do it. And one of the reasons I ended up leaving this contractor is I decided that I didn't want to work in this type of environment where I had to give up all my access to common software projects that I'd otherwise get to benefit from. So should we worry? Maybe, maybe not, but I do know at least the contractor I worked for thinks about it all the time. It takes extreme precautions to make sure that no one's messing with the supply chain. Dysfunctional precautions, in my opinion.
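
For a sense of why losing requests stings, compare the same HTTP GET written with the open source library and with only Python's standard library. A minimal sketch; the URL is just a placeholder.

```python
# With the open source `requests` library: two lines.
import requests

r = requests.get("https://example.com", timeout=10)
print(r.status_code, r.headers.get("Content-Type"))

# Roughly the same call using only the standard library:
from urllib.request import urlopen

with urlopen("https://example.com", timeout=10) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```

The gap widens quickly once you need sessions, retries, connection pooling, or multipart uploads, which is the kind of thing requests handles for you.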

Eugenia Lostri: Okay. Yeah. I was going to ask, how's that working out for them? Is this actually preventing incidents, or?

Paul Gibert: I wouldn't know, to be honest. I worked for them for about two years, so I wouldn't say I have extensive experience in what they produce in their contracts. But I do remember that example from training very well, them announcing that I couldn't use that, and it was just like, I can't build it now.

Eugenia Lostri: That's fair. Okay, John Speed, speaking of things you've written at Lawfare, I want to bring us back to another article that you wrote, this one in November 2022. There you argued that Log4j, and I'm quoting you, "opened the political window" for action related to open source software security. And you connected that window to the passing of the Securing Open Source Software Act, or SOSSA. So, tell us a little bit about what the act was all about, and why you saw Log4j as opening the window for it.

John Speed Meyers: Yeah, and unfortunately it wasn't passed, to my knowledge. It was simply proposed, made part of the political discourse. I think elements of it have, in fact, trickled into action. But okay, let me start. I find this episode interesting and revealing: when Log4j happened, like I mentioned, it's this massive, widespread vulnerability that was actually quickly fixed by the volunteers associated with the project, and the fix was made available. And then afterwards, there were, understandably, calls for action, a wide range of calls, meetings at the White House, and others, to do something about open source software security.

What I found interesting is that, and you can even notice it in the discourse now, a lot of those calls actually don't relate back to Log4j at all, at least not directly, in the sense that the problem with Log4j was an unintentional flaw in a Java library that was quickly fixed, in fact. And then there are a million calls for all sorts of things related to open source software security. I actually welcomed many of them, but I really don't think they're closely connected to Log4j. That's why I say it opened the political window. It made it thinkable that open source software security is actually a homeland security or national security or even international security problem. And the Securing Open Source Software Act was really a range of methods to, I would say, start to improve both the U.S. government's open source software security posture and that of the critical infrastructure across the wide set of sectors that underpin American society. And it ran the gamut, from developing tools and metrics to measure the health of open source software security, to doing a census of the open source software libraries and dependencies found in these contexts, government contexts and critical infrastructure contexts. So, like I said, I really do think there's been an explosion of interest in this area post-Log4j, and I think it's very healthy. But I have found it interesting that a lot of these efforts don't actually address the problem of Log4j; they rightly shine light on other hitherto ignored problems, which I think is welcome.

Eugenia Lostri: So would you say that this window is still open? I mean, even if SOSSA has not passed, what other types of activities have you seen that would indicate that this remains a priority, or at least a concern that is meriting a lot of effort and resources being poured into it?

John Speed Meyers: Yeah, I think there still is. In fact, I think the XZ Utils near-miss has widened the door again. It's interesting; it's totally different, at least conceptually. It's a potential malicious compromise, whereas Log4Shell was an unintentional vulnerability. But I think it shined a light on the dependence of modern society and software on open source software and its security. And I think there are some strong and interesting signs.

There's this so-called Open-Source Software Security Initiative, you'll sometimes see the acronym OS3I, which appears to be an Office of the National Cyber Director-led initiative, though many other agencies are involved too, related to open source software security for the government and, really, more broadly for society. That initiative recently released a short summary of its activities, really a response to a request for information. And it highlights, I didn't count specifically, but I'll say a couple dozen different initiatives across the federal government related to open source software security, and it definitely shows that it's not just Log4j or XZ Utils. There's something in the water right now; lots of people are interested in this topic. So I definitely detect positive momentum.

Eugenia Lostri: I want to focus a little bit more on this RFI on open source software security. I found it quite interesting, and I'm wondering, what are your thoughts on its findings about future areas of work and what it tells us about what needs to be done?

John Speed Meyers: Yeah, I'll just mention a couple of the areas. It's really quite rich, all the different activities described in there, and the recommendations. I'm not an expert on many of them, so I'll mention a couple of things that stand out. There definitely was, understandably, an emphasis on moving to, quote, memory safe programming languages. There are classes of vulnerabilities in, quote, memory unsafe programming languages, usually C and C++, that arguably make it relatively easy to write vulnerable code. And there are some languages that are, quote, memory safe, that avoid these. The one that often gets mentioned, though there are others, is the programming language called Rust.

And there are some interesting projects, especially interesting research projects, I think sponsored by DARPA, the Defense Advanced Research Projects Agency, that look at trying to make it easier to either create Rust code from C code or otherwise ease the adoption of Rust. There's a much smaller developer base of Rust developers than there are C developers. So I think that's one interesting angle: there's this major emphasis on creating memory safe code and making it widespread. Which is interesting because, once again, it's different than Log4j; memory safety would not have prevented that one. It's different from XZ Utils. But it is, in fact, important. There are a lot of memory unsafety vulnerabilities out there.

Eugenia Lostri: So, we talked earlier about this vast ecosystem of open source software developers. They're individuals, they're just doing their thing; they're not really concerned, or not all of them are concerned, about the security outcomes of the software they're developing. So what are the incentives that can be offered to them to start thinking about changing the language in which they're coding, or translating it, or taking into account these measures the U.S. government is trying to incentivize? Is there a particular carrot?

John Speed Meyers: So I'll say, generously, that memory safety advocates are certainly aware of this. They know that open source projects often form around a programming language; it's not just a shared interest in a topic. They have some knowledge about some topic, and they often have some shared programming expertise in a particular language. And so whether by government fiat, or by a lot of companies putting up their hands and praying, it's not an easy ask; it's not even realistic to say, simply switch to a new programming language, please. It's not. I think they would all concede that.

I truly think it's an area of active research, I'll say social and economic research, about what it would take to make these sorts of changes: how do you gradually introduce memory safety to at least some of the most important components, which components should be converted, and how do you create a healthy community once you do that? Really, to the credit of people who are interested in this, they are actively figuring that out. I have, in fact, discussed research projects on exactly this matter with a colleague; I'm a nonresident fellow at the Atlantic Council. How do you know, when you switch a project or some small part of it to Rust, that it will really last and survive? Because you need the people to maintain it. And for me, it's all open territory.

Eugenia Lostri: Okay. So if we don't know what the carrot is, is this where the stick of software liability comes into play?

John Speed Meyers: I think it's possible. I think it very likely switches the discussion from open source maintainers as the unit of analysis to companies, the entities that are making money on it, that would likely be held legally responsible. And I do think at that point you can start to imagine how liability would penalize entities that build software in memory unsafe languages whose users then suffer from the vulnerabilities that come from that. It is thinkable. It's a little hard to imagine, but it is definitely thinkable, and I think in the long arc of history that will be the trajectory.

Eugenia Lostri: Okay. Well, we're going to get back to software liability later, because you guys wrote a great article on this, and I want to explore the arguments that you make. But let's maybe get back to this RFI and the findings that you found to be the most interesting.

John Speed Meyers: Yeah, so we talked about the memory safety ones. I think the other interesting thing, I saw a note in there. There has been discussion within the federal government, or at least among people adjacent to it, for a couple of years about when there will be an open source program office in the federal government. This is a common idea that you see in major companies: some central office, some central group of people at a company that sets overall direction for, really, open source software consumption, licensing, and contributions. It's an emerging type of function within modern companies. And Health and Human Services is, according to this RFI response, going to set up an open source program office. I think this will be an interesting development and could lead to other agencies throughout the federal government doing something similar. There are potentially positive benefits to that, at least: having more well-informed, mature conversations, knowledge, and policies at these agencies about their open source software consumption, both allowing their staff to contribute back to open source, benefiting their own agency and mission, and also about how to safely consume it.

Eugenia Lostri: Paul, did you have any findings that you found particularly interesting, or something that you see affecting the trajectory of open source software security?

Paul Gibert: I would love to see where the push for memory safe language usage came from. I mean, I know a lot about memory safety because I come from a low-level programming background, and I just don't see why people have latched onto this one. Is it because there are a lot of vulnerabilities out there that appear in memory unsafe languages, like use-after-free and buffer overflows? Is that where this is coming from?

John Speed Meyers: That's exactly right. I mean, the basic statistics you'll see, and Paul and I have discussed this. Paul is representing a very important divide too: these memory unsafe programming languages, which is not how C programmers would describe the language they write in, are extremely economically crucial to society. So much code is written in C and C++. There's a huge developer base. There are a lot of applications written in them.

And when you see memory safety advocates write, often in the first paragraph or two, and you can see this in a great recent Lawfare white paper on Rust, it's a really nice piece, I urge listeners to check it out, you often see citations from both Google and Microsoft: when they do their own after-action reviews of vulnerabilities showing up in their software, at least in some critical pieces of it, I'll say the number, 80 percent of those vulnerabilities can be linked to memory safety issues. So the thinking is: if these are so prevalent, if this class is showing up over and over, why not take some action, like using Rust, that eliminates whole classes of vulnerabilities? I think even these advocates will concede that Paul is right, though, that it's a very long-term vision, and there are definitely some C supporters out there who are unhappy about the way it's been constructed and framed.
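
As a small illustration of the distinction being drawn here, a sketch in Python, itself a memory safe language, where an out-of-bounds write is caught at runtime rather than silently corrupting whatever sits next to the buffer:

```python
# In a memory safe language, an out-of-bounds access fails loudly instead of
# silently reading or corrupting adjacent memory, the root of classic bugs
# like buffer overflows.
buffer = [0] * 8
try:
    buffer[8] = 42  # one past the end, a classic off-by-one
except IndexError as exc:
    print("caught at runtime:", exc)
# In C, the equivalent write (buffer[8] = 42 on an 8-element array) is
# undefined behavior and may overwrite neighboring data unnoticed.
```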

Paul Gibert: It's difficult to say that C is a perfect language, because it really isn't. It's a very messy language. It has several questionable syntactical decisions, and it's also very old. It was designed in a time when we were still trying to figure out how humans should talk to computers and communicate instructions. And there have been languages since, Rust and Go, arguably Python, that have probably done a better job of solving the problem between the human intending the computer to do something and something else ending up happening. That said, C is very fundamental, as John Speed just pointed out.

Python, for example, is the industry standard for data science. If you're building a model for anything, you're probably doing it in Python, and under the hood, Python is probably executing some C code, just because at the end of the day, computers are very capable devices. To really control the CPU to the level you need to for some applications, for example, running very hardware-intensive models, you need the freedom that a language like C gives you. And when you have that freedom, yes, you can mess up and do some dangerous things that maybe a high-level language might have been able to prevent. But I think you're going to have a hard time convincing me that there would never be a use case for that fine-grained control over the CPU that some of those lower-level, non-memory-safe languages give you.

Eugenia Lostri: Okay. So we have a divide there. So where would you focus your attention if not memory safe languages?

Paul Gibert: It's hard to pick out just one.

John Speed Meyers: Yeah, I think that's what's so interesting about this area: it really is a grab bag of problems. For me, talking about open source software security in the future is like talking about housing or health. If you were to ask a doctor, what's the one medical problem that, if you fixed it, would make health better? It's kind of understood that whatever they're going to say is parochial.

Eugenia Lostri: This is my prerogative as a podcast host. I get to ask. Pick one.

John Speed Meyers: Yeah, pick one. A cardiologist says, let me tell you about this heart-related disease, and you ask someone else and get a different answer. I'm worried that Paul and I would be guilty of this too if we were to pick one, but I'll mention something I think is interesting from Log4j and Log4Shell: is the problem that there was a vulnerability in this crucial open source software project? Or is the problem really that so many software consumers, really producers of software, companies that are integrating Log4j into their products, can continue to use vulnerable versions of Log4j and create risk for themselves and their users? Is that really the problem?

I think there are inevitably going to be vulnerabilities in software, even when we all use memory safe languages one day. The question for me is: how acceptable, as a society, do we think it is for that risk to exist? Right now, these companies can use vulnerable versions, and maybe they have a breach that causes some problems for them. But for the most part, there's an ability to ship software with insecure components and not face consequences.

Eugenia Lostri: Okay, so this really does bring us back to the software liability debate that I've been promising we were going to talk about. As I think I've said many, many times in different podcasts, I've been focusing on this a lot. The National Cybersecurity Strategy from the Biden-Harris administration really puts forth this idea of shifting where the risk lies, right? And it goes to what you're saying, John Speed, where these companies are putting products out there that are not safe. They're taking work from these developers and they're just like, oh, let's see what happens. And those who bear the burden of those security outcomes are the users. And this idea of software liability is supposed to change that, because suddenly, if it ever happens, you are going to have to answer for that security outcome, because you should have known better, to put it very plainly. So what wrinkles does open source software add to this debate around software liability?

John Speed Meyers: Yeah, I'll just mention a couple, but I think there are potentially a lot. And Lawfare, to its credit, has done a great job at airing a series of debates and finding, I think, some different positions on this. But going back to the key legal issue with open source software: the licenses say the software is to be used as is. These open source projects are making no guarantee about the safety or security or anything else about these projects. And so, by doing that, the legal responsibility has been shifted, at least potentially, away from the open source project to whoever's using it.

Two things. One is there is a hope that, should there be a liability regime, these companies that are using open source software and building products that profit from it, if they're really liable now, even for the open source components inside, will finally invest in those components. Their engineers and their corporate leadership will become committed stewards of the open source software that's in their products. They want it to work well. They want it to be secure. And I think that makes sense; it's logical. What I worry is that there's sometimes a belief, and I'd be guilty of it, that this will mean there'll be widespread investment by companies in the open source software that they currently use. And I wonder if there's an alternative scenario in which they actually shift the open source software they use, toward what might be called a walled garden effect: a much smaller, more secure, more well-maintained set of open source projects that receives the lion's share of corporate investment in this hypothetical world, rather than the broad investment that might otherwise happen. So I think that's one wrinkle: some advocates see this as finally a way for open source software to get the money it deserves, and I wonder, not so fast.

Two, I'll say another wrinkle: there have been malicious open source software packages, some of them quite frightening, with the potential to damage a lot of people and businesses. Should the persons associated with those be able to avoid legal responsibility simply because they slapped an as-is license on it? In the Lawfare article, Paul and I mentioned that maybe this is covered by the Computer Fraud and Abuse Act, or perhaps some other existing legal statute, but to my knowledge, there aren't cases that deal with this. So I'm just darn curious, I have to say, as a citizen.

Eugenia Lostri: Paul, any wrinkles that you want to add to this?

Paul Gibert: The way it's set up right now, it's just kind of difficult to do your due diligence. And the story I rattle around in my head is: imagine you lived in a world where Wikipedia being accurate was extremely important to you. For me, in high school, when I was writing history papers and would have to look up facts, Wikipedia was critical infrastructure. And if Wikipedia was incorrect, that was a problem, because then a fact would end up in my paper that my history teacher would know was not true, and I would get points taken off. And the problem with Wikipedia, of course, is that it appears to be very useful, but anyone can write anything in Wikipedia. And while, naturally, that does result in a superior encyclopedia, there's always that possibility of someone making a malicious contribution. The nice thing Wikipedia has is a little feature where you can click and see, oh, this part of the article is from this primary source. So it's really easy for me to go in and be like, okay, this is a legit fact. We don't have that ability, that I know of, in open source software. I can't go look at an open source project and somehow confirm that it conforms to some set of security standards that then allows me to trust it. And that's probably one of the largest things we're missing in open source right now. For the most part, I'm cloning a project from GitHub, building it, and putting it into production. What can I do to make sure that's actually a safe thing to do?

John Speed Meyers: Now, I'd just argue that while that is technically true, for me it comes back to a political reason, which once again is that there isn't a strong economic or legal incentive to do that. There are some upstart projects that try to do exactly this thing. I'll mention one from the Open Source Security Foundation called Scorecard. It's really a pretty neat project; I know Paul has done some cool analysis with it, in fact. But you don't find this so-called Scorecard on the vast majority of open source software projects. You find it occasionally, and especially some of the prominent ones increasingly adopt it. But even if it's on there, it's no guarantee that the consumers of it really care or are interested. It's more of a hobby project that the maintainers added. So I think this lack of liability, in some sense, stunts the technical solutions too, because there's just not a demand. There's some, and there are definitely forward-thinking organizations like the Open Source Security Foundation and others, but I think it's small compared to what it could be.
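
For the curious, Scorecard results for many repositories are published through a public API. A hedged Python sketch of one way to fetch a project's checks, assuming the requests library is installed and the project has been scanned; the URL scheme follows the Scorecard project's documentation:

```python
import requests  # assumes `requests` is installed

# Fetch the published Scorecard result for a GitHub repository
# (here, the Scorecard project itself).
url = "https://api.securityscorecards.dev/projects/github.com/ossf/scorecard"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()

print("aggregate score:", data.get("score"))
for check in data.get("checks", [])[:5]:
    print(f"  {check['name']}: {check['score']}")
```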

Eugenia Lostri: Now, given this political window that we were referencing before, and all of these actual developments in the works, whether we think it's the right approach or not: the findings, the new offices, the effort being put into it, is this signaling a shift in how the U.S. government, companies, and the open source software community communicate with each other and think about how their work impacts each other?

John Speed Meyers: I would say not yet. I would say that some people have had high hopes that these recent events would do that. But if I personally read that Open-Source Software Security Initiative RFI summary, there's a lot of acronyms there, I would observe, and I suspect the authors and the people who contributed to it would agree, that a lot of these initiatives are voluntary, best-effort initiatives, things that are definitely helpful and will certainly provide benefits to those companies and agencies that are enlightened enough to adopt them. But for the most part, these aren't the sorts of heavy-stick rules and regulations that, at least according to some thinkers in this area, would be needed for fundamental change. I really see this as a regulation-light approach, where there is opportunity for, quote, public-private partnership and other sorts of opportunities like that. And I'm really not trying to disparage it by saying that, just observing that there are approaches skeptics might call more intrusive that just aren't being embraced right now. And I think there's room for debate about what is appropriate, but that's my observation: mostly this voluntary, public-private partnership style right now.

Paul Gibert: I 100 percent agree. We haven't seen the type of awareness of some of the problems with open source supply chains, as a result of these events, that I would have hoped for. And I really don't know what the answer is. Maybe it's as simple as some more people have to get got, and some more executives have to reach into their pockets to respond to incidents of them getting hacked, before we start seeing change. I don't know. I really thought the world was going to get shook by XZ Utils. I thought it was so fundamental. Everything uses that library, and somebody had a magic password that could give them access to any system that used it. And no one's really batted an eye, or at least not at the scale that I thought we would see. I don't know why. Maybe it's because we caught it early and nobody really got hurt. A friend told me that a lot of people had to die in car accidents, unfortunately, before we started having seatbelts. Maybe that is the morbid reality here.

Eugenia Lostri: Unfortunately, that seems sometimes to be the case, right? That in order to see change, just something really big needs to happen. Before we wrap up, I wanted to make sure that if there's anything else that you want to add, some final wisdom that you want to leave our audience with, you have the space to do so.

John Speed Meyers: I'll just say that it really is a very vibrant time for software security writ large as it relates to the U.S. federal government, and it's interesting and positive. In some ways, I think I can sound too critical about the relationship between all these initiatives and Log4j. I really think the political window is interesting. It gives a chance for all these policy entrepreneurs to advocate for things like software bills of materials, but a wide range of other things too. The NSA has been publishing, for instance, white papers on software supply chain security for the past few years. Without SolarWinds, the epic Russian hack, and then Log4j and others, I really don't think there'd be this organic thirst for improving the security of software, both for the U.S. government and critical infrastructure. And so it's in some sense a testament to the vitality of U.S. government institutions that you read this RFI summary and there are two dozen initiatives. I consider myself a close observer, and I hadn't even heard of some of them before. I say, "oh, my gosh, this is amazing." So I actually think it's a bright spot right now. It's interesting that there are so many, I'll call them policy entrepreneurs, out there exploring different questions, different problems, with different pros and cons. It is actually a pretty bright spot.

Paul Gibert: I agree. I think we got this. I think we've got people on all levels of the stack who are really smart, trying really hard to figure out how to do this. It just kind of needs to come to the public light a little more, somehow. And there are a lot of cool ideas, a lot of amazing research coming out, trying to figure out how we can solve this problem. And I think we're going to get it. It's a big problem. It's complex. It's going to take a lot of people from a lot of different backgrounds with crazy diverse skill sets to figure it out, but eventually it's going to become a priority, and when it does, we'll be ready to capitalize on it.

Eugenia Lostri: Well, I appreciate the positive spin at the end. John Speed, Paul, thank you so much for joining me today.

John Speed Meyers: Thanks for having us.

Paul Gibert: Thank you.

Eugenia Lostri: The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare Podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.

Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Cara Shillenn of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Eugenia Lostri is a Senior Editor at Lawfare. Prior to joining Lawfare, she was an Associate Fellow at the Center for Strategic and International Studies (CSIS). She also worked for the Argentinian Secretariat for Strategic Affairs, and the City of Buenos Aires’ Undersecretary for International and Institutional Relations. She holds a law degree from the Universidad Católica Argentina, and an LLM in International Law from The Fletcher School of Law and Diplomacy.
John Speed Meyers is the head of Chainguard Labs within Chainguard. He leads a research team focused on open source software security. He is also a nonresident senior fellow at the Atlantic Council in the Cyber Statecraft Initiative within the Digital Forensic Research Lab. He previously worked at In-Q-Tel and the RAND Corporation.
Paul Gibert is a research scientist at Chainguard Labs.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.