
Lawfare Daily: Eugenia Lostri and Justin Sherman on Security by Design in Practice

Stephanie Pell, Eugenia Lostri, Justin Sherman, Jen Patja
Monday, August 19, 2024, 8:00 AM
What does 'Security by Design' mean in practice?

Published by The Lawfare Institute in Cooperation With Brookings

As part of Lawfare’s Security by Design Project, Eugenia Lostri, Lawfare’s Fellow in Technology Policy and Law, and Justin Sherman, CEO of Global Cyber Strategies, published a new paper, “Security by Design in Practice: Assessing Concepts, Definitions and Approaches.” Lawfare Senior Editor Stephanie Pell talked with Eugenia and Justin about the paper’s exploration of the meaning of security by design, scalability solutions and processes for implementing security by design principles across an organization, and the need to engender a corporate culture that values security.

To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.

A transcript of this podcast appears below. Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Introduction]

Eugenia Lostri: We set out to answer two main questions. The first one is how much consensus is there among industry and government stakeholders around the meaning of “security by design?” And relatedly, how much consensus is there around the meaning and the usefulness of the term “security by default?”

Stephanie Pell: It's the Lawfare Podcast. I'm Stephanie Pell, Senior Editor at Lawfare, with Eugenia Lostri, Lawfare's Fellow in Technology Policy and Law, and Justin Sherman, CEO of Global Cyber Strategies.

Justin Sherman: How do you, as a software vendor, make that security burden --- because there will be some security burden if you're doing security by design --- how do we make that as low as possible and make it as easy as possible, as seamless as possible, for people to implement security by design?

Stephanie Pell: Today, we're talking about their new paper, “Security by Design in Practice: Assessing Concepts, Definitions, and Approaches.”

[Main Podcast]

Eugenia, why did you and Justin write this paper?

Eugenia Lostri: So I tend to think of the genesis of this project and this paper as going back to the 2023 National Cybersecurity Strategy. As you know, Stephanie, one of the big shifts in that strategy focused on this idea of moving responsibility for security outcomes from users of software back to the tech companies that develop it. So, when we look at some of the big cybersecurity incidents, many of them could have been avoided if better security measures had been baked into the product before being deployed.

So the administration, the Biden-Harris administration, offers two related ideas through which to realign incentives so that companies are better incentivized to invest in security from the get-go. So, very briefly, you have on the one hand the strategy that says the administration will work with Congress and the private sector to develop legislation establishing liability on product vendors for insecure software and services. And then on the other hand, you see that CISA has been working to develop security by design principles to push the private sector to take actions that advance customer security. The way in which I see those two ideas working together is that adopting security by design principles would help create at least a positive presumption against software liability, right? So these two things are happening in tandem.

Now, we were seeing that, and so a year ago we at Lawfare started a project on security by design that was meant to evaluate what the meanings are, what the implications are, and how security by design can be implemented. The project explores how law and policy should demand security by design from software developers and when liability for inadequate security should be imposed. So all of this is just context for why we wrote this paper.

Justin and I focused on maybe a more foundational question before all of this. Given the attention paid to the term security by design --- and I think since CISA started working on this, we've seen it being adopted; more people are talking about it, saying yes, we are secure by design; you see companies promoting that; you see the government saying we need to be thinking about security by design, security by default --- we were just wondering, you know, are we all talking about the same thing, right? Is there a concrete, useful, working definition of security by design? So, we started this with a little bit of skepticism, I think it's fair to say. Our perspective was that it's highly unlikely that suddenly we're all talking about the same thing and we're all on the same page. So basically we wrote the paper to figure that out.

Stephanie Pell: So we're going to discuss how the paper talks about meaning and understanding of security by design. But at a high level, if you just had to give a very basic explanation of what you mean by security by design, what would that be?

Eugenia Lostri: Sure, I can try. Although, as you said, it's a bit tricky given that the entire point of this is to figure that out. But I think at this point the best way to outline the concept is to maybe take the definitions that are provided by CISA in their security by design white paper and use that maybe as the baseline. So CISA defines secure by design to mean that technology products are built in a way that reasonably protects against malicious cyber actors successfully gaining access to devices, data, and connected infrastructure. But CISA doesn't stop there. As I said before, they don't just talk about “secure by design,” they also introduce the concept of “secure by default.” This is a complementary concept that goes hand in hand with “secure by design,” according to them. And “secure by default” means that products are resilient against prevalent exploitation techniques, out of the box, without added charge. So all of this to say, the idea is that you as a consumer should receive a product that from the get go is safe against known threats and that you don't need to pay extra for that security. Now, of course, this is not that simple or straightforward, and we have the entire conversation and we have an entire paper to discuss how it's not that simple.

Stephanie Pell: So to explore how it is not that simple, what research questions did you set out to explore or answer in your paper?

Eugenia Lostri: Given these two concepts that CISA articulates as complementary but separate, we set out to answer two main questions. The first one is how much consensus is there among industry and government stakeholders around the meaning of security by design? And relatedly, how much consensus is there around the meaning and the usefulness of the term security by default? And so we started this by doing a literature review, right, which is pretty straightforward. We surveyed major government, industry, and academic publications that define the concept of security by design or a similar enough term. And by doing that, we established a baseline of different conceptual definitions of what security by design is and how it can be implemented in practice.

And after that, we did what I think is the main contribution of our paper: we conducted a whole bunch of interviews with people who actually have to implement this. So we turned our attention first to major software vendors. We interviewed experts at Google and Microsoft about how their organizations view the concepts, both of security by design and security by default. I want to note that we did ask Apple, as the other major software vendor that we considered, to make someone available for an interview for our paper, but unfortunately they declined to participate. And then we also interviewed experts at CISA, we interviewed experts at CrowdStrike, Schneider Electric, and an open-source software organization, just to get a broader sense of how these concepts are being implemented. And because we approached this with a cross-disciplinary perspective, I want to note that every interview was designed to draw out perspectives from technology, law, and policy. So we covered quite a broad spectrum of how to understand security by design.

Stephanie Pell: So Justin, can you summarize the major findings of your research? And then we'll delve into them a bit further.

Justin Sherman: The first set of findings was related to definitions, and I know we're certainly going to talk more about that. As we were just hearing, right, there has been an important set of policy debates and technical efforts from the Cybersecurity and Infrastructure Security Agency --- CISA --- and others around security by design and security by default. So one of our major takeaways, which we can get more into, was that most of the organizations we spoke with agreed that security by design was a useful concept. They, at a very high level, used similar language to talk about that concept and how it related to security processes at their organizations.

And again, thinking, okay, these are folks who are either building software or heavily involved in securing it. So that was important. And then there was some skepticism about the term security by default. The second major takeaway we had related to what in our paper we call scalability. So once you have, let's say, a team of 20 developers who have sat down with policy folks, and they've really laid out on paper or in their software tools what it looks like to do security by design --- what do we actually mean, day to day, when you're building a web application or a piece of cloud software or a video game, right? What do we actually mean by security by design? --- our second theme was about, once you have that established in one part of your organization, how do you make that happen in every part of the organization? How do you grow that, scale it --- hence our use of the word --- to make that function across companies as large as Google or Apple or Microsoft, who, of course, have massive numbers of people doing software development. So that was the second key theme: scalability is really important, and while it's an interesting opportunity, it's also a unique challenge for some larger orgs.

And then the last two major themes we identified were, first, that every organization has different incentives and disincentives to promote this concept of security by design. That might depend upon their existing security posture. So there are some organizations, like Microsoft with the secure development lifecycle, that have already put thought into concepts that are somewhat similar, and so maybe there's more of an interest there. And then you have other organizations that have less interest or capacity. And then across every company, right, there's of course an economic pressure to develop products as quickly as possible, right? MVPs, right? Minimum viable product. So how quickly can we roll this out? And usually how quickly does not include how securely. And so that was our third bucket: talking about those market incentives and how they interact with this security by design idea.

And the very last piece was about corporate culture. That relates back to one of the first points we were just talking about: how do you put the burden for security on companies and not users? But there's also maybe some question about, if you're a really large software vendor whose operating system or apps or whatever it is are used across all kinds of organizations, maybe you also need to have that culture of security. Even within industry, there's sort of a different level of imperative for different firms. So those were the four takeaway themes we had in our study.

Stephanie Pell: So let's dig into a few of those themes, or takeaways, a bit more. Can you talk more about your interviewees' thoughts on the meaning of security by design? And how their respective organizations implement secure by design from a technical perspective.

Justin Sherman: Sure. So as Eugenia was just describing, we have this literature review that was an interesting framing to say, okay, this academic paper puts it this way, this document from CISA puts it this way. And so we really were curious, as you just asked, to see, okay, what do companies think about this idea?

So as I mentioned at a high level, several interviewees thought there was relative consensus among major software vendors on what security by design means. And so part of that was knowing that CISA has put out a standard definition. Part of that was saying, we also have industry standards that we're well aware of, such as from the National Institute of Standards and Technology, right, the NIST Cybersecurity Framework. Or cybersecurity process guidance from ISO, the International Organization for Standardization. I'm dropping all these acronyms, so apologies. But industry was basically saying we also have all of these very specific technical standards that we've developed that we're well familiar with, we implement every day, and so when we hear a policymaker at a very high level say security by design, we all go, okay, we can understand what that means in practice because we have all these principles and practices already that relate to security by design.

The second point I want to make about that is folks pointed out that across organizations, you often have a similarity of implementation. And, again, that's somewhat at a high level, but some companies mentioned how they will identify a particular programming language that's memory safe, so you're not going to have memory leakage security issues, for example, to name one example of a cybersecurity problem. So they'll say, okay, our developers are using all these programming languages. Let's identify which programming languages are safer for them to use. And then we're going to implement a policy across the company that says you can only use this language, right? When you open your tools, that's what's going to come up. And if you want to, or need to, program our software in a language that we have not identified as secure by design on this metric, you have to submit for formal approval.

So that was an interesting example of, okay, when you talk about what this looks like at an actual software vendor, folks in completely separate interviews were sharing examples that seemed to overlap. But I think it underscored the point that security by design, to these companies, is more about process, right? So it's not a checklist, but a process of saying, okay, we need to build this into our lifecycle. And that includes things like which programming languages pop up when you open an IDE as a developer.
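To make that concrete, here is a minimal sketch of what such an approved-language gate might look like --- a hypothetical illustration of the policy the interviewees described, with an assumed allowlist and file-extension mapping, not any company's actual tooling:

```python
# Hypothetical CI gate: flag code written outside a memory-safe allowlist.
from pathlib import Path

# Memory-safe languages developers may use freely (assumed for illustration).
APPROVED = {".rs": "Rust", ".go": "Go", ".py": "Python", ".java": "Java"}
# Languages that require a formal security exception before use.
NEEDS_APPROVAL = {".c": "C", ".cc": "C++", ".cpp": "C++"}

def audit(repo_root: str) -> list[str]:
    """Walk the repo and flag files written in languages off the allowlist."""
    flagged = []
    for path in Path(repo_root).rglob("*"):
        if path.suffix in NEEDS_APPROVAL:
            flagged.append(
                f"{path}: {NEEDS_APPROVAL[path.suffix]} is not on the "
                "memory-safe allowlist; submit for formal approval"
            )
    return flagged

if __name__ == "__main__":
    findings = audit(".")
    for finding in findings:
        print(finding)
    # In CI, a nonzero exit would fail the build until an exception is granted.
    raise SystemExit(1 if findings else 0)
```

Run in continuous integration, a check like this makes the memory-safe path the default and routes everything else into the formal approval process described above.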

This is a segue, just to wrap this question, into the second part of your question about the lifecycle, right? So is there a concern that when we say security by design, what about other parts of the software lifecycle? What about building the software? What about testing the software? What about deploying the software and selling it to companies and governments and users? What about updating the software? What about sunsetting the software, like with Windows XP or something, where there are no more security updates? So, there was some concern that security by design could focus companies' attention only on that design phase and not on other parts of the lifecycle. But at the same time, as we were doing these interviews, many of the experts we spoke with at these companies also expressed the opinion that it's a kind of nitpicky concern at this point, right, that there's such a deficit of security understanding at a lot of organizations that you really need to just say design, and in people's heads that often includes a lot of these parts of the lifecycle.

Stephanie Pell: So you indicated that a number of the companies that you interviewed certainly were aware of CISA’s definitions and in the workings of their own implementation processes could say, yes, this is how we are implementing security by design. Did any of them have thoughts on CISA's role generally in promoting security principles consistent with its ideas of secure by design?

Justin Sherman: They did, and to answer this I want to very clearly draw a distinction between what the people we interviewed said and what I personally think. The first part is, what did folks say? The technologists, the policy experts, and the lawyers we spoke to at multiple companies did say that they think CISA has done a good job publicizing this concept --- the fact that we're even having this conversation in this context speaks to how widely this term has been referenced in industry and in cybersecurity policy conversations.

They also said they thought CISA did a good job making the guidance understandable. So it wasn't these 7,000-page, highly technical documents; they were putting out white papers and blogs and webinars and articles that clearly explained what security by design means to a developer who might not be familiar, or to a smaller company that might not be familiar. And they also explained why it matters, which is something that our interviewees pointed out as well. CISA has done a good job not just saying, here's this abstract technical concept that we care about, but actually articulating, as we heard at the beginning, that this really matters for who is vulnerable to data breaches and cyberattacks, who is held responsible when people's data is leaked, for example. So, they thought CISA has done a good job there. The one caveat some folks shared was that they have concerns in general --- and this was not specific to CISA --- that regulators tend towards check-the-box approaches rather than comprehensive process approaches. But overall, like I said, they had a lot of praise for CISA there.

The one editorial comment I want to personally add to that is, I think CISA's done a very good job with making this accessible. I agree with the interviewees we spoke with. I think one challenge here is there is so much in-depth technical guidance, like the NIST framework, right? And when I teach the NIST framework, I'll pull up the entire NIST framework and students are like, what is this? This is not accessible at all. So I think CISA has done a good job in saying there is really great technical literature, and we're going to try and make some content and material that's much more entry-point accessible for companies.

Eugenia Lostri: I would also add, in terms of CISA's role --- and I think here you see the importance of the iterative process --- that we've seen CISA publish a white paper in April 2023, and then they published an updated one later in the year.

And I think you see them taking in some of the feedback that they received, particularly around the scope of their security by design principles, right? Something that we also heard was that maybe some of the recommendations they're putting forward are not going to actually make all software secure. It's one part of the software ecosystem, but if you're thinking about, like, operational technology, maybe some of the recommendations don't really apply. And I think we've seen, in the way that they've started talking about it, the way that they've characterized it later, that they really took that in and are being a lot more clear about scope. But I don't think that was the case from the get-go.

So maybe there was a little bit of tension at the beginning about, okay, who are you actually recommending this to? And should everyone be concerned, or is this just about this one subset of software that you want to make sure is more secure? Justin, do you think that's a fair characterization of that critique?

Justin Sherman: Absolutely. And we, of course, did talk to companies involved with operational technology, or even connected devices, right --- Internet of Things devices, smartwatches, et cetera --- where the same thing would apply. Maybe there are different security considerations that at this point in time, based on what CISA has put out thus far, are not necessarily captured.

Stephanie Pell: So the discussion thus far says to me that this concept of security by design has proven a useful one, because companies can access it, can understand it, can determine what they are doing in their own processes that furthers security by design principles. You indicate in your paper, however, that you found this other term, security by default, which was part of CISA's larger rollout of security by design, not to be as useful a concept. Why was that?

Justin Sherman: Two pieces to that. So one is that --- and we also agree with this in our analysis --- interviewees said that they didn't really understand the distinction that CISA was trying to draw between these two phrases, security by default and security by design, right? So if you look at one of the sources CISA has put out, this was from February of 2023, it describes these terms as: security by default is when products have strong security features at the time of purchase, and security by design is when products are designed with security from the outset. And it's, okay, huh? So that's --- not to be joking about it, but that was the reaction from a number of folks we spoke with.

And my reaction also, when reading that, is, okay, I can understand how maybe those could be used interchangeably if you're saying, no, the default is that we prioritize security, or, by design, we're going to prioritize security. But there does seem to be some effort to say that they're two distinct concepts, and this has happened in a bunch of publications. But it just was not clear to our interviewees what that means. And that, of course, raised questions about, again, if you use them interchangeably, maybe that's fine. But if you're trying to say that they're two different things, isn't that just creating unnecessary confusion?

One question an interviewee raised --- when we were talking to this person, they said maybe default means that it's the baseline security settings presented to a user. So if you think, oh, you open your messaging app and encryption is on by default, you don't have to turn it on; or you open your phone settings and location tracking is off automatically, right? That kind of thing. I said back to this person, but isn't that just security by design? So anyway, this is perhaps a more minor point of the paper, but I think it is an important point as we talk about building out this concept. With security by design, like we said, there was relatively high-level consensus among software vendors about what that means, but this other term thrown in is maybe creating some confusion around terminology.
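To illustrate that interviewee's reading of "default," here is a minimal sketch in which the secure option is simply the out-of-the-box setting; the product and field names are hypothetical, not any real vendor's API:

```python
from dataclasses import dataclass

@dataclass
class MessagingSettings:
    # The secure option is the default for every field, so a user who never
    # opens the settings screen still ends up in the safe state.
    end_to_end_encryption: bool = True     # on unless the user turns it off
    location_sharing: bool = False         # off unless the user opts in
    auto_download_attachments: bool = False

settings = MessagingSettings()             # the out-of-the-box experience
assert settings.end_to_end_encryption      # secure with zero user action
```

On this reading, "secure by default" is just the design decision that user inaction leaves the product in the safe state --- which is why the example collapses so easily back into security by design.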

Eugenia Lostri: I actually think this is particularly interesting when you think, Justin, back to what you were talking about before, which is, what's the length of the security by design lifecycle? Because if we're talking about the user experience, we can consider that to be still secure by design; but if you're thinking about secure by design ending before the product is deployed, you could see maybe that transition. I'm with you that I'm still not fully clear on what the difference is, but I think it does all connect to, okay, what falls under secure by design? When does it stop being the design part?

Stephanie Pell: So just to then to put a fine point on this, did you find any current useful distinctions between the concepts of security by design and security by default?

Justin Sherman: I don't think so. And I personally think we should just ditch this default concept, right? It's complicated enough. CISA's doing a great job trying to make this accessible. You can talk about design across the lifecycle, so your bases are covered. And I think this default term is currently just creating confusion.

Stephanie Pell: So shifting gears a bit, Justin, your research also focused on scalability solutions and processes for implementing secure by design across an organization. Are there existing tensions and challenges with implementation that you all discuss in your paper?

Justin Sherman: Absolutely. And anyone listening who does this firsthand is probably grinning at the question and what I'm about to say, right? It's definitely a challenge for folks in cybersecurity and in the software space.

I want to first say, when we use the term scalability, we were originally thinking --- or at least I was originally thinking --- about, okay, you have security practices tested and rolled out on one team; how do we copy that across the 50 other teams, right? But in the course of doing our interviews, which was interesting, numerous companies added two additional dimensions. So we have a three-dimensional notion of scalability. One is across the organization, from team to team, right? So we have this team in an office in New York, we have a different development team in San Francisco, we have another one in Portland, and so forth. How do we make sure they're all following our security by design practices?

The second that came up is scalability across product suites. So think of Apple, right? From your operating system to your smart device software to your app store. Or think of Microsoft, right? Maybe that's scaling security across everything from the operating system on a video game console, to a software-as-a-service (SaaS) application run in the cloud, to Windows. So it's also across all of these different product suites. And the third dimension of scalability that came up is across the type of end user. So there are perhaps different security by design questions to ask about building products for an enterprise --- where people show up to work, the software is loaded by IT, they log on and they start using it, right? --- versus a product built for a consumer, where I get my laptop in the mail or I download software off the website, and I'm the one that has to set it up, I'm the one clicking boxes and checking things on and off in terms of security settings. So across the org, across product suites, and across end-user types are the three different ways that scalability came up.

So when implementing it, of course, there are a lot of challenges. The cybersecurity experts articulating and setting these policies have to explain and justify why this is needed across the organization. You need to bring in technical experts, right, who can say, this is the tool that we all use, so figure out how we do it safely; or, this is something we don't use too often, maybe we don't need to use it if it's riskier, right? Think of the memory-safe programming language example I mentioned, right? You need a technical expert to tell you, yeah, it's fine if you block access by default to using this unsafe language when you're programming, or, hey, no, we use that all the time, we need to figure out a different way. So that's really important.

And another theme that came up is a “tax” on scalability that may get larger in a larger organization, right? Because the positive side of a larger organization with really good security by design practices is there's a large organization with really good security by design practices. The flip side is the more and more developers you have and the more and more teams and products you have, the more and more spending you're doing on security in that framework. So, there are costs to that as well.

So, all to say, the real takeaway from that conversation was this idea of minimizing friction: how do you, at a software vendor, make that security burden --- because there will be some security burden, right, if you're doing security by design --- how do we make that as low as possible, and make it as easy as possible, as seamless as possible, for people to implement security by design?

So again, to make this specific, examples mentioned included memory-safe programming languages turned on by default; predefined web templates --- so if anyone at the company wants to make a webpage, we have a prebuilt template with all of the security around it; and APIs, application programming interfaces, to retrieve data --- here's a set of APIs that are pre-built and pre-screened that you can easily use. Things like that just make it easier for developers.
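As a rough illustration of that "pre-built and pre-screened" idea, here is a minimal sketch of a vetted data-access helper a security team might ship so product developers never hand-roll query strings; the function name and schema are illustrative assumptions, not any vendor's actual API:

```python
from contextlib import closing
import sqlite3

def fetch_user_email(db_path: str, user_id: int) -> str | None:
    """Pre-screened data access: the query is parameterized, so callers
    get SQL-injection protection without having to think about it."""
    with closing(sqlite3.connect(db_path)) as conn:
        row = conn.execute(
            "SELECT email FROM users WHERE id = ?",  # placeholder, never string concatenation
            (user_id,),
        ).fetchone()
    return row[0] if row else None
```

The design choice is the same "paved road" logic as the language allowlist: the easy path and the secure path are the same path.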

The last point is about large versus small orgs. Of course, there are different challenges, right? A larger organization probably has access to more threat intelligence, right? They get attacked more. They might have more expertise, if they have a bigger budget or more security professionals on staff. On the flip side, they may have years-old legacy code, right? Think of a company that makes major operating systems or something, right? If you're older, you also might have more bureaucracy, more entrenched processes.

So there are pros and cons. And same thing on the smaller side. If you're a smaller company, you might not have that budget, you might not have as much expertise in house, but you might not have that legacy code problem, and you might have agility that a very large organization is just not going to have.

Stephanie Pell: So Eugenia, you all make the important point in your paper that secure by design is not just a technical concept. What do you mean by that?

Eugenia Lostri: So I think this is intimately connected to what Justin was just talking about. We are talking about technical solutions in some cases, but there is so much that actually shapes and affects the implementation of those solutions. We can think about what Justin was just telling us: what are the internal processes that make sure all of the teams are using the same secure by design principles? But it's not just internal. You also have market forces. You have the curriculum setting that affects how engineers think about their work and their role in security. All of that really affects corporate culture and the way in which security by design principles are implemented. So you can't just, like, tech away the problem of security.

Stephanie Pell: And so how then did some of the representatives from companies you spoke with think about security or security by design in terms of those market or business principles? Were they consistent with the examples that you just gave?

Eugenia Lostri: I think so. Let me start by referring once again to the National Cybersecurity Strategy, just because I think it offers a really good way to think about the external incentives that shape corporate behavior, right?

So the strategy claims that, and I'm quoting here, "today's marketplace insufficiently rewards, and often disadvantages, the owners and operators of critical infrastructure who invest in proactive measures to prevent or mitigate the effects of cyber incidents." So I think the broadest consensus that we had during the interviews was that demand for security is limited. And honestly, if the market is not really asking for it, what are the incentives for you, as the private sector actor, to introduce more friction into your process and delay shipping a product? The incentive is very low. Like Justin mentioned before, we're talking about the need to be first to market. We're talking about minimum viable products. And honestly, it doesn't seem like the market will reward you for bringing a more secure product to market if you're late, if there are other solutions.

And then something else the interviewees mentioned was the trade-off between different priorities, right? A product needs not only to be secure; you need to also build in other things. Some of the things that were mentioned to us were the privacy of the product, interoperability, reliability, performance, making sure that it's backwards compatible. It needs to be resilient, and mostly it needs to be usable. You need a product that is going to appeal to the user, that's going to be easy to understand. And sometimes --- not to harp on this idea of friction, but --- security can make that harder. Think about how many times you have rolled your eyes at multi-factor authentication, when you need to once again go to your authenticator and put in another number, and you're like, okay, it's annoying. So there's a limit to how much you can push that as well before your product just becomes undesirable.

And I think another interesting nugget that we got was that, at a more fundamental level, security is not top of mind, and that is reflected in the skill set that engineers are expected to have when joining the workforce. That's another thing that we heard all about. They're just not being trained to think about it --- and there's a very interesting blog post from Jack Cable at CISA where he talks exactly about this. They're not trained to think about the security outcomes that stem from their coding. So when you have that range of challenges, it's not that surprising that, internally, security is just not something that people are thinking about baking into the product. And that's why we end up with bolt-on solutions, where, after the product is deployed, you just tack on solutions to whatever challenge you're facing.

So just to summarize all of those streams of thought: the market does not offer enough of an incentive; there are other principles and goals that need to be taken into account to make sure that the product is successful; we have engineers who are not properly trained on the importance of security; tack onto that, there is little to no market or regulatory consequence for pushing unsafe products; and also, if you have a policy, there's always the potential for mismatch between policy and practice. All of that shapes corporate culture and the role that security plays in it.

Stephanie Pell: So Eugenia, I think this next question will follow nicely from the last part of your answer. And that is touching upon the discussion you all have near the end of your paper about the 2023 Microsoft Exchange Online intrusion. Before we talk about that in the context of a corporate culture that values security, can you just briefly remind our listeners of what happened and the significance of that intrusion?

Eugenia Lostri: Sure. And honestly, I swear, as we were writing this paper, things kept just happening that meant we had to go back and add more and more. And one of the biggest of those things was the Cyber Safety Review Board publishing its review of the Microsoft Exchange Online intrusion, which I think really captured a lot of what we were seeing in these interviews.

Basically, to recap what happened: in May and June of last year, 2023, the Microsoft Exchange Online mailboxes of several organizations and hundreds of individuals were compromised. This was attributed to the threat actor Storm-0558, which is assessed to be affiliated with China, with espionage objectives. They were able to access accounts using authentication tokens that were signed by a key that Microsoft had created in 2016. And this compromised the accounts of senior U.S. government representatives, including, famously, Commerce Secretary Gina Raimondo, the U.S. Ambassador to China, and Congressman Bacon.

So why is this important? Storm-0558 got its hands on a signing key, and signing keys are used for secure authentication into remote systems. That key allowed them to give themselves permission to access any information within the key's domain, and the one that they got was clearly pretty powerful. So they had that access, and that, combined with another flaw in Microsoft's authentication system, allowed Storm-0558 to gain full access to --- the way the Cyber Safety Review Board puts it --- essentially any Exchange Online account anywhere in the world.
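To see why a stolen signing key is so damaging, here is a minimal sketch of token signing and verification using a simple HMAC scheme; the key material and claims are made up, and real identity systems --- Microsoft's included --- are far more complex, but the core failure mode is the same: whoever holds the key can mint tokens that verify as genuine.

```python
import hashlib
import hmac

def sign(key: bytes, claims: str) -> str:
    """The identity service signs a token asserting who the bearer is."""
    return hmac.new(key, claims.encode(), hashlib.sha256).hexdigest()

def verify(key: bytes, claims: str, tag: str) -> bool:
    """A mailbox server accepts any token whose signature checks out."""
    return hmac.compare_digest(sign(key, claims), tag)

signing_key = b"key-minted-in-2016"  # stand-in for the secret that leaked

# A legitimate login produces a valid token...
token = sign(signing_key, "user=alice@example.com")
assert verify(signing_key, "user=alice@example.com", token)

# ...but an attacker holding the same key can mint a token for any account,
# and the verifier has no way to tell the forgery from the real thing.
forged = sign(signing_key, "user=secretary@commerce.example.gov")
assert verify(signing_key, "user=secretary@commerce.example.gov", forged)
```

That is why key custody, rotation, and scoping matter so much: the key is the entire trust boundary.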

Stephanie Pell: And I think I'm correct in saying that Microsoft has, at least publicly, not been able to state how the threat actor acquired the key to begin with. So they don't even, in a sense, know fully where the flaw was, whether it was a process or something else that gave the threat actor access to the key.

Eugenia Lostri: Yeah, that's correct. So it might not be surprising for you to hear that the Cyber Safety Review Board found that there was a cascade of errors --- that's the phrase they use, cascade of errors --- that was actually avoidable on Microsoft's side. And they talk about operational and strategic decisions that, again quoting from the report, "collectively point to a corporate culture that deprioritized both enterprise security investments and rigorous risk management." Now, why is this report interesting to us in this conversation? I think it fundamentally shows the role that corporate culture plays in incorporating security, right? Especially when you're talking about companies that are central to the technology ecosystem, where so many critical services rely on them, as is the case for Microsoft.

So the interesting part is what happens after this, right? After the report came out, Microsoft, I believe, challenged the characterization that its security culture is not adequate, but we are seeing some internal changes that reflect that they're not dismissing what they heard. Just to mention a few things: there are two important leaked memos, one from Microsoft CEO Satya Nadella and another from Kathleen Hogan, Microsoft's Chief People Officer, both of them urging Microsoft employees to prioritize security, even if it's at the expense of other priorities, and making sure that they know that they will be assessed on the security outcomes of the software that they're working on.

We also saw Microsoft's President Brad Smith, at a hearing in front of the House Homeland Security Committee not that long ago, acknowledge that the company should have done better and explain some of the changes they're making internally to make sure that prioritization of security also comes from the top --- that it is something people will get bonuses for, that it will count toward their productivity levels. He talked about changing engineering processes, how they're integrating security by design, how they're changing the way employees review themselves, how those issues are elevated, and how they reward people for finding, reporting, and helping to fix problems. So I think even though they may publicly challenge this characterization of inadequate security culture, the fact that all of these things are being pushed now shows that there was room for improvement.

And since the incident took place, they have also launched the Secure Future Initiative. And I think this is interesting, going back to Justin's points about the difference between secure by design and secure by default, because when we spoke to Microsoft earlier, they fell in the bucket of, we don't really understand what security by default offers as a separate concept. And yet they described this new security initiative as being anchored in three different principles, and the three principles are secure by design, secure by default, and secure operations.

So I think that's just an interesting example of how CISA's work of marketing and promoting these concepts is actually taking hold internally at companies, how they're adopting that language to be able to be on the same page. Even though I tend to agree with Justin's point about this --- I don't think it adds much --- clearly it's being adopted and it's becoming part of the common language that people have to talk about security. So that's just an interesting thing to consider.

Stephanie Pell: Very interesting. You all conclude your paper with questions for future research that develop from the work that you've done, but also in the broader context of work going on under the secure by design umbrella. Can you talk about some of the most pressing research questions you identified?

Justin Sherman: Sure. So we had six; I'll take our first three here. So one was how this concept matures over time --- but not just matures over time, matures vis-a-vis other regulations and design principles and so forth. So how will security by design relate to any new federal cybersecurity requirements for contracting? How will it relate to state-level security breach disclosure and liability laws? How will it relate to the Cyber Safety Review Board's future reports? So that's an important question. The second is back to this distinction, or lack thereof, between security by design and security by default. I was pretty clear about my takeaway on that, but it's about further exploring that distinction and asking the same lifecycle questions we already talked about --- we're trying to get security across design, deployment, updating, sunsetting, et cetera, so how does the terminology the government's using impact each piece of that? And the third research question I'll mention here is back to scalability. So what trade-offs exist --- and this is a research question I'm very interested in --- what trade-offs exist between scaling up security and also remaining flexible, right?

And the Microsoft hack is one great example where, okay, we have an organization, and there's been an effort in some dimensions to scale up certain kinds of security practices. But if an incident happens and we need to pivot, or there's a new tech threat or what have you, and suddenly flexibility is demanded, is it going to take us five years to roll out new processes? So what are the trade-offs there, and how do you deal with that from a tech and a policy standpoint?

Eugenia Lostri: Yeah, I can take the last three. So I tend to think of these three as being in the bucket of, how do we think of security by design in other contexts? So the first one I'll mention is how the concept of security by design is implemented in the open-source software community --- or whether it is even the right framing for making security improvements when you're not talking about a corporate software vendor context, given the pervasiveness of open-source software and how much we actually depend on it. It is interesting to think about what we can expect, and also how that affects software liability, going back to what we first discussed. The second one is how the principles we're talking about can be translated into actual, articulable standards that can be imposed and applied, so that we can start demanding them --- again, all of that framing and this idea of software liability.

And the last one is whether there is enough alignment between the concept of security by design in the U.S. --- assuming that we ever get to one main concept, the way that it is promoted by CISA --- and how that is adopted and translated into standards in other countries. CISA's white paper was promoted alongside partner agencies in other countries, but once again, it's the actual adoption, the articulation of those principles into standards and expectations, and how that is applied --- whether by legislation, by regulation, or even in the courts --- that is going to be, I think, super interesting to watch.

Stephanie Pell: So anything else that you would like to share with our listeners?

Justin Sherman: Just two points, again, to frame this and take away. So, one is we have a huge amount --- I think there's always a need for more, but there's a huge amount --- of really detailed, lengthy guidance on how to do security at an organization. That's the NIST Cybersecurity Framework, a gazillion ISO standards (which, by the way, you have to pay for), and other things, right? So there really is --- I know we're getting into the weeds of this concept, which is really important, but at a high level, I think it's important to stress that there really is a need to generate more accessible, entry-level discussion, content, and guidance around security. And I love what NIST is doing; that's really important as well. It's both, right? But you can't give a startup a 500-page PDF and say, okay, here's how you do security, right? You need to have concepts that relate to accessible guidance, and that's one way to improve security as a baseline.

So I think all this stuff is really important. The second thing is just remembering that CISA is not a regulatory agency, right? It's really on Congress and a lot of other places that do have that authority to continue to require some of this. A lot of what we're talking about is suggestions and voluntary practices, but I still think we need to continue the conversation about requirements.

Eugenia Lostri: Yeah, I'll subscribe to everything that Justin just said. And I would also like to add the role of education in making sure that the right resources are available, not just for engineers, like we talked about before, but for understanding at all the different levels --- everyone who's thinking about tech law, tech policy, or actual engineering --- what your role is in ensuring secure products are put forth, and understanding how what you do is actually going to impact the world in which we live. I think just making sure that we are all held responsible for our part would be my last thought.

Stephanie Pell: We'll have to leave it there for today. Thank you both so much for joining me.

The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

Please rate and review us wherever you get your podcasts. Look out for our other podcasts, including Rational Security, Chatter, Allies, and the Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org. The podcast is edited by Jen Patja, and your audio engineer this episode was Ian Enright of Goat Rodeo. Our theme song is from Alibi Music. As always, thank you for listening.


Stephanie Pell is a Fellow in Governance Studies at the Brookings Institution and a Senior Editor at Lawfare. Prior to joining Brookings, she was an Associate Professor and Cyber Ethics Fellow at West Point’s Army Cyber Institute, with a joint appointment to the Department of English and Philosophy. Prior to joining West Point’s faculty, Stephanie served as a Majority Counsel to the House Judiciary Committee. She was also a federal prosecutor for over fourteen years, working as a Senior Counsel to the Deputy Attorney General, as a Counsel to the Assistant Attorney General of the National Security Division, and as an Assistant U.S. Attorney in the U.S. Attorney’s Office for the Southern District of Florida.
Eugenia Lostri is Lawfare's Fellow in Technology Policy and Law. Prior to joining Lawfare, she was an Associate Fellow at the Center for Strategic and International Studies (CSIS). She also worked for the Argentinian Secretariat for Strategic Affairs, and the City of Buenos Aires’ Undersecretary for International and Institutional Relations. She holds a law degree from the Universidad Católica Argentina, and an LLM in International Law from The Fletcher School of Law and Diplomacy.
Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
Jen Patja is the editor and producer of the Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
