
Cloud Un-Cover: CSRB Tells It Like It Is But What Comes Next Is on Us

Maia Hamin, Trey Herr, Marc Rogers
Tuesday, May 28, 2024, 9:43 AM

Lagging policy upholds a status quo in which cloud vendors’ design decisions about how their systems work (and work together) are almost entirely opaque.

Cloud security (Blue Coat Photos, www.bluecoat.com/, https://www.flickr.com/photos/111692634@N04/16042227002/in/photostream/; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/)


In the summer of 2023, the State Department detected a single anomalous technical alert, the first sign of what would turn out to be an extensive espionage campaign targeting the email inboxes of people working on U.S. policy toward China, both inside and outside of government. In a discomfiting reversal of roles, State notified Microsoft—its primary information technology vendor—of the discovery, triggering efforts by the corporate giant to unravel the campaign and its origins.

How could this happen? Thanks to a March 2024 Cyber Safety Review Board (CSRB) report on the series of flaws in Microsoft’s cloud infrastructure and security processes that led to the intrusion, more detail than ever has come to light.

Microsoft has stressed in its public messaging that it is targeted by advanced, state-sponsored hackers, who are “very difficult to defend against.” Yet the CSRB report reveals that the attackers did not rely on unimaginably complex tradecraft, but instead breezed through Microsoft’s fortress (and its neighboring castles) by finding a set of skeleton keys.

The attackers stole a years-old cryptographic key that could be used to “sign” tokens granting users access to resources. The key still worked because Microsoft had failed to rotate or decommission out-of-date keys. The attackers then used the stolen key to forge new tokens granting themselves access, and Microsoft lacked the controls that could have identified those tokens as forgeries. They went on to use the forged tokens to access email inboxes across more than 20 organizations, including the U.S. Department of Commerce and Department of State, because Microsoft had failed to implement certain basic limitations on the kind of access such keys conferred. This was the overarching theme of the CSRB report: a lack of basic controls leading to critical vulnerability.
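To make those failure modes concrete, the sketch below shows the kinds of checks whose absence the CSRB flagged: rejecting tokens signed by decommissioned or expired keys, and rejecting keys used outside their intended scope. It is a minimal illustration in Python, not Microsoft’s actual token-validation logic, and every name in it is hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical names for illustration only; this is not Microsoft's
# token-validation code or any real identity provider's API.

@dataclass(frozen=True)
class SigningKey:
    key_id: str
    scope: str                # e.g., "consumer" or "enterprise"
    expires_at: datetime      # the key must be rejected after this point

@dataclass(frozen=True)
class Token:
    key_id: str               # which key claims to have signed this token
    audience_scope: str       # which class of accounts the token targets
    signature_valid: bool     # stand-in for real cryptographic verification

def accept_token(token: Token, active_keys: dict) -> bool:
    """Reject tokens signed by decommissioned, expired, or out-of-scope keys."""
    key = active_keys.get(token.key_id)
    if key is None:
        return False          # unknown or decommissioned key
    if datetime.now(timezone.utc) >= key.expires_at:
        return False          # stale key that rotation should have retired
    if key.scope != token.audience_scope:
        return False          # a consumer key must not mint enterprise access
    return token.signature_valid

# A years-old consumer key forging an enterprise token fails the expiry and
# scope checks even if the signature itself verifies.
old_key = SigningKey("key-2016", "consumer", datetime(2017, 4, 1, tzinfo=timezone.utc))
forged = Token("key-2016", "enterprise", True)
assert accept_token(forged, {"key-2016": old_key}) is False

Each of these checks is routine; the CSRB’s point is that, at cloud scale, their absence turns a single stolen key into a skeleton key.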

The report offers a chance to peek behind the veil at how companies are securing the opaque, complex cloud infrastructure on which our society increasingly relies, along with some recommendations for how to address the problem. Those recommendations will take months, if not years, to yield progress, and they would be greatly strengthened by a focus on establishing uniform security standards for cloud providers and transparency about those standards. While some of the largest cloud customers and their regulators have started to agitate for more transparency, the broader policy conversation continues to lag far behind the accumulating risk in cloud services.

But first, let’s talk about the cloud. The past 10 years in cybersecurity have seen massive growth in cloud services as a “default” mode of computing and a consistent increase in the complexity of these services, creating the conditions for more intricate and higher-stakes failures. Despite the increasing criticality and complexity of the cloud, the policy conversation has failed to keep pace.

The failure of policy to adapt upholds a status quo in which cloud vendors’ design decisions about how their systems work (and work together) are almost entirely opaque. Outside of a single federal cloud procurement scheme—FedRAMP—cloud vendors are free to maintain this opacity, especially about security vulnerabilities in their cloud services. This leaves cloud customers largely in the dark about cloud vulnerabilities and underlines the importance of greater transparency alongside security in the cloud.

Instead of a dedicated regulator to implement transparency requirements or fixes for obvious failures in cloud security, there is a mess of different overlapping-but-distinct frameworks and controls. These frameworks and controls, like ISO 27001, Service Organization Control (SOC) 1 and 2 audits, the Center for Internet Security’s Benchmarks, the Cloud Security Alliance’s STAR framework, or the National Institute of Standards and Technology’s (NIST’s) Cybersecurity Framework (CSF), frustrate the attempts of the many different regulators who have some form of authority over cloud systems—and of cloud customers themselves—to gain basic visibility into the byzantine structures on which much of today’s software runs. Throw into the mix cloud providers’ own emerging standards—Amazon’s AWS Foundational Security Best Practices or Microsoft’s Cloud Security Benchmark—and it’s no wonder the landscape of cloud security might charitably be described as inconsistent.

Cloud infrastructure is influenced further by the laws that apply to its customers, such as emerging state privacy laws in California and Washington, sector-specific federal requirements like the Health Insurance Portability and Accountability Act, and a growing array of domestic data and hardware localization requirements imposed by countries in Europe and Asia. To sell to the U.S. government, cloud providers might need to comply with documents like the NIST 800 series or FISMA—or newer standards added to the mixture such as NIST’s CSF or the Defense Department’s Cybersecurity Maturity Model Certification (CMMC). Thankfully, the FedRAMP program, which authorizes secure cloud services for federal use, has provided a somewhat consistent baseline—but the recent CSRB report fails to interrogate exactly how the FedRAMP process failed to catch or correct the flaws that led to Microsoft’s epic summer meltdown. (Though it does make some useful recommendations about how frequently, and under what circumstances, FedRAMP reviews should be triggered.)

In this slew of acronyms and requirements, it’s hardly surprising that every cloud company shapes itself to a different set of legal priorities and applies an inconsistent set of controls across its complex, globally distributed infrastructure. This has real consequences. Just last month, a hacker breached Sisense, a business intelligence company, and stole the credentials Sisense used to access its customers’ data. The attackers were then able to compromise systems belonging to those customers, accessing data they held on behalf of still other companies and widening the blast radius. The fact that many of those businesses’ customers found out only secondhand, if at all, about how they were imperiled underscores the danger of opacity in cloud environments. The Sisense breach is not the first time that attackers have leveraged opacity to target little-noticed and vulnerable nodes in the cloud supply chain (see, for example, the Sunburst attack against SolarWinds), and it will not be the last. Finally, the Sisense incident is further evidence that the systemic failure of basic, common-sense controls at Microsoft is an indicator of a wider problem rather than an isolated culture issue.

Consider the CSRB’s analogue (and inspiration) in transportation safety: the National Transportation Safety Board (NTSB), which examines the complex failures of controls and processes that can cause events like airplane accidents. The NTSB has a few advantages over the CSRB, including its 50-year history (compared to the CSRB’s three) and its close relationship with the Federal Aviation Administration (FAA), whose authority can turn NTSB’s findings into safety requirements for new aircraft and operating practices for both pilots and airlines. The CSRB has given policymakers and the growing community of cloud customers like Accenture, Target, the Department of Homeland Security, and your friendly neighborhood think tank a bevy of insights into how to improve the security of cloud infrastructure—but there is no FAA for the cloud. Policymakers can instead approach this industry with common standards rather than a single common regulator.

The first step toward better policy for cloud security will be wading through this acronym soup to identify the most important shared measures for transparency into the security architecture of cloud systems. The Cybersecurity and Infrastructure Security Agency (CISA) has an important role to play here, as does the FedRAMP board. A push for consistent transparency from regulators and cloud customers alike would support better ways to understand the complex dependencies that make up cloud infrastructure, to identify vulnerabilities, and to give consumers the power to push their vendors to improve. The community-led cloud vulnerability database provides a terrific forum for sharing cloud security issues and vulnerabilities, but it is led and operated by volunteers. Cloud providers aren’t even required to produce a formal record, known as a Common Vulnerabilities and Exposures (CVE) entry, for software vulnerabilities they find in their systems. The CSRB gets it right here (Recommendation 17)—this has to change.

The White House should drive major cloud vendors to share detailed architectural information with a federal entity such as CISA or the National Security Agency as they design and deploy their infrastructure. This responsibility could sit within the Office of the National Cyber Director but may need the political muscle of the National Security Council to move the administration. These disclosures should be scoped to include physical infrastructure dependencies like data centers and networking hubs as well as logical resources like the identity and access management (IAM) systems at issue in the Microsoft compromise. Improving cloud security can’t start and stop with just what cloud customers see—it has to include the supporting infrastructure, especially that which is common across many customers and risks catastrophic failure, as with Microsoft’s brittle IAM infrastructure. The U.S. government has acknowledged that it has yet to fully prioritize software bills of materials (SBOMs) for cloud services and cloud-delivered software (perhaps in part due to vendor pushback), which would directly address these needs.
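As a rough illustration of what such scoped disclosures might capture, the sketch below pairs a customer-facing service with its physical and logical dependencies and an SBOM-style component list. The schema and every value in it are invented for this article; no existing CISA, NSA, or FedRAMP reporting format is implied.

from dataclasses import dataclass

# Hypothetical disclosure schema for illustration only; the field names and
# values are invented and do not correspond to any real reporting format.

@dataclass
class CloudServiceDisclosure:
    service: str                      # the customer-facing service being described
    physical_dependencies: list       # data centers, networking hubs, and the like
    logical_dependencies: list        # shared IAM, key management, control planes
    software_components: list         # SBOM-style component identifiers

example = CloudServiceDisclosure(
    service="example-hosted-email",
    physical_dependencies=["example-datacenter-east", "example-peering-hub-1"],
    logical_dependencies=["shared-identity-platform", "token-signing-service"],
    software_components=["pkg:generic/example-mail-server@4.2"],
)

The value of a record like this lies less in any single field than in the ability to see which shared logical dependencies, such as a common token-signing service, sit underneath many customers at once.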

One way to bring cloud providers to the table will be to offer a carrot alongside the stick: Frame the push for visibility as a necessary prerequisite to building a coalition around stringent but uniform standards for cloud security, ones that would create consistent requirements for cloud companies to comply with and for regulators to oversee. This would be a shot in the arm for the current U.S. National Cybersecurity Strategy’s effort to “rebalance responsibility” in cybersecurity toward those “most capable and best-positioned actors,” and a logical extension of the Secure by Design campaign being waged by CISA.

The FedRAMP program could serve as a proving ground for new transparency requirements and smarter controls. The CSRB report on the Microsoft incident recommends important updates to the FedRAMP process such as mandatory audits after incidents and increased scrutiny of crucial systems like IAM that provide linchpin security for cloud systems. There is an ongoing effort to expand and improve FedRAMP’s basic control processes—this is an opportunity for the U.S. government to take action on some of the sobering lessons from this incident and to advance a more consistent and thorough standard for cloud security.

In the absence of a total regulatory overhaul of the U.S. cloud provider industry, the CSRB itself has an important role to play here: It must keep shining an uncomfortable light on complex systems and their failures, forcing businesses to snap out of blissful ignorance about the risks they might be facing. As lawmakers consider codifying the board into law, they can make it even stronger for this mission by (a) expanding its resources and providing for full-time members to increase the number and depth of investigations it can complete; and (b) providing it with subpoena power, so that it can ask hard questions of companies like Microsoft after these kinds of spectacular meltdowns. The scope of the CSRB’s analysis of Microsoft should have included the business processes that led to these security failures—true root cause analysis—which would have almost certainly engendered opposition from Microsoft and its lawyers: Questions about failures in process and oversight are at least as important as questions about failures in software and design. As the board continues to mature, it should emphasize transparency about its own processes and the barriers it faces in areas like seeing its recommendations implemented. This will help iteratively improve the institution over time as it works to unpack complex and important failures in cyberspace.

* * *

Pushing cloud providers for consistent and meaningful transparency won’t be easy. As the authors have written previously, cloud providers are “magnificent engines of complexity”—this complexity drives institutions to throw up their hands and accept a flawed status quo rather than sorting out the mechanics of trust and effective risk management. The CSRB’s report on Microsoft underlines that this strategy is not viable in the long run.

It’s possible that Microsoft has gotten the message. The latest response comes not from the usual sources on the policy and legal teams, but directly from the CEO most responsible for shifting Microsoft into the cloud market: On an earnings call last week, Satya Nadella shared, “We are doubling down on this very important work, putting security above all else, before all other features and investments.” With the aftermath of one cloud security failure behind it and a major compromise by Russian actors possibly still ongoing since January, it’s clear that Microsoft needs to make this pivot.

Hopefully, Nadella is as good as his word. But wouldn’t it be nice if policymakers and Microsoft’s customers didn’t have to just hope?


Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She works on the Initiative’s systems security portfolio, including leading workstreams on technologies with systemic security impacts such as cloud computing and artificial intelligence.
Trey Herr is Assistant Professor of cybersecurity and policy at American University’s School of International Service and director of the Cyber Statecraft Initiative at the Atlantic Council. At the Council his team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, cyber safety and growing a more capable cybersecurity policy workforce.
Marc Rogers is Co-Founder and Chief Technology Officer of the AI observability startup nbhd.ai. With a career that spans decades, Marc has been hacking since the ’80s. Professionally, Marc has served as VP of Cybersecurity Strategy for Okta, Head of Security for Cloudflare, and Principal Security Researcher for Lookout. He has been a CISO in South Korea and spent a decade managing security for the UK operator Vodafone.
