
Stitching Together the Cybersecurity Patchwork Quilt: Infrastructure

Jim Dempsey
Wednesday, September 18, 2024, 1:00 PM
Initiatives on connected products and critical infrastructure reflect an understandable incrementalism, but gaps need to be filled—urgently.


Editor’s Note: This piece is the second in a two-part series by Jim Dempsey on cybersecurity initiatives taken during the Biden and Trump administrations. Part 1, “Stitching Together the Cybersecurity Patchwork Quilt: Data,” can be found here. Dempsey’s work on this series was funded in part by a grant from the Army Cyber Institute at West Point.

The year 2024 is emerging as a watershed in U.S. cybersecurity policy—and almost nobody has noticed. I don’t blame them. The pace of legislative and regulatory actions has been dizzying, making it hard to see the big picture. In Part 1 of this series, I focused on efforts to limit the flow of Americans’ sensitive data to foreign adversaries, covering TikTok, biometric identifiers and other sensitive data, data brokers, and, briefly, the Kaspersky anti-virus software. Among other things, I sought to trace the consistent thread linking these actions across the Trump and Biden administrations. (A quick update on Part 1: The Department of Justice missed the Aug. 26 deadline set in President Biden’s February 2024 executive order for issuing a proposed rule controlling export of Americans’ sensitive data. As noted in Part 1, missing such a deadline, even in a presidential order, is not unusual and was predictable in this case, given the complexity of the problem.)

In Part 2, I focus on infrastructure and connected products: connected cars, cloud computing resources, artificial intelligence capabilities, ship-to-shore cranes, undersea cables, and one of the internet’s key protocols. (Of course, the distinction between data and hardware is blurry, since one concern with connected products is that they collect and share sensitive data about Americans.) My goals are to describe the rapidly growing patchwork quilt of U.S. cybersecurity law, call out some of the major remaining gaps, highlight the urgent need for Congress to clarify the authority for some of the cybersecurity actions already taken and for those that need to be taken, and urge the Biden administration to try for clarification of statutory authority in at least one or two more sectors before the year closes.

Connected Cars

In Part 1 of this series, I explained how the Biden administration’s massively complex rulemaking on sensitive data builds on Executive Order 13873 issued by President Trump in 2019 under the International Emergency Economic Powers Act (IEEPA). Upon taking office, President Biden kept the Trump executive order in place. Since then, the Biden administration has renewed every year the declaration of a national security emergency regarding the nation’s information and communications technology and services supply chain that was the predicate for the Trump order.

In February, the Commerce Department—acting under that Trump-era executive order—announced an inquiry into the national security risks of connected vehicles, which are automobiles that integrate onboard networked hardware and software to communicate with external networks or with other cars. Connected cars can access global navigation satellites for geolocation, communicate with intelligent highway systems, receive over-the-air software or firmware updates, and access roadside assistance services. At the same time, however, they collect a wide range of data—including biometrics, in-car conversations, vehicle location, sensor data and images, vehicle performance data, and even financial information. While the data practices of U.S. manufacturers of connected cars pose their own privacy issues, the Commerce Department’s announcement was specifically concerned with China-manufactured components in such vehicles.

According to Secretary of Commerce Gina Raimondo, the goal of this inquiry is “to understand the extent of the technology in these cars that can capture wide swaths of data or remotely disable or manipulate connected vehicles.” The Commerce Department issued an advance notice of proposed rulemaking asking for feedback on the definition of “connected vehicles”; how potential classes of information and communications technologies and services transactions integral to connected vehicles may present undue or unacceptable risks to U.S. national security; implementation mechanisms to address these risks through prohibitions or mitigation measures; and whether to create a process for requesting approval to engage in an otherwise prohibited transaction by demonstrating that the risk to U.S. national security is sufficiently mitigated. 

So, as with President Biden’s February Executive Order 14117 on Americans’ sensitive data, described in Part 1 of this series, the Commerce Department initiative contemplates a complicated regulatory scheme that will touch all components of connected vehicles. Unlike the sensitive data order, the contemplated process here focuses on products, since the risk is not only collection of data stored in and about these cars but also the ability to use connected components to remotely disable vehicles while they are on the road. And while Biden’s February order gives authority to regulate sensitive data to the Justice Department, development and enforcement of any eventual program for connected vehicles will reside with the Bureau of Industry and Security within the Commerce Department, which also has the lead on enforcing other elements of President Trump’s Executive Order 13873 on information and communications technologies and services. (That power was on display in the Commerce Department’s recent banning of anti-virus software and cybersecurity products or services offered by the Russia-based Kaspersky company.)

Cloud Services and Artificial Intelligence

On Jan. 29, the Commerce Department published a rulemaking on foreign access to U.S.-based cloud services, referred to as infrastructure as a service (IaaS). The rulemaking is partly intended to implement another Trump measure, Executive Order 13984, issued on Jan. 19, 2021, which Biden also left in place. That executive order, also based on IEEPA, found that foreign actors were using U.S. cloud services to carry out malicious cyber-enabled activities in ways that were difficult to track, including by hiding on U.S.-based servers their malware and the command-and-control systems for their botnets of compromised computers. The order directed the secretary of commerce to propose regulations requiring U.S. IaaS providers to verify the identity of their foreign customers—essentially, a know-your-customer or customer identification program. It also authorized the secretary to prohibit or impose unspecified conditions on accounts with foreign persons obtaining or reselling U.S. IaaS products for use in malicious cyber-enabled activities. 

In September 2021, under President Biden, the Commerce Department published an advance notice of proposed rulemaking to implement Executive Order 13984. The January IaaS rulemaking action addresses comments submitted in response to that notice. The action also puts forth an actual proposed rule (described below), subject to a further round of comment. (Note that it is taking the Biden administration a remarkably long time to implement the Trump executive orders, even though official Biden policy is to continue these orders and projects that Trump began. For example, and as noted in Part 1 of this series, it took the Biden administration until June of this year to adopt the first ban under Trump’s 2019 executive order against a foreign-controlled information and communications technology or service—specifically the Kaspersky cybersecurity software and services.)

The question of foreign use of U.S. cloud services received further attention in President Biden’s October 2023 executive order on artificial intelligence. That order, Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” directed the secretary of commerce to propose regulations requiring U.S. providers of IaaS to submit a report to the secretary when conducting a transaction with a “foreign person” to train a large artificial intelligence (AI) model with capabilities that could be used in malicious cyber-enabled activity. 

The Commerce Department’s Jan. 29 notice of proposed rulemaking is intended to implement both the Trump executive order on IaaS and the cloud-related elements of the Biden executive order on AI. The proposed rule would define IaaS as “a product or service offered to a consumer, including complimentary or ‘trial’ offerings, that provides processing, storage, networks, or other fundamental computing resources, and with which the consumer is able to deploy and run software that is not predefined, including operating systems and applications.” It would require U.S. IaaS providers and foreign resellers of U.S. IaaS products to exercise reasonable due diligence to ascertain the true identity of any customer or beneficial owner of an account who claims to be a U.S. person. (This is huge: In order to conduct such reasonable due diligence, as a practical matter, IaaS providers might need to identify all account holders, including U.S. persons.) For any customer who appears to be foreign, it would require collection and verification of the customer’s name, address, the means and source of payment for each account, email addresses and telephone numbers, and internet protocol addresses used for access or administration of the account. In addition, the proposed regulation would require U.S. IaaS providers to require foreign resellers of their products to verify the identity of foreign persons who open or maintain an account with the foreign reseller.
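To make concrete what this know-your-customer obligation could look like in practice, here is a minimal sketch, in Python, of the kind of record an IaaS provider might maintain for a foreign customer under the proposed rule. The field names and structure are my own illustration; the proposal describes the categories of information to be collected and verified, but it does not prescribe any particular schema or implementation.

```python
# Illustrative only: a hypothetical record capturing the customer identification
# fields described in the proposed rule. Not an official or prescribed schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ForeignCustomerRecord:
    name: str
    address: str
    payment_means_and_source: str                  # means and source of payment for the account
    email_addresses: List[str] = field(default_factory=list)
    telephone_numbers: List[str] = field(default_factory=list)
    access_ip_addresses: List[str] = field(default_factory=list)  # IPs used to access or administer the account
    identity_verified: bool = False                # set once the provider has verified the information
```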

So, instead of addressing U.S. data, the initiative focuses on foreign adversaries’ use of the United States’s considerable cloud infrastructure to carry out cyber operations or to train AI models. In a way, it is the service that is being exported, not any data. 

Limiting adversaries’ purchase of U.S. technology products or services is a national security concern at the core of IEEPA. And a know-your-customer system could be seen as the first step toward an IEEPA-based control on “export” of cloud services that adversaries use to mount cybersecurity attacks on the U.S. The proposal, however, has some problems. As Richard Salgado and Bob Litt have explained, the Stored Communications Act (SCA), part of the Electronic Communications Privacy Act of 1986, prohibits “providers of remote computing service” from disclosing customer-identifying information without a subpoena, court order, or warrant, and the definition of remote computing service under the SCA overlaps quite substantially with the definition of IaaS under this new initiative. More broadly, U.S. law has never mandated true name registration for usage of common internet services. And, as Peter Swire has suggested, the collection of valid identifying data on advanced foreign actors may be impossible. While the ease with which foreign-based cyberattackers can hide in plain sight on cloud infrastructure in the U.S. is galling, it may be better for law enforcement agencies to use the formidable tools already available under the SCA and the Foreign Intelligence Surveillance Act for compulsory disclosure of subscriber-identifying information in particular criminal and intelligence investigations.

Cranes at U.S. Ports

Also in February, the Biden administration took a set of actions on maritime cybersecurity: The president issued an executive order adding generic cybersecurity considerations to existing regulations aimed at protecting vessels and ports from sabotage or other damage. The Coast Guard issued a proposed rulemaking to add detailed cybersecurity requirements to other existing security rules for the nation’s marine transport system. And the Coast Guard issued a maritime security directive dictating cyber risk management actions for owners or operators of ship-to-shore cranes manufactured by Chinese companies. 

The latter directive addresses a matter of immediate concern: 80 percent of the cranes at U.S. ports were made in China and are internet connected. (A congressional report released Sept. 12 adds more detail.) By design, these cranes may be controlled, serviced, and programmed from remote locations and thus are vulnerable to cyberattack. The directive contains security-sensitive information and, therefore, was not made available to the general public.

The proposed rulemaking, in turn, covers a much wider range of maritime transportation infrastructure, including vessels, off-shore oil and gas drilling operations, and port facilities—all of which are increasingly cyber-connected. It would establish a comprehensive set of cybersecurity requirements, based in many respects on the cross-sector cybersecurity performance goals issued by the Cybersecurity and Infrastructure Security Agency and similar to the emergency directives issued by the Transportation Security Administration for pipelines, railroads, and the aviation sector. Under the proposed rulemaking, each owner or operator of a covered vessel or facility would be required to conduct a cybersecurity risk assessment by analyzing all of its networks to identify vulnerabilities to information technology and operational technology systems and the risk posed by each digital asset. Based on that assessment, covered entities would be required to develop a cybersecurity plan, which would then have to be approved by the Coast Guard. The proposed rule would also mandate specific measures related to account security (such as automatic account lockout after repeated failed login attempts on all password-protected information technology and operational technology systems), device security (such as disabling applications running executable code on critical systems), data security, cybersecurity training for personnel, supply chain management, network segmentation, incident reporting, and physical security. 

Unlike the sensitive data, connected cars, and IaaS initiatives, the maritime actions were not based on IEEPA. Instead, they were based on the Maritime Transportation Security Act of 2002, a provision in the Outer Continental Shelf Lands Act of 1953, and other statutory provisions related to vessels.

Submarine Cables

Tucked into President Biden’s February 2024 sensitive data executive order was a provision dealing with submarine cables. “The risk of access to [sensitive] data by countries of concern can be, and sometimes is, exacerbated where the data transits a submarine cable that is owned or operated by persons owned by, controlled by, or subject to the jurisdiction or direction of a country of concern, or that connects to the United States and terminates in the jurisdiction of a country of concern,” the order reads.

Under the Cable Landing License Act of 1921, operators seeking to terminate one end of an undersea cable in the U.S. must obtain a license from the Federal Communications Commission (FCC). To assess the law enforcement, intelligence, and national security implications of its license decisions, the FCC defers to the interagency Committee for the Assessment of Foreign Participation in the United States Telecommunications Services Sector (known as Team Telecom). The sensitive data executive order directed Team Telecom to prioritize its reviews of existing licenses for submarine cable systems “that are owned or operated by persons owned by, controlled by, or subject to the jurisdiction or direction of a country of concern, or that terminate in a country of concern.”

Team Telecom was already using its authority to scrutinize the data security implications of new license applications. The executive order seems to add little (if anything) to that. Rather, it seems that the purpose of Biden’s order in this regard was to nudge the committee to hasten its review of existing licenses. The FCC has already reviewed and revoked licenses of Chinese telecom companies to offer services in the U.S., and it has proposed requiring regular reviews of existing licenses to take into account rapidly changing national security concerns. President Biden’s February order seems to be saying that existing landing licenses deserve similar reconsideration. 

But the order gives little further guidance. The Team Telecom process on submarine cables has been subject to criticism, not least for delays. Industry stakeholders favor more rapid buildout of submarine cable infrastructure as the best way to achieve resilience, while government officials have given primacy to national security concerns. Rick Salgado warned last year that the U.S. government “has taken an ad hoc approach, which relies heavily on a regulatory regime that is effective on paper but in practice creates disincentives to build state-of-the-art cables on new routes.”

Securing a Key Internet Routing Protocol

There are now two somewhat competing approaches to increasing the security of information routed across the internet. In June, the FCC issued a notice of proposed rulemaking aimed at providers of broadband internet access services and their reliance on something called the Border Gateway Protocol (BGP). Earlier this month, the White House issued its “roadmap” to enhancing the security of internet routing, also focused on BGP.

As the Internet Society explains, BGP “is the language spoken by routers on the Internet to determine how packets can be sent from one router to another to reach their final destination.”  The challenge is that BGP “does not directly include security mechanisms and is based largely on trust between network operators that they will secure their systems correctly and not send incorrect data.” For a long time, engineers and policymakers have recognized the risk that malicious attackers could affect the routing tables used by BGP. As the FCC put it in the announcement of its proposal, “these ‘BGP hijacks’ can expose Americans’ personal information; enable theft, extortion, and state-level espionage; and disrupt services upon which the public or critical infrastructure sectors rely.” In 2022, the Cybersecurity and Infrastructure Security Agency summed up the situation by declaring the threat of BGP insecurity “critical and widespread.”

Reflecting a philosophical split within the executive branch, the FCC and White House responses diverge. The FCC favors an approach known as Resource Public Key Infrastructure (RPKI), which enables cryptographically verifiable associations between specific IP address blocks and the “holders” of those internet number resources. Rather than flat-out mandate use of RPKI, the commission’s proposal would require service providers to prepare and update confidential BGP security risk management plans at least once a year. Such plans would have to describe and attest to the specific efforts service providers are making to secure their BGP routing architecture using RPKI as well as other methods at their disposal. A select number of the largest, most significant service providers would be required to file their BGP plans with the commission, with the plans of the remaining service providers available to commission staff upon request. The White House roadmap, by contrast, is more aligned with the voluntary, consensus-based, industry-led approach that has characterized internet governance for decades. While also endorsing RPKI and setting forth detailed recommendations for the private sector, it focuses on getting the government’s own house in order, using the procurement power to require the federal government’s service providers to identify and filter invalid routing information on those government-procured services. 
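For readers who want a sense of what RPKI-based routing security actually does, the core operation is route origin validation: a received BGP announcement is checked against signed route origin authorizations (ROAs), which bind an address prefix (up to a maximum length) to the autonomous system authorized to originate it. Below is a minimal sketch of that comparison logic in Python, loosely following RFC 6811. The ROA entries are hypothetical, and a real deployment would retrieve and cryptographically verify ROAs from the RPKI repositories rather than hard-code them.

```python
# Illustrative sketch of RPKI route origin validation (RFC 6811) in pure Python.
# The ROA data below is hypothetical; real validators fetch signed ROAs from the RPKI.
from ipaddress import ip_network

# Each ROA authorizes an origin AS to announce a prefix, up to a maximum length.
ROAS = [
    {"prefix": ip_network("192.0.2.0/24"), "max_length": 24, "origin_as": 64500},
]

def validate_route(announced_prefix: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'not-found' for an announced route."""
    prefix = ip_network(announced_prefix)
    covered = False
    for roa in ROAS:
        if prefix.subnet_of(roa["prefix"]):
            covered = True
            if origin_as == roa["origin_as"] and prefix.prefixlen <= roa["max_length"]:
                return "valid"
    # Covered by at least one ROA but matching none -> invalid (possible hijack).
    return "invalid" if covered else "not-found"

print(validate_route("192.0.2.0/24", 64500))     # valid
print(validate_route("192.0.2.0/24", 64666))     # invalid: wrong origin AS
print(validate_route("198.51.100.0/24", 64500))  # not-found: no covering ROA
```

The “invalid” result in the second example is the hijack scenario the FCC’s proposal is aimed at: a covering authorization exists, but the announcement comes from the wrong origin network.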

If the FCC were to move forward with any regulatory approach to internet security, whether on BGP or other issues, it might find itself on shaky legal ground. In its proposal to increase the security of internet routing, the FCC cited its authority over common carriers. But the FCC’s June re-re-classification of broadband internet service providers as common carriers has already been put on hold by a federal appeals court. The FCC also said that Section 706 of the Telecommunications Act of 1996, which requires the commission to inquire whether “advanced telecommunications capability is being deployed to all Americans in a reasonable and timely fashion,” may provide additional support for its internet routing proposal. And the commission tentatively concluded that the Communications Assistance for Law Enforcement Act (CALEA) also grants authority to the commission to secure internet routing, a theory that clearly violates Justice Antonin Scalia’s maxim of statutory interpretation that “Congress does not hide elephants in mouseholes” (not that CALEA is a mousehole in other respects).

Congressional Action Needed

Relying on the courts’ traditional deference to the president on national security matters, some, possibly many, of the aforementioned executive branch cybersecurity initiatives could in fact survive recent Supreme Court rulings limiting the power of regulatory agencies. But why risk it? Shouldn’t Republicans, at least, instead follow through on their platform promise to “use all tools of National Power ... to raise the Security Standards for our Critical Systems and Networks” in a bipartisan effort to add “cybersecurity” to existing laws?

Start with IEEPA. As Anupam Chander and Paul Schwartz document in their comprehensive new article, this pre-internet law, which was actually intended to limit presidential power, is carrying an increasingly heavy load. The rulemakings on sensitive data, connected cars, and IaaS were all issued under the act. While the Defense Production Act, the separate statute that establishes the authority of the Committee on Foreign Investment in the United States, was amended in 2018 to specifically include references to cybersecurity, critical infrastructure, and sensitive data, IEEPA has not been amended in a similar fashion. (IEEPA was amended in 2014 to address economic or industrial espionage in cyberspace, but the provisions do not fit well with the cybersecurity concerns of today.)

The congressional actions this year to ban TikTok and other apps and to regulate data brokers, discussed in Part 1 of this series, sidestepped IEEPA by enacting free-standing authorities. They may or may not be effective, but they are too narrow to address broader cybersecurity concerns. The data broker provision, for example, does not regulate first-party data collectors, which the administration would address in its sensitive data rulemaking. And the provision on apps does not cover the components of connected cars. Neither covers concerns with exploitation of U.S. cloud computing resources. 

The Berman amendments to IEEPA, which deny the president the power to restrict the import or export of information, are not the only barrier to use of IEEPA. The Supreme Court’s major question doctrine, first articulated full bore in West Virginia v. EPA, may also cast doubt on the authority of the executive branch to invoke IEEPA to issue rules on privacy or data security. In the EPA case, the Court stated that, “in certain extraordinary cases,” regulatory agencies could not issue rules on “major questions” affecting “a significant portion of the American economy” without “clear congressional authorization.” The flow of data internationally certainly affects a significant portion of the American economy, and IEEPA certainly does not contain a clear congressional authorization to regulate it. The same may be said of components in automobiles and maybe of cloud computing. The major question doctrine is unpredictable. In this context, perhaps the president’s power on matters of national security will be a sufficient counterweight. 

But why not be explicit, especially after the double-whammy of Loper Bright v. Raimondo, which repudiated the doctrine of judicial deference to regulatory agency interpretations of statutes? Now that politicians’ obsession with TikTok has played itself out with the adoption of a legislated divestiture requirement, kicking that can to the courts, there may be interest in some truly narrow exceptions to IEEPA. To address the kinds of issues the administration is pursuing in its sensitive data rulemaking, it might be sufficient to parenthetically add the words “but not including the exportation of the sensitive personal data of U.S. persons” after the word “information” in one of the Berman amendments—15 U.S.C. § 1702(b)(3). That would simultaneously address the major question issue and leave the Berman amendment’s protection of free expression intact. And unlike the TikTok bans, it would not aim to keep foreign apps out of the U.S., nor would it limit the First Amendment rights of Americans to access information from anywhere in the world. Instead, it would only limit the outward flow of Americans’ sensitive personal information. 

Such a limited fix should survive First Amendment scrutiny. The Supreme Court’s 2011 decision in Sorrell v. IMS Health Inc. struck down a Vermont law that restricted the sale, disclosure, and use of pharmacy records that revealed the prescribing practices of individual doctors. The Court made it clear that the First Amendment rights at issue in a sale of the data were not the rights of the sellers of the data but rather the rights of the purchasers: drug companies that wanted to use the data for marketing purposes. In the words of the Court, “The statute thus disfavors marketing, that is, speech with a particular content. More than that, the statute disfavors specific speakers, namely pharmaceutical manufacturers.” A content- and speaker-based burden on speech almost never survives First Amendment scrutiny. But the adversary nations seeking to acquire Americans’ personal data in bulk are not interested in marketing, or speech of any kind. Sorrell indicates that their interests as purchasers without any intent to speak should not be protected under the First Amendment. Civil liberties supporters of the Berman amendments should not want to defend the sale of personal data, lest all privacy law fall.

In terms of other cybersecurity concerns, it may be sufficient to amend IEEPA, as the Defense Production Act was amended, to make it clear that its authority to “deal with” threats to “the national security, foreign policy, or economy of the United States” includes the cybersecurity of critical infrastructure, critical technologies, and sensitive personal data. Indeed, the language added to the Defense Production Act referencing technology, infrastructure, and data (the so-called TID provisions) may be just the ticket for IEEPA. Of course, a narrow exception to the Berman amendment for export of sensitive data would not address the propagandistic potential of apps like TikTok; for that, we will have to see if the freestanding ban of earlier this year survives. But that is a feature, not a bug, of my approach: Only incremental, targeted authorizations of cybersecurity regulatory power can walk the fine line necessary for regulation of communications services, internet resources, and data.

Scanning the Landscape and Identifying the Gaps in Authority

In its March 2023 cybersecurity strategy, the Biden administration promised to use existing laws to set minimum cybersecurity requirements in critical sectors and, where agencies lack clear statutory authority, work with Congress to close the gaps. So far, the administration has not advanced a single legislative proposal to close a regulatory gap. (Legislation granting the Food and Drug Administration authority to adopt regulations for connected medical devices preceded the March 2023 strategy.)

President Biden’s April 30 national security memorandum on critical infrastructure security and resilience (NSM-22) called for a review of the gaps in the federal government’s capacity to require and enforce minimum security and resilience requirements for critical infrastructure.  It directed the secretary of Homeland Security, acting through the director of the Cybersecurity and Infrastructure Security Agency, to present a legislative proposal for any necessary additional authorities. But the review is not due until Jan. 25, 2025, after the next president is sworn in. Why wait? I would guess there’s already a memo somewhere in the White House that surveys critical infrastructure and identifies the gaps in existing statutory authorities. At this point, there seems little value in keeping that memo confidential.

Time is running out this year, but incremental action is not inconceivable. There’s still a continuing resolution and a defense authorization bill to enact. The administration should issue its gap analysis as a scorecard and as a challenge to Congress (this one and the next), to itself and the next administration (whether Harris or Trump), and to the relevant sectors. 

One place to start is the nation’s drinking water system. Iranian government hackers have compromised devices in the U.S. drinking water system. So have Chinese state-sponsored actors. And Russian state-sponsored hackers. Yet water industry trade associations and Republican state attorneys general blocked Biden administration efforts last year to improve the cybersecurity of drinking water systems, arguing that the Environmental Protection Agency (EPA) had exceeded its authority under the Safe Drinking Water Act. So, in March of this year, the White House was left to write letters to governors asking them to please ask the water systems they regulate to please change their default passwords and to please send someone to a meeting to discuss the problem. To fill the gap, the EPA has tried to bootstrap its enforcement of a statutory requirement that water systems conduct risk and resilience assessments and develop emergency response plans, but that statute provides no authority to states or the EPA to actually require water systems to address the cybersecurity vulnerabilities they identify.

One straightforward approach would be to amend the federal Safe Drinking Water Act to authorize the EPA administrator to promulgate a national primary drinking water regulation addressing the cybersecurity of public water systems. Under the act, enforcement would automatically fall to the states. To avoid burdening the many small systems that exist throughout the country, the cybersecurity regulation—like certain other requirements of the act—could be limited to systems serving a population of more than 3,300 persons. Another approach would leave it to the states to actually adopt the cybersecurity standards: The section of the act that grants states primary responsibility for enforcement already specifies criteria that states must meet if they want to assume that responsibility. (Almost all states have.) The criteria could be amended to specify that a state can assume responsibility only if it has adopted and is implementing adequate cybersecurity standards for public water systems. A third approach would codify the interpretation that the EPA issued last year: If any public water system that is subject to the sanitary survey requirements adopted pursuant to the act uses an industrial control system or other operational technology as part of the equipment or operation of any required component of the sanitary survey, then the state must evaluate the adequacy of the cybersecurity of that operational technology for producing and distributing safe drinking water. The language could further specify that if the state determines that a cybersecurity deficiency identified during a sanitary survey is significant, then the state must use its authority to require the public water system to address the significant deficiency.

Attorneys at the EPA may well point out something that I am missing in the statutory structure. But my point is this: It does not take a lot of words to meet the requirements of Loper Bright and West Virginia v. EPA. If clear cybersecurity language is added to an existing regulatory framework, it is not necessary to build a full regulatory scheme from scratch. (Consider the Energy Policy Act, where the presence of three words, “including cybersecurity protection,” in the definition of “reliability standard” has been enough to support the adoption of an entire suite of mandatory cybersecurity standards for the bulk electric power system.)

And, should it be desired, that more complicated approach is on the table: The American Water Works Association, which argued against the EPA action last year, supports a bill that would establish a self-regulatory system, where the industry would write the rules for itself, subject to EPA review and approval. Such a system seems to be working for the bulk electric power system, where a self-regulatory body has written and enforces a series of cybersecurity standards, subject to the approval of the Federal Energy Regulatory Commission. While one may be skeptical of industry writing its own rules, it would certainly be better than the current lack of any rules at all.

Think Comprehensively, Act Incrementally

Pipelines. Railroads. Aviation. Connected medical devices. Now ship-to-shore cranes. Initial steps on internet routing. Soon(ish) sensitive data. Maybe connected cars. Piece by piece, the Biden administration, in part building on Trump initiatives, has been stitching together a patchwork quilt of cybersecurity regulation for critical infrastructure. But key sectors, some identified above, still need to be addressed, and some complicated rulemakings need to be brought to completion.

And don’t forget: The promulgation of cybersecurity regulations is just the start. Effective implementation is a whole different project. The agencies that have issued binding standards now need to enforce them, year after year. Self-certification and third-party audits will not be sufficient.


Jim Dempsey is a lecturer at the UC Berkeley Law School and a senior policy advisor at the Stanford Program on Geopolitics, Technology and Governance. From 2012-2017, he served as a member of the Privacy and Civil Liberties Oversight Board. He is the co-author of Cybersecurity Law Fundamentals (IAPP, 2024).
