Cybersecurity & Tech

A Retrospective Post-Quantum Policy Problem

Herb Lin
Wednesday, September 14, 2022, 9:34 AM

Quantum computing will reveal old secrets once thought to be forever secure. What then?

Interior of an IBM Quantum computing system. (Graham Carlow, https://flic.kr/p/24VJTSD; CC BY-ND 2.0, https://creativecommons.org/licenses/by-nd/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

In May 2022, the White House issued a National Security Memorandum that stated:

a quantum computer of sufficient size and sophistication—also known as a cryptanalytically relevant quantum computer (CRQC)—will be capable of breaking much of the public-key cryptography used on digital systems across the United States and around the world. When it becomes available, a CRQC could jeopardize civilian and military communications, undermine supervisory and control systems for critical infrastructure, and defeat security protocols for most Internet-based financial transactions.

This concern is not new. The theoretical possibility that quantum mechanics could be used as the basis for computation was first posed in the physics literature around 1980. In 1994, Peter Shor developed an algorithm that could rapidly factor large numbers into their constituent primes if run on a quantum computer. The development and publication of Shor’s algorithm raised the possibility of undermining the RSA algorithm that underlies most secure messaging over the internet. The security afforded by the RSA algorithm is based on the difficulty of factoring large numbers, and thus Shor’s algorithm presents a potential threat to RSA.
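The dependence of RSA on the difficulty of factoring can be seen in a toy example. The sketch below (with deliberately tiny, insecure numbers; real RSA moduli are thousands of bits) shows that anyone who can factor the public modulus n, as Shor's algorithm could on a sufficiently large quantum computer, can recompute the private key and decrypt:

```python
# Toy illustration only -- not real cryptography. RSA's security rests on the
# difficulty of factoring n = p * q; factoring n reveals the private key.

p, q = 61, 53            # secret primes (illustrative; real RSA uses huge primes)
n = p * q                # public modulus, n = 3233
phi = (p - 1) * (q - 1)  # Euler's totient -- computable only if you can factor n
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)   # encrypt with the public key (e, n)

# An attacker who factors n = 61 * 53 recovers phi, hence d, and decrypts:
attacker_d = pow(e, -1, (61 - 1) * (53 - 1))
plaintext = pow(ciphertext, attacker_d, n)
assert plaintext == message      # the "secret" is fully recovered
```

The point of the sketch is that no secret key material needs to be stolen: the public key plus the ability to factor is enough, which is exactly what makes recorded ciphertext vulnerable after the fact.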

Since 1994, the cryptography community has speculated about the forthcoming availability of quantum computing hardware that could run Shor’s algorithm. In the early days of such speculation, estimates of that time frame ranged from “pretty soon” to “probably never.” However, in recent years, the emerging consensus seems to be that quantum computing, as it applies to cryptanalysis, cannot be dismissed as mere puffery. Scientific and engineering progress in quantum computing over the past 25 years has been nontrivial, and many nations are supporting extensive research efforts into quantum computing. In 2016, under the Obama administration, the U.S. National Institute of Standards and Technology initiated the first public U.S. government effort to develop cryptographic algorithms that would be resistant to quantum computing. The Trump administration continued this interest by proposing substantial increases in funding for quantum information sciences. And, as noted above, the Biden administration’s 2022 National Security Memorandum has continued to emphasize the importance of quantum computing and has directed federal agencies to begin preparing for a transition.

Congress has also expressed concern about encryption vulnerabilities that may result from quantum computing. For example, the House of Representatives passed the Quantum Computing Cybersecurity Preparedness Act in July 2022. This bill directs the Office of Management and Budget to begin migrating U.S. government information technology systems to post-quantum cryptography one year after the National Institute of Standards and Technology issues post-quantum cryptography standards. Also in July 2022, a bipartisan group of senators introduced companion legislation in the U.S. Senate.

These efforts have generally focused on the future: developing the technology base needed to secure sensitive U.S. communications going forward. As noted above, policy attention to a post-quantum cryptography (PQC) world has centered primarily on facilitating an infrastructural transition to quantum-resistant encryption algorithms.

However, despite these efforts, policymakers have given little or no attention to what could be called a retrospective post-quantum problem. To wit: pre-quantum public-key encryption algorithms such as RSA have almost certainly been used to protect nearly all classified U.S. government messages since the 1970s, when the mathematics of public-key encryption was first discovered. A properly encrypted message is useless to anyone without the decryption key or the technology to discover that key, but even encrypted messages can be recorded for future analysis. Indeed, intelligence agencies have a habit of collecting information just in case it might be useful in the future, and there is no reason to suppose that these encrypted messages have not been recorded somewhere by some adversary government.

In a PQC world, those recorded encrypted messages will be vulnerable to decryption. In their decrypted form, they potentially hold a treasure trove of secrets. Though these are secrets from the past, decrypted messages may reveal embarrassments and dangers with potentially detrimental policy implications for today and tomorrow. The possibilities for these secrets are endless: Salacious information about a world leader currently believed to be an upright and upstanding patriot? Operational instructions regarding an assassination attempt or a coup supported or encouraged by U.S. authorities despite public denials? A communique about alien technology discovered by accident on the ocean floor?

As Chris Jay Hoofnagle and Simson Garfinkel rightly point out in Lawfare, even a remarkable breakthrough resulting in a quantum computer capable of factoring the large numbers characteristic of RSA public keys would not automatically undo all RSA-enabled encryption everywhere. Rather, the owner of such a computer would have to spend its quantum computing resources decrypting one message at a time. And since an encrypted uninteresting message cannot be distinguished from an encrypted interesting one, the owner may need to devote significant time, effort, and funding to decrypting a large volume of recorded messages before an interesting message is found.

That said, this is largely a matter of economics. The cost of quantum cryptanalysis is likely to eventually drop to a level at which it makes sense to devote quantum computing resources to decrypting old, encrypted messages.
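The needle-in-a-haystack economics sketched above can be made concrete with a back-of-the-envelope calculation. Every number below is invented purely for illustration; nothing in the source specifies actual costs or volumes:

```python
# Illustrative search-cost sketch: all figures are assumptions, not estimates.

cost_per_decryption = 1000.0   # assumed cost (dollars) of quantum compute per message
interesting_fraction = 1e-4    # assumed share of recorded traffic worth reading

# If interesting messages cannot be distinguished until decrypted, the expected
# number of decryptions before finding one is roughly 1 / interesting_fraction.
expected_decryptions = 1 / interesting_fraction
expected_cost = expected_decryptions * cost_per_decryption
print(f"~{expected_decryptions:,.0f} decryptions, ~${expected_cost:,.0f} per find")

# If hardware progress cuts the per-message cost tenfold, each find becomes
# tenfold cheaper -- which is why the threat grows even without new breakthroughs.
print(f"after a 10x cost drop: ~${expected_cost / 10:,.0f} per find")
```

The specific values do not matter; the structure does. Expected cost per interesting find scales as (cost per decryption) / (fraction interesting), so falling hardware costs steadily convert the adversary's recorded archive from a curiosity into an affordable target.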

Policymakers would be wise to consider the very real possibility that in a PQC world, messages they once believed would be kept secret could in fact be made public. The adversary cannot be confident that it will retrieve a large volume of interesting information from its trove of encrypted recorded messages, at least not in the immediate aftermath of a true quantum computing breakthrough. Still, the United States cannot be fully confident that any of its secrets encrypted with pre-quantum algorithms will never be revealed. And the danger that such secrets will be revealed will only grow as the adversary is able to devote more quantum computing resources to retrospective decryption.

It is a common best practice for organizations to do a damage assessment in the wake of a data breach to identify what information may have been compromised and then to develop and implement a strategy to deal with that compromise. Here, policymakers have the distinct luxury of knowing that a data breach is looming in the future, even though they do not know precisely when it will occur. Every U.S. government agency that has sent a confidential message in the past should have at least a small effort devoted to developing plans for what that agency should do if and when particularly sensitive messages from the past are revealed in the PQC future.


Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
