Haste, Waste and Choice
Four years ago, there was the Heartbleed problem, a common-mode failure among the many products that shared one implementation (OpenSSL) of a networking standard: the products were vulnerable in the same way, at the same time, by way of the code they had in common. Early in the discussion of it here on Lawfare, this paragraph appeared:
Recent headlines have been all about a coding error in a core Internet protocol that got named “Heartbleed.” It is serious. It was hiding in plain view. If it wasn't exploited before its announcement, it most certainly has been after. It is hard to fix.
But that's just software, right? The central beauty of software is its impermanence. Software is to hardware as language is to vocal cords: Just as the cure for speech you don't like is more speech, the cure for software you don't like is more software.
Hardware, by contrast, is the ground truth of a computing system. Old coders would talk about such and such a program as "running on bare metal," meaning that the only thing going on was the execution of code those old coders wrote themselves. We've gotten away from that model except in the security community, where the very permanence of hardware remains tantalizing: on this line of thought, security rooted in the hardware itself is not subject to subversive exploitation.
But sometimes the line between hardware and software is not a bright line; sometimes it depends upon what the words mean in context. And so we come to two methods of attack that fall right on the gray boundary between hardware and software, the fraternal twins Meltdown and Spectre. How they work, much less how they differ, is irrelevant here except for their position on that boundary, the boundary where the demands of software steer the design of hardware. The last paragraph in the announcement of Spectre reads like this:
The vulnerabilities in this paper, as well as many others, arise from a longstanding focus in the technology industry on maximizing performance. As a result, processors, compilers, device drivers, operating systems, and numerous other critical components have evolved compounding layers of complex optimizations that introduce security risks. As the costs of insecurity rise, these design choices need to be revisited, and in many cases alternate implementations optimized for security will be required.
And there is the crux of the matter, both for technologists and for policy makers: What do we prioritize? We know, and have long known, that optimality and efficiency are the enemies of robustness and resilience. The payback on optimality and efficiency is quantitative, calculable, and central to short-term survivability. The payback on robustness and resilience is qualitative, inestimable, and central to long-term survivability. The field of battle is this: All politics is local; all technology is global.
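To make the trade concrete: below is a minimal sketch, in C, of the bounds-check-bypass pattern described in the Spectre paper (variant 1). The array names follow the paper's own example; the sketch shows only the vulnerable pattern, not a working exploit.

```c
/* Sketch of the bounds-check-bypass pattern (Spectre variant 1).
 * Array names follow the example in the Spectre paper; this
 * illustrates the vulnerable pattern only and is not an exploit. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];  /* probe array: one cache line per possible byte value */

uint8_t victim_function(size_t x) {
    /* The bounds check below is architecturally correct. But a processor
     * trained to predict the branch as taken will speculatively run the
     * body even when x is out of bounds, reading memory past array1 and
     * using the secret byte it finds there as an index into array2. The
     * speculative result is discarded, yet the cache line it touched
     * stays warm; timing later reads of array2 reveals which line that
     * was, and hence the secret byte. */
    if (x < array1_size) {
        return array2[array1[x] * 4096];
    }
    return 0;
}

int main(void) {
    /* An ordinary, in-bounds call: the program is correct as written. */
    printf("%d\n", victim_function(0));
    return 0;
}
```

Nothing in that source is wrong by the rules of the language. The risk arrives underneath it, from a processor optimized to guess ahead, which is precisely the sense in which the quoted paragraph says design choices, and not any one coding error, must be revisited.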
In the most weighty matters, the question of greatest import is all but always "What did they know and when did they know it?" followed by "And what did they then do?" Meltdown and Spectre noisily raise those very questions, and it is time we answered not something so ultimately trivial as what to do about this or that flaw we just now know about, but what we want to do about our vulnerability to flaws we don't yet know about. I have written a forthcoming paper for the Hoover Institution's Aegis Paper Series that discusses this in more detail.