On Spectre and Meltdown
I wrote about the Spectre and Meltdown attacks for CNN and my blog. The piece begins:
This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.
The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computers and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren't security teams on call to write patches, and there often aren't mechanisms to push patches onto the devices. We're already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can't be fixed.
The second is that some of the patches require updating the computer's firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn't get that update directly to users; it had to work with the individual hardware companies, and some of them just weren't capable of getting the update to their customers.
We're already seeing this. Some patches require users to disable the computer's password, which means organizations can't automate the patch. Some anti-virus software blocks the patch, or, worse, crashes the computer. This results in a three-step process: patch your anti-virus software, patch your operating system, and then patch the computer's firmware.
The final reason is the nature of these vulnerabilities themselves. These aren't normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.
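To make "the fundamentals of how the microprocessor operates" concrete, here is a minimal sketch of the classic Spectre variant 1 (bounds check bypass) pattern, modeled on the victim gadget in the original Spectre paper. The names (`victim_function`, `probe_table`) are illustrative; a real attack also requires mistraining the branch predictor and a cache-timing step (such as Flush+Reload) to recover the leaked byte, neither of which is shown here.

```c
#include <stdint.h>
#include <stddef.h>

/* Probe array whose cache state encodes the leaked byte: one 4 KB
 * region per possible byte value, so each value maps to a distinct
 * cache line. */
uint8_t probe_table[256 * 4096];

uint8_t array1[16];
size_t array1_size = 16;

/* Spectre variant 1 gadget: the bounds check is correct in program
 * order, but the CPU may speculatively execute the body with an
 * out-of-bounds x before the branch resolves. The speculative load
 * from probe_table leaves a cache footprint that a later timing
 * measurement can recover, even though the architectural results of
 * the mis-speculated path are discarded. */
void victim_function(size_t x) {
    if (x < array1_size) {
        /* array1[x] may read out of bounds under speculation; its
         * value selects which probe_table cache line gets loaded. */
        volatile uint8_t tmp = probe_table[array1[x] * 4096];
        (void)tmp;
    }
}
```

The point of the sketch is that nothing in this code is a software bug in the usual sense: the bounds check is correct, and the leak comes from how the hardware speculates past it, which is why a simple software patch can't make the problem disappear.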
It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.