Ransomware Remixed: The Song Remains the Same

Trey Herr
Wednesday, June 28, 2017, 10:11 AM

Another month, another ransomware epidemic. Broadsheets are screaming panic while companies yell back that All Is Well and Ukraine shows the world what gifs can do for incident response. Twitter is abuzz with the rapid, globalized forensics effort of a legion of amateurs and professionals (though nothing yet from the White House). We’ll probably blame the North Koreans, it likely wasn’t ISIS, and it may turn out that the Russians were behind the whole thing.

There’s a lot we still don’t know, but several facts have emerged. A series of ransomware infections that started in banks and utilities in Ukraine quickly spread into Russia and Belarus, then to Western Europe and the United States. Hundreds of organizations have been affected, from ports in New Jersey and New York to the oil company Rosneft, the global shipping firm Maersk, and the UK media giant WPP. The current infections likely stem from a variant of a family of ransomware called Petya, spreading via the same vulnerability used by the WannaCry outbreak of just a few weeks ago. The degree to which this version actually resembles Petya, such that it should be labeled a variant, remains under debate. The new malware’s payload is the same as that of a recent version of Petya, but how it initially infects computers and some of how it spreads are new.

Unlike WannaCry, the current infection has quite a few tricks up its sleeve. It employs a rewritten version of the exploit used by WannaCry, originally developed by the NSA to target the decades-old SMBv1 networking protocol. Whereas WannaCry spread over the internet, the current ransomware appears designed to spread primarily over local networks. The initial infections came from two sources: a hijacked update to a popular piece of Ukrainian accounting software and a compromised website run by the city of Bakhmut, also in Ukraine. Where traditional ransomware encrypts individual files or the user volume (typically the C: drive in Windows), Petya encrypts the Master Boot Record (MBR) of a computer—basically the initial operating instructions for the machine. The ransomware then moves to encrypt the Master File Table (MFT), which acts as a central map for every file and directory on the computer.
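
To make the Master Boot Record concrete: it is the first 512-byte sector of a disk, holding the bootstrap code the machine runs at power-on, the partition table, and a closing two-byte boot signature (0x55 0xAA). Petya replaces that bootstrap code with its own loader, which is why an infected machine shows a ransom note instead of starting Windows. Purely as an illustration (this sketch is not from the original post, and the disk-image path is hypothetical), a few lines of Python that locate those pieces in a raw disk image:

    import struct

    SECTOR_SIZE = 512  # the MBR is the first sector of the disk

    def inspect_mbr(image_path):
        """Print the boot signature and a fingerprint of the bootstrap code
        in the first sector of a raw disk image. Comparing the fingerprint
        before and after infection would show Petya's overwrite; the boot
        signature stays valid either way, since Petya's own loader must boot.
        """
        with open(image_path, "rb") as f:
            sector = f.read(SECTOR_SIZE)
        if len(sector) < SECTOR_SIZE:
            raise ValueError("image too small to contain an MBR")
        # Layout: bytes 0-445 bootstrap code, 446-509 the four partition
        # table entries, 510-511 the signature (little-endian 0xAA55).
        signature = struct.unpack_from("<H", sector, 510)[0]
        print("boot signature: %#06x (%s)"
              % (signature, "present" if signature == 0xAA55 else "missing"))
        print("bootstrap code starts:", sector[:8].hex())

    inspect_mbr("disk.img")  # hypothetical path to a raw disk image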

While the Petya family of ransomware first appeared in 2016, the current infections are based on a more recent variant called PetrWrap. PetrWrap took Petya as a starting point but added a new encryption mechanism and several techniques to spread and infect other computers, including abusing an administrative utility called PsExec. As another trick to fool targeted computers, this most recent ransomware added a forged Microsoft certificate to “sign” the code, verifying it as legitimate. (As a practical matter, simply patching against the SMBv1 vulnerability is not enough. Users need to block several administrative utilities, including PsExec, and may also need to apply the Microsoft Office patch released in April. For more on this point, see here.)
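
Since patching or disabling SMBv1 is the recurring advice here, a short illustrative sketch may help (again, not from the original post). Microsoft’s published guidance (KB2696547) controls the SMBv1 server on Windows through a registry value; the Python below, assuming Python 3 running on a Windows host, reads that value to check a machine’s status:

    import winreg  # Windows-only standard-library module

    # Per Microsoft KB2696547, a DWORD value "SMB1" of 0 under this key
    # disables the SMBv1 server component.
    KEY_PATH = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    def smb1_server_enabled():
        """Return True if this host's SMBv1 server is still enabled."""
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            try:
                value, _ = winreg.QueryValueEx(key, "SMB1")
                return value != 0
            except FileNotFoundError:
                # Value absent: SMBv1 defaults to enabled on older Windows.
                return True

    print("SMBv1 server enabled:", smb1_server_enabled())

Note that this checks only the SMBv1 server setting; blocking PsExec-style lateral movement and applying the Office patch are separate steps.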

What Does It All Mean?

This is what proliferation looks like in cyberspace: someone writes a piece of malware, a third party finds it, adapts it, and adds some of their own code or code from an open source project… et voilà, a new piece of malware is born. This latest epidemic is based on a commonly used ransomware family, combined with a modified version of the NSA’s leaked exploit, and tied together with some new encryption functionality and part of an open source security tool.

The policy community lacks a good mechanism for stymying this kind of proliferation, either domestically or internationally. We’ve spent too much time debating the Vulnerability Equities Process (VEP) without addressing the broader issues of software security. The VEP is a critical oversight process for the government’s use of software vulnerabilities and deserves to be codified into law. And it may be time for serious discussion of how NSA capabilities were leaked to the public and whether the agency should be allowed to develop virulent tools it can’t keep secret. Still, the VEP is only one small part of the security puzzle and should not be mistaken for an effective tool for changing the behavior of software vendors.

On the international front, there has been extensive debate over government efforts to apply export controls to limit the use of malware. The original controls were misguided and the ensuing effort to modify them has seen few results, though it has created valuable links for advice and informal consultation between policymakers and some in the security community. In the meantime, attackers have continued to innovate despite the best efforts of tens of thousands of developers, security managers and researchers, and hundreds of billions of dollars.

What Can We Do?

Ransomware is the cybersecurity gods’ way of saying you’re doing it wrong. Malware flows into the cracks of an organization’s security posture and points up where things could be improved—Steve in Accounting’s annoying tendency to click on all the links, or the CEO who says the rules for updates and two-factor authentication don’t apply to her.

So how to heed the message? Policymakers need to counter this sort of malware proliferation. In the long term, that means encouraging the use of easier-to-secure languages like Rust and creating incentives for organizations, including those within the federal government, to shift to architectures that can be more rapidly secured and updated, such as Amazon’s EC2 and Microsoft Azure.

In the near term, however, “just patch” doesn’t seem to be helping. Policymakers can take steps to shorten the life cycle of vulnerabilities, increasing attackers’ costs to innovate and improving the security of software. In a recent paper, I make several recommendations, starting with increasing the quality and significance of software vulnerability discovery—for example, by creating a government-led bug bounty not just for software used by the federal government, but also for open-source projects and orphaned software critical to the internet. Bug bounties aren’t a solution on their own, but this kind would help incentivize the discovery of vulnerabilities in code that lacks a responsible for-profit vendor.

Discovering vulnerabilities doesn’t do much good if no one tells the developer. Two U.S. laws—the ancient Computer Fraud and Abuse Act (CFAA) and the much-maligned Digital Millennium Copyright Act (DMCA)—criminalize activities needed to investigate and test security flaws and make it less likely that a discovered vulnerability will be disclosed to a developer. Both laws provide companies and the state with tools to harass and bludgeon researchers, but there are clear changes that can be made to reduce their harm. Policymakers should focus on removing from both laws those provisions that companies or the government could use to punish good-faith security research. The Librarian of Congress recently approved a three-year exemption from the DMCA for just such research; this change should be made permanent.

Once disclosed, a vulnerability still needs to be fixed, but unfortunately there is no transparent means to track how well organizations patch their software. One way to remedy this is a set of clear, consensus-based standards issued by a well-respected technical body like the National Institute of Standards and Technology (NIST). Once developed, patches often must be validated by other companies, so regulatory bodies should apply these same standards and track compliance publicly. These standards would benefit from a nudge, like federal software procurement rules that give preference to companies that patch rapidly and effectively.

And then we come to patch adoption. As with patch development, it’s often not clear which organizations patch rapidly and which do not. There are myriad reasons why patches aren’t applied immediately, especially for critical systems. Regulatory bodies such as the SEC should add disclosure requirements to bring this information to customers so that they can respond to the security behavior of their business partners and vendors. These disclosure rules should include explicit notice for any software in use beyond its service life.

The good news is that there is plenty of room to improve how we secure software. The bad news is that the stakes are growing; the next vulnerability to receive front-page coverage may be found in autonomous cars or a medical imaging system. Finding bugs, fixing them, and applying patches still only amount to improving how we respond to software that’s often designed for features before security. Patching is not a panacea, but shortening the life cycle of vulnerabilities will help improve the security of software and increase costs for attackers. Some of these suggested policies are more politically demanding than others, but the larger point is to shift the discussion from silver-bullet responses to addressing the process of maintaining secure software in the near term.


Trey Herr is Assistant Professor of cybersecurity and policy at American University’s School of International Service and director of the Cyber Statecraft Initiative at the Atlantic Council. At the Council, his team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, cyber safety, and growing a more capable cybersecurity policy workforce.
