The Risks of Huawei Risk Mitigation

Alexei Bulazel, Sophia d’Antoine, Perri Adams, Dave Aitel
Wednesday, April 24, 2019, 3:28 PM

Huawei P10 (Source: Wikimedia/Petar Milošević)

While there is widespread agreement that Huawei devices in 5G infrastructure pose some risk to the U.S. and allied nations, the policy community—in particular the U.K.’s National Cyber Security Centre—has paid insufficient attention to the technical aspects. The discussion must examine not simply whether China would use this technology maliciously, but the specific threats that Huawei equipment could pose and the extent to which these threats can be mitigated. This is especially important in the face of recent news that the U.K.’s National Security Council has okayed the use of Huawei technology for the country’s new 5G network.

The belief that this risk to the U.S.’s and its allies’ critical infrastructure could be mitigated on a technical level is belied by two major realities, both revealed by a close look at the technical dimensions of the issue. While one might argue that these realities are present in other technical arrangements, they are exacerbated in the 5G scenario. First, the risks are incalculable. Providing a third party with a foothold into a network introduces an entirely new array of risks. Any single risk has the potential to flow into other parts of the system in ways difficult to protect against, exponentially increasing exposure. Second, mitigation is impossible. With the rising complexity of technologies, validating security properties to any significant level in third-party systems has become untenable.

Technologies in a trusted computing path put the entire system at risk of failure, and a complete systemic risk cannot be mitigated (or even calculated) by traditional mechanisms. We take no position on whether a geopolitical rival would use the opportunity posed by 5G to compromise the national security of the U.S. and its allies—this is a question for political and intelligence analysts. However, should China use this technology maliciously, now or at some future date, risk mitigations will be wholly insufficient.

The Risks Are Incalculable

Information security risks often are discussed in terms of the triad of confidentiality, integrity, and availability (CIA). Writing recently in Lawfare, Herb Lin argued:

In practice, the cybersecurity risks posed by embedded Huawei technology fall into the traditional categories of confidentiality, integrity and availability. Concerns … can be addressed using known technical measures, such as virtual private networks (VPNs) and end-to-end encryption. Indeed, such measures are widely used today in securing confidential communications that take place over insecure channels.

Concerns about availability are harder to address, because nothing prevents the vendor from installing functionality that will disrupt or degrade the network at a time of its choosing; the only known solution to the loss of availability (i.e., turning off the network) is backup equipment from a different vendor that can be used in an emergency.

While the CIA triad is a helpful framework for thinking about some computer vulnerabilities—such as “someone I’m not friends with on Facebook can see my vacation photos by doing this one weird thing”—it falls short when considering national infrastructure and nation-state attackers.

For one, this analysis generally assumes an abstract “attacker,” perhaps some ill-meaning basement-dwelling hacker. The reality is that when it comes to telecommunications infrastructure, these attackers are often nation-state actors—organized and well funded, and specialized in attacking “hard targets.” These adversaries differ in several significant ways:

  • Targeted intelligence operations are not limited to the generic, large-scale attacks typically seen in the commercial world.
  • Intelligence operations by nation-state actors can be long term: Encryption can protect the confidentiality of data today, but it does little to protect that data tomorrow if the encryption keys are later compromised or an adversary develops the computing power to break the encryption. To put this in less abstract terms, the data passing through Huawei devices may be encrypted gibberish today, but it won’t necessarily stay that way.
  • Attacks against esoteric targets are the bread and butter of nation-state actors—including man-in-the-middle attacks; embedded systems attacks; and rare hardware, cryptographic and software attacks. The path of least resistance for many targeted and well-resourced attackers is to compromise things the outside world has put no effort into protecting, not simply to be better at attacking the same browsers or web applications you’ll see on the stage at Black Hat.

The focus should be not on how data might be compromised by a single vulnerability at a single point in time but, rather, on how a vulnerability can serve as a foothold for future malicious behavior. Good penetration testers talk less about individual vulnerabilities (and the CIA triad) and more about attack trees and exploit primitives—stepping stones as opposed to snapshots of impact. These concepts better illustrate the kind of risks posed by infrastructure-grade problems, which cannot easily be papered over by VPNs or other simple controls.
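
To make the distinction concrete, the following is a minimal sketch in Python of an attack tree with entirely hypothetical node names: leaves are exploit primitives, interior nodes are attacker goals, and modest footholds combine into systemic compromise in a way no single confidentiality-integrity-availability snapshot captures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttackNode:
    """A goal in an attack tree. Leaves are exploit primitives; interior
    nodes are reachable if all (AND) or any (OR) of their children are."""
    goal: str
    children: List["AttackNode"] = field(default_factory=list)
    conjunctive: bool = False   # True = AND node, False = OR node
    achievable: bool = False    # True for primitives the attacker already holds

    def reachable(self) -> bool:
        if not self.children:
            return self.achievable
        results = [c.reachable() for c in self.children]
        return all(results) if self.conjunctive else any(results)

# Hypothetical tree: a foothold in network equipment chains into broader
# compromise -- stepping stones rather than a snapshot of impact.
root = AttackNode(
    "Persistent access to backbone traffic",
    conjunctive=True,
    children=[
        AttackNode("Foothold on edge equipment", children=[
            AttackNode("Malicious firmware update", achievable=True),
            AttackNode("Exploitable management interface"),
        ]),
        AttackNode("Lateral movement to core routers", children=[
            AttackNode("Reused management credentials", achievable=True),
        ]),
    ],
)

print(root.reachable())  # True: two modest primitives add up to systemic risk
```

The toy logic is beside the point; the shape of the analysis is what matters: each primitive is valuable only as a stepping stone toward the root goal.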

Access Is Everything

In the intelligence world, everything hinges on access. Take Cold War espionage: A well-placed Soviet mole is worth more than the best Kremlinology money can buy. Access is a fundamental part of the equation, and every national security strategy depends on denying it to the adversary.

Whoever provides the technology for 5G networks will be sitting in a position of incredible access and, thus, power. All data sent or received by a mobile device, smart home or even a car will pass through a network built with Huawei devices. These devices will be remotely controlled and updated, multiplying the vectors of attack.

Encryption is offered as one method to protect data in transit, yet it does little to prevent the abuse of metadata: An adversary may not be able to read the contents of a message, but they know who sent it to whom and when. The intelligence value of this information cannot be overstated—in fact, it was the basis of the U.S.’s own Section 215 program. This data is all the more potent when combined with modern advances in artificial intelligence and the ability to cross-reference with other intelligence collection (say, data from the Office of Personnel Management hack).
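
To illustrate, here is a minimal sketch in Python, using made-up call records, of contact chaining over metadata alone: no message content is ever decrypted, yet the who-talked-to-whom-and-when graph falls out immediately.

```python
from collections import defaultdict
from itertools import chain

# Hypothetical call-detail records: (caller, callee, timestamp). Message
# contents are assumed to be end-to-end encrypted and are never needed here.
records = [
    ("analyst_a", "journalist_x", "2019-04-01T09:12"),
    ("journalist_x", "source_y", "2019-04-01T09:40"),
    ("source_y", "embassy_line", "2019-04-01T10:05"),
    ("analyst_a", "journalist_x", "2019-04-02T21:30"),
]

# Build an undirected contact graph from the metadata alone.
contacts = defaultdict(set)
for caller, callee, _ in records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def chained(seed: str, hops: int) -> set:
    """Return everyone reachable from `seed` within `hops` hops."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = set(chain.from_iterable(contacts[p] for p in frontier)) - seen
        seen |= frontier
    return seen - {seed}

print(chained("analyst_a", 2))  # {'journalist_x', 'source_y'}
```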

In reality, the threats posed by this equipment extend far beyond the compromise of the data in transit itself. Huawei-controlled devices in critical infrastructure networks can be used as a foothold for access: a springboard from which to compromise connected computers or even obscure attribution efforts. This risk cannot be mitigated by VPNs or other data privacy mechanisms. A reverse risk assessment can be done by asking the National Security Agency (NSA) what it would do for similar levels of access inside Chinese networks.

Mitigation Is Impossible

“Distrust but verify” is the watchword for information security. But verification that can protect from a malicious vendor is often impossible at this scale. Source code validation has been offered as a possible mitigation technique: Huawei would allow governments to inspect the code presumably being run on their devices. While source code can be useful in offensive vulnerability research, it is less so when used for defensive purposes such as validating the security of a prebuilt system. The source code is a blueprint; the binary code is the building that blueprint describes. That blueprint can tell you the backdoor doesn’t have a lock, but it doesn’t tell you anything about the camera the construction team hid in the wall.

Source code validation works when validating trusted partners, such as a national lab validating the work of a U.S. defense contractor. Efforts like these can tell users that a system was built to a particular standard, or that it has some number of vulnerabilities. However, the absence of vulnerabilities or backdoors can rarely be proved.

Huawei has given the U.K. government access to its source code, but the only way to guarantee that it is the same code as what is running on Huawei equipment is for the U.K. to build it into binary code and compare that with the binary code on the device. The committee auditing the source code found it was unable to replicate Huawei’s build process and generate an equivalent binary to the one on their devices. Examining the ostensible “source code” is hardly a mitigation when the adversary controls the actual source code, the build process and the hardware on which the code runs.
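
In principle, the check the U.K. reviewers attempted looks something like the sketch below, written in Python with hypothetical paths and a stand-in build command: rebuild the firmware from the audited source and compare cryptographic hashes against the binary actually shipped on the device. If the build is not reproducible, the comparison fails and the audited source proves little about the running code.

```python
import hashlib
import subprocess
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a firmware image so two builds can be compared byte for byte."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_build(source_dir: Path, device_image: Path) -> bool:
    """Rebuild from audited source and compare against the shipped binary.
    The build command is a stand-in; a real vendor build system is far more
    complex, and any non-determinism (timestamps, paths, toolchain versions)
    will break the comparison."""
    subprocess.run(["make", "-C", str(source_dir), "firmware.bin"], check=True)
    rebuilt = source_dir / "firmware.bin"
    return sha256(rebuilt) == sha256(device_image)

# Hypothetical usage: only a byte-identical rebuild ties the audited source
# to what the device actually runs.
# verify_build(Path("audited_source/"), Path("extracted/firmware.bin"))
```
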
Foreign control of the hardware is even more problematic. As with software validation, it is almost impossible to ensure the integrity of a hardware system created by another party acting in bad faith. Especially with hardware security concerns, verifying one device does not ensure that the same guarantees hold across a large production of the same devices.

Despite the issues with software verification, checking the binary code on devices is at least far more scalable than verifying the hardware. Hardware reverse engineering methods are often destructive, involving boiling chips in beakers of acid or grinding away layers of silicon with power tools. Even if the process could be scaled up by randomly selecting devices to check, an adversary could simply play the odds and surreptitiously modify one in a million chips. As the old cybersecurity adage goes, defenders need to win every time; attackers only need to win once.
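
The arithmetic behind that observation is simple. The short Python illustration below, with purely illustrative numbers, shows that if one device in a million carries an implant, even destructively inspecting a hundred thousand randomly chosen units leaves the defender more than 90 percent likely to miss it.

```python
# Probability of catching at least one implanted device by random sampling,
# assuming implants are placed independently at rate p (illustrative numbers).
p = 1e-6          # one implanted device per million manufactured
for n in (1_000, 10_000, 100_000):
    p_detect = 1 - (1 - p) ** n
    print(f"inspect {n:>7} devices -> P(detect) = {p_detect:.4f}")

# inspect    1000 devices -> P(detect) = 0.0010
# inspect   10000 devices -> P(detect) = 0.0100
# inspect  100000 devices -> P(detect) = 0.0952
```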

Defending Forward

In the cyber domain, shaping the battlefield involves hard choices regarding domestic technology use. In other words, good information security strategy is often an issue of defending not just forward in space but also in time. This means grappling with the complex issue of transitive trust, the most obvious example of which is software updates and maintenance.

Malicious updates from a trusted manufacturer are extremely difficult to prevent, even when skilled reverse engineers can study the code. During the Black Sunday Hack of 2001, for example, DirecTV needed a way to stop satellite TV pirates who had a knack for deciphering and then circumventing updates. DirecTV responded by parceling out a secret weapon to its satellite TV cards, spread across several dozen updates over two months. It finally hit the killswitch a week before the Super Bowl, permanently destroying pirated satellite TV cards while leaving legitimate cards untouched.
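
A purely conceptual sketch in Python, with a hypothetical device, fragments and trigger, shows why piecewise delivery defeats per-update review: each update ships an innocuous-looking blob, and the complete behavior exists only once every fragment has arrived, long after each individual update was inspected.

```python
import hashlib

class Device:
    """Hypothetical device that accumulates opaque data blobs from updates."""
    def __init__(self):
        self.fragments = {}

    def apply_update(self, index: int, blob: bytes):
        # Each blob, reviewed in isolation, is indistinguishable from config data.
        self.fragments[index] = blob

    def maybe_trigger(self, expected_digest: str):
        # Behavior materializes only when every fragment is present and the
        # assembled whole matches a digest checked for at a later signal.
        assembled = b"".join(self.fragments[i] for i in sorted(self.fragments))
        if hashlib.sha256(assembled).hexdigest() == expected_digest:
            print("dormant functionality activated")  # stand-in for the real effect

# Dozens of updates over months, each one passing individual review...
device = Device()
for i, blob in enumerate([b"\x01\x02", b"\x03\x04", b"\x05\x06"]):
    device.apply_update(i, blob)

# ...and a final signal flips the switch.
device.maybe_trigger(hashlib.sha256(b"\x01\x02\x03\x04\x05\x06").hexdigest())
```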

Modern secure systems have an even more complex transparency model that affects the ability to verify properties of the system in ways beyond software updates. Many complex systems are nearly impossible to “crack open” to verify that their behavior matches what’s expected. This means that bringing a third-party device into your network requires placing inordinate trust in the manufacturer that its security meets what was agreed upon.

Take the iPhone’s locked-down security model: Code signing guarantees updates come from Apple itself and not a malicious actor. It is nontrivial for a user to verify that these guarantees work as Apple claims, so trust lies with the vendor. While most people trust Apple to deliver security for their data, not all vendors inspire such confidence. In other words, nontransparent systems require political trust. And in Huawei’s case, that trust is lacking.
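
For illustration, here is a minimal sketch of that code-signing model in Python, using the third-party cryptography package and a hypothetical vendor keypair: the device accepts only updates that verify against a public key it already holds, which is exactly why all trust collapses onto whoever controls the corresponding private key.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical vendor keypair; on a real device only the public key is
# present, baked into firmware or a secure element at manufacture.
vendor_private_key = ed25519.Ed25519PrivateKey.generate()
vendor_public_key = vendor_private_key.public_key()

def sign_update(update: bytes) -> bytes:
    """Vendor side: sign the update image before distribution."""
    return vendor_private_key.sign(update)

def device_accepts(update: bytes, signature: bytes) -> bool:
    """Device side: install only updates that verify against the vendor key."""
    try:
        vendor_public_key.verify(signature, update)
        return True
    except InvalidSignature:
        return False

update = b"firmware v2.1 (hypothetical image)"
sig = sign_update(update)
print(device_accepts(update, sig))                 # True: genuine vendor update
print(device_accepts(update + b"tampering", sig))  # False: modified in transit
```

The check is purely mechanical: it confirms who signed the update, not whether the signed code is benign.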

The Shape of Supply Chains to Come

It is commonplace for international companies to manufacture critical information technology. At face value, the present Huawei situation is no extraordinary case—to the contrary, it mirrors the recent situation with Kaspersky Lab, in which the U.S. Congress specifically disallowed use of the company’s technology in government systems. That decision set in law a judgment that no mitigating factors could ameliorate the risk of running Russian-controlled antivirus software in critical systems, without any proof of malicious behavior ever entering public sources. Even when final assembly or management can be kept domestic, securely sourcing components remains difficult, particularly for mass-manufactured semiconductors—the most sensitive and obvious place for surreptitious hardware implantation.

For technology embedded in critical infrastructure, the scale and complexity of risk mitigation so quickly outstrip the cost savings that discussing possible mitigations seems akin to rearranging deck chairs on the Titanic. Over the long term, the upfront savings will be dwarfed by the need to constantly create, update and maintain mitigations for the ever-evolving risk the government would be taking on. Uneven standards in international regulation, innovation and labor prices allow adversaries to offer economic expediency that is underwritten by access-enabling penetration of infrastructure. While these issues are complex, the public and international debate suggests a secure supply chain will prove to be the 21st century’s most elusive luxury good.


Alexei Bulazel is a senior security researcher at River Loop Security, where he provides expertise on reverse engineering, vulnerability research and cyber policy. He has presented research on reverse engineering anti-virus software at a variety of international venues including Black Hat, DEFCON, REcon Montreal and Brussels, and SummerCon, among many others, and has published scholarly work at the USENIX Workshop on Offensive Technologies (WOOT) and the Reversing and Offensive-Oriented Trends Symposium (ROOTS). Alexei is a proud alumnus of RPISEC.
Sophia D'Antoine has spoken and keynoted at more than a dozen security conferences worldwide and sits on the program committee for USENIX WOOT. She is also the Hacker in Residence at New York University. A graduate of Rensselaer Polytechnic Institute, Sophia earned her MS researching the exploitation of CPU optimizations. While at RPI, Sophia helped create and teach Modern Binary Exploitation. Her current work involves developing novel tooling to assist in the research and discovery of vulnerabilities across a spectrum of targets.
Perri Adams is a computer security researcher who works with U.S. Government entities to provide subject matter expertise. She focuses on reverse engineering, vulnerability discovery, exploitation and program analysis techniques, as well as U.S. cyber policy. An alumna of Rensselaer Polytechnic Institute, Perri is a proud member of RPISEC and frequently competes in Capture the Flag (CTF) competitions, including DEF CON CTF Finals in 2018 and 2020.
