Bases for Trust in a Supply Chain

Fred B. Schneider, Justin Sherman
Monday, February 1, 2021, 3:14 PM

As nations become increasingly interested in defending against supply chain attacks, it is necessary to establish trust in digital systems. Here, we evaluate the strengths and limitations of various trust-building proposals.



Editor’s Note: This paper was prepared as part of Lawfare’s Trustworthy Hardware and Software Working Group, which is supported by the Intel Corp. Fred Schneider is also supported, in part, by AFOSR grant F9550-19-1-0264, and NSF grant 1642120. The authors are grateful to Sadie Creese, Steve Lipner, John Manferdelli and Bart Preneel for comments on earlier drafts.


Introduction


To use a digital system, individuals and nations must have some basis to trust that the system will do what is expected and that it will not do anything unexpected, despite attacks from the environment in which that system is deployed. A digital system might range from a single electronic component to a networked information system; its environment could include humans (intended users and others with malevolent intent), utilities (such as electrical power and communications networks), computer hardware (from personal devices to desktop computers to cloud infrastructures), and software (operating systems, databases and applications). Any and all of the elements making up the environment might be involved in an attack.


A Defense Science Board report [1] groups possible attacks on digital systems into three basic categories:


  • Prepackaged attacks that exploit known vulnerabilities.
  • New attacks developed by analyzing the system, discovering its vulnerabilities and devising ways to exploit those vulnerabilities.
  • Attacks that exploit new vulnerabilities an attacker introduces during development or manufacture but before deployment.

Attacks in the third category are known as supply chain attacks. With a supply chain attack, there is a potentially long delay between the introduction of a vulnerability and its exploitation. In addition, infiltrating a supplier generally requires a well-resourced adversary and interaction with that supplier. So compared with the alternatives, preparations for a supply chain attack take longer and carry a higher risk of discovery. That risk of discovery can be reduced, however, if inserted vulnerabilities resemble ordinary flaws and, thus, the malicious intent is disguised.


The digital systems on which individuals and nations increasingly depend are large and complex, so today they are likely to be rife with vulnerabilities. Many of those vulnerabilities will be known (some of them unpatched), and others will be easily discovered by analysis. In short, such systems are easy to compromise.


There are nevertheless still good reasons to undertake a supply chain attack. First, for systems that will not be accessible after deployment, introducing a vulnerability during system development or manufacture might be the only means of compromise. Second, supply chain attacks exhibit a certain economy of scale, since vulnerabilities are installed into all new instances of a system. Third, vulnerabilities exploited by supply chain attacks can be well concealed, offering the attacker reliable access when needed.


Distinct Bases for Trust


Whether users trust a digital system will be based on the beliefs they hold about that system’s behaviors. But beliefs are not necessarily truths. Holding unsound or incomplete beliefs could lead to trust in a system whose behavior does not satisfy expectations. Consequently, it is crucial to understand the soundness and completeness of any beliefs that might be derived to justify trust in a digital system. Characteristics of a supply chain are sometimes part of such justifications. As we discuss later in this post, the same bases for justifying trust in a digital system [2] also shed light on schemes being proposed to detect or forestall supply chain attacks.


Axiomatic Basis for Trust


In mathematics, an axiom is a statement that is accepted at face value. In this spirit, we define an axiomatic basis for trust in a digital system to be any rationale where the beliefs about a system’s behavior are accepted without evidence derived from the system itself. Needless to say, by ignoring details about the system’s implementation, an axiomatic basis for trust is necessarily a weaker source of assertions about a system’s behaviors than one based on data from that system.


A common example of an axiomatic basis for trust presumes to predict attributes of a system’s behaviors from attributes of that system’s developer: the country in which the company is located, for example, or the reputation of the company that sold the system, the ISO certifications that company holds, or the certifications or degrees held by the people the company employs. The problem is that attributes of a system’s developers are not sufficient to conclude anything about a system’s behaviors. Rather, it is the attributes of a system’s design and implementation that should be used to support conclusions about a system’s behaviors. Consequently, trust that is based on attributes of developers requires accepting the absence of vulnerabilities without evidence of that absence derived from the system itself.


Deterrence through accountability—whether that accountability is regulatory or reputational—is another example of an axiomatic basis for trust. Once it is established that developers who are being held accountable for their actions have built the system, users assert (again without proof derived from the design of the system) that this accountability leads to a system that exhibits the right behaviors. Many of the nontechnical measures being advocated to improve supply chain security can be seen as schemes that facilitate deterrence through accountability and, thus, are axiomatic bases. An extensive list of such measures forms the body of a recent Center for Strategic and International Studies (CSIS) report [3] concerned with trust in 5G telecommunications networks. That CSIS list includes various ways to foster visibility into a company’s operations, either to create accountability or to detect conditions under which incentives exist for introducing vulnerabilities into products. The CSIS list also identifies institutions and processes that can provide the incentives and disincentives needed for deterrence through accountability.


A final example of an axiomatic basis for trust is seen with defenses implemented through criteria on source selection during acquisition. When it is too costly for an adversary to infiltrate and corrupt all of the suppliers of a particular system, one possible defense against supply chain attacks is to prevent adversaries from learning which supplier provided a specific system, either by making the purchase in secret or by purchasing from many suppliers and then randomly selecting only one supplier’s systems for actual deployment. This rationale for trust ignores how the system works, so, by definition, it is an axiomatic basis. The rationale depends on (a) having many suppliers and (b) a system implementation that does not incorporate any component produced by only a few suppliers, since otherwise an adversary might infiltrate those few and compromise that component. Special-purpose digital systems are unlikely to have many suppliers, though commodity software might. For this basis for trust to be effective, users also must believe that purchases can be made in secret or that a randomly chosen system is unlikely to have come from a supplier the adversary has infiltrated.
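
To see why this rationale hinges on supplier diversity, consider a minimal sketch of the underlying arithmetic. The supplier counts below are hypothetical and assume the purchase is made in secret, with one supplier chosen uniformly at random; they are not drawn from any actual acquisition program.

```python
# Minimal sketch: probability that a secretly and randomly selected supplier
# has been infiltrated, assuming the adversary can afford to compromise only
# a fixed number of suppliers and cannot learn which supplier was chosen.
# All numbers are hypothetical illustrations.

def p_compromised(total_suppliers: int, infiltrated: int) -> float:
    """Chance that the one supplier chosen uniformly at random is infiltrated."""
    return infiltrated / total_suppliers

# With many suppliers, infiltrating a few buys the adversary little.
print(p_compromised(total_suppliers=20, infiltrated=2))   # 0.10
# With few suppliers (typical for special-purpose systems), the same
# adversary effort yields much better odds.
print(p_compromised(total_suppliers=3, infiltrated=2))    # ~0.67
```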


Analytic Basis for Trust


In contrast to axiomatic bases for trust, trust might be acquired by studying a system’s possible behaviors (a) by running the system on selected sequences of inputs or (b) by deducing properties from the system’s construction. Testing is an example of approach (a). It is an inductive form of analysis, wherein experiments inform beliefs about unobserved behaviors; the inputs submitted and the outputs they produce constitute evidence for broader beliefs about the behavior of the system. With approach (b), the analysis is deductive; a logical proof that relates an implementation to some formal specification constitutes evidence for beliefs about the system’s behavior. Verification, model checking, and other formal methods are instances of this approach. Testing and formal methods exemplify an analytic basis for trust, which uses the system itself to derive beliefs about its possible and impossible behaviors.
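
To make approach (a) concrete, here is a minimal sketch of testing as inductive analysis: run the component on sampled inputs, check each observed output against a specification, and treat the results as evidence about behaviors that were not observed. The component and specification are hypothetical stand-ins, not any system discussed in this post.

```python
# Minimal sketch of testing as inductive analysis: sampled inputs and the
# outputs they produce are evidence for, not proof of, correct behavior on
# the inputs that were never tried. The "system" here is a hypothetical stand-in.
import random

def system_under_test(xs):
    # Stand-in for the artifact being evaluated.
    return sorted(xs)

def meets_spec(inp, out):
    # Specification: output is nondecreasing and contains the same elements as the input.
    nondecreasing = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_elements = sorted(inp) == sorted(out)
    return nondecreasing and same_elements

random.seed(0)
for _ in range(1000):
    inp = [random.randint(-100, 100) for _ in range(random.randint(0, 10))]
    assert meets_spec(inp, system_under_test(inp))
print("1,000 sampled behaviors met the specification (evidence, not proof).")
```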


The soundness and completeness of beliefs derived by using testing depend on what tests are performed. For nontrivial systems, it is infeasible to check all inputs, much less to check all of the sequences of inputs necessary for a complete understanding of possible system behaviors. Even systems that include interfaces for direct access to internal state (thus reducing the number of test cases that need to be observed) will typically require that a prohibitively large set of inputs be observed. With exhaustive testing likely to be infeasible, beliefs derived from testing could be inaccurate. Specific misbehaviors might be observed, but testing cannot be used to infer that misbehaviors are not possible. In addition, whether vulnerabilities are revealed by testing will depend on what interfaces can be monitored when a test is run. For example, side channel attacks—data leaks involving unanticipated ways to monitor system activity—are unlikely to be detected by ordinary testing approaches.
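
The scale of the problem is easy to quantify. The sketch below, with purely illustrative numbers, estimates how long exhaustive testing would take for even a component with a small input interface.

```python
# Minimal sketch of why exhaustive testing is infeasible: a component that
# takes just one pair of 64-bit inputs has 2**128 distinct input combinations.
# The test rate below is an optimistic, hypothetical figure.
inputs = 2 ** 128                      # all pairs of 64-bit values
tests_per_second = 10 ** 9             # a generous one billion tests per second
seconds_per_year = 60 * 60 * 24 * 365

years = inputs / (tests_per_second * seconds_per_year)
print(f"{years:.2e} years")            # on the order of 1e22 years
```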


With formal methods, a property of the system that has been verified becomes a belief. Today’s formal methods impose practical limitations on the size of systems that can be analyzed and the classes of properties that can be checked. So certain kinds of vulnerabilities will be missed when formal methods are being used to justify trust in a system. In addition, unanticipated vulnerabilities could be missed by failing to verify the right property. Moreover, use of a formal method is impossible without a description of the internals of the system to be analyzed. Such a description might not always be available if it is proprietary or if the system builder integrated existing subsystems according to specifications of their interfaces, rather than descriptions of internal operation.


Access to a system for testing and access to a system description for formal methods are sometimes afforded to evaluators through special arrangements with vendors. Microsoft, for example, established its Government Security Program (GSP) [4] in 2003 to provide national governments with controlled access to Windows source code and other technical information from within special facilities located throughout the world. In late 2010, Huawei Technologies opened its Cyber Security Evaluation Centre in Banbury, England, to provide the U.K. government and U.K. telecommunications providers with an opportunity to analyze the security of Huawei products. This kind of privileged access to systems and information, however, leaves open the possibility that beliefs derived using it are not valid for all instances of the system. An inspected instance of a system might not have the same vulnerabilities as the system instances that are deployed in the field. The accuracy of such an analysis is also limited by the capabilities of the available analysis methods, and evaluators may be left wondering whether there is something they forgot to check.


Synthesized Basis for Trust


A synthesized basis for trust enables trust in a whole system to be justified from trust in its components and trust in whatever glue enables their interaction. A synthesized basis for trust often relocates what must be trusted into a smaller trusted computing base. For instance, users might seek to trust that multiple large and complex virtual machines together deliver some service. Establishing trust that a hypervisor (a potentially small piece of software) isolates the memory of each virtual machine makes it easier to establish trust in each individual virtual machine, since no virtual machine can then read or update the memory of another. The hypervisor can be seen as implementing a reference monitor, a common building block for achieving a synthesized basis for trust, by restricting the possible behaviors of some target components.
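
The idea of a reference monitor can be sketched in a few lines: a small trusted component mediates every access by larger, untrusted components and permits only what policy allows, so trust in isolation rests on the small component rather than on the large ones. The sketch below is schematic, with hypothetical virtual machine names and memory regions; it is not the interface of any real hypervisor.

```python
# Minimal sketch of a reference monitor: every memory access by a virtual
# machine is mediated by a small trusted component, so isolation depends on
# this code rather than on the (much larger) virtual machines themselves.
# The VM names and address ranges are hypothetical.

class ReferenceMonitor:
    def __init__(self):
        # Policy: each VM may touch only its own address range.
        self.regions = {"vm_a": range(0, 1000), "vm_b": range(1000, 2000)}
        self.memory = [0] * 2000

    def read(self, vm: str, addr: int) -> int:
        if addr not in self.regions[vm]:
            raise PermissionError(f"{vm} may not read address {addr}")
        return self.memory[addr]

    def write(self, vm: str, addr: int, value: int) -> None:
        if addr not in self.regions[vm]:
            raise PermissionError(f"{vm} may not write address {addr}")
        self.memory[addr] = value

monitor = ReferenceMonitor()
monitor.write("vm_a", 42, 7)        # permitted: within vm_a's own region
print(monitor.read("vm_a", 42))     # permitted -> 7
# monitor.read("vm_a", 1500)        # would raise: vm_a reaching into vm_b's memory
```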


Another example of a synthesized basis for trust is split fabrication [5] for semiconductor chips. Here, different steps in producing a semiconductor chip are performed by different manufacturers, thereby limiting the impact that is possible by corrupting a single step in the fabrication process. When the different manufacturers are independent, split fabrication increases the cost of supply chain attacks that require attacker involvement in multiple steps. Replicated distributed systems implement a software analog to this kind of defense. The output from a replicated system is defined to be the output from a majority of its replicas. Provided that replicas are independent and, therefore, have different vulnerabilities, an attacker must do the work to compromise multiple replicas in order to corrupt a replicated system’s output [6].
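
The majority-output rule for a replicated system is likewise simple to sketch; the replica outputs below are hypothetical.

```python
# Minimal sketch of a replicated system's output rule: the output is whatever
# a majority of (independently built, hence differently vulnerable) replicas
# produce, so an attacker must compromise more than half of the replicas to
# corrupt the result. The replica outputs here are hypothetical.
from collections import Counter

def replicated_output(replica_outputs):
    value, votes = Counter(replica_outputs).most_common(1)[0]
    if votes <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return value

# Three replicas, one compromised: the majority still yields the right answer.
print(replicated_output([42, 42, 99]))   # -> 42
```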


Supply Chains


Most supply chains are actually not chains; for establishing trust in a digital artifact, supply chains are better described as trees. Figure 1 shows a supply chain depicted as a tree. The sink of the tree represents the artifact of interest; it has no outgoing edges. The other nodes in the tree also represent artifacts, where a directed edge from a node n to a node m signifies that the artifact that n denotes is used in producing the artifact that m denotes. Updates to artifacts thus also follow the paths depicted by the edges in a supply chain tree. For our purposes, artifacts will correspond to digital systems and their components, ranging from wafers to networked information systems. Associated with each node could be the manufacturer and other attributes used for an axiomatic basis for trust.



Figure 1. Example of a Supply Chain.
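
For concreteness, such a tree can be written down directly. The sketch below uses hypothetical artifact names and supplier attributes; it mirrors Figure 1 only in structure, not in content.

```python
# Minimal sketch of a supply chain tree: each node is an artifact, each edge
# n -> m means the artifact n denotes is used in producing the artifact m
# denotes, and the sink is the delivered system. Names and attributes are hypothetical.

# edges[n] lists the artifacts that n feeds into (its successors).
edges = {
    "wafer":    ["asic"],
    "asic":     ["router"],
    "compiler": ["firmware"],
    "firmware": ["router"],
    "router":   [],          # the sink: no outgoing edges
}

# Attributes that an axiomatic basis for trust might attach to each node.
attributes = {
    "wafer":    {"supplier": "Foundry X", "country": "A"},
    "asic":     {"supplier": "ChipCo",    "country": "B"},
    "compiler": {"supplier": "ToolCorp",  "country": "C"},
    "firmware": {"supplier": "SoftWorks", "country": "C"},
    "router":   {"supplier": "NetVendor", "country": "D"},
}

sink = next(node for node, succs in edges.items() if not succs)
print(f"Sink artifact: {sink}")   # -> router
```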


To establish trust in the artifact that the sink models, it might be tempting to focus on that artifact and ignore the rest of the supply chain. That view, however, is shortsighted:


  • The use of synthesized bases is precluded, since they require knowledge of and trust in the artifacts being combined to form the sink.
  • Analytic bases that employ testing might not be feasible, because access is required to the artifact’s internal components for submitting inputs and for observing outputs. Those internal components correspond to other nodes in the tree.

Accessing the other artifacts in the supply chain overcomes these limitations. In theory, it should suffice to identify a cut—a set of nodes that intersects all paths from the leaves of the supply chain tree to the sink. The green nodes in Figure 2 are an example of a cut for the tree in Figure 1. Given such a cut, to establish trust in the sink it suffices to establish (a) trust for the nodes comprising the cut and (b) the needed transitivity of trust, by having a basis to trust each successor under the assumption that its immediate predecessors can be trusted.



Figure 2. Example of a Cut in a Supply Chain Tree.
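
Whether a candidate set of nodes forms a cut can be checked mechanically: every path from a leaf to the sink must pass through at least one node in the set. The self-contained sketch below uses hypothetical artifact names rather than the specific trees in Figures 1 and 2.

```python
# Minimal sketch of checking a cut in a supply chain tree: a node set is a cut
# if every leaf-to-sink path intersects it. Artifact names are hypothetical.

# edges[n] lists the artifacts that n feeds into; "router" is the sink.
edges = {
    "wafer": ["asic"], "asic": ["router"],
    "compiler": ["firmware"], "firmware": ["router"],
    "router": [],
}

def is_cut(candidate, edges):
    nodes_with_incoming = {m for succs in edges.values() for m in succs}
    leaves = [n for n in edges if n not in nodes_with_incoming]
    sink = next(n for n, succs in edges.items() if not succs)

    def path_avoids_cut(node):
        # True if some path from `node` to the sink avoids the candidate set.
        if node in candidate:
            return False
        if node == sink:
            return True
        return any(path_avoids_cut(succ) for succ in edges[node])

    return not any(path_avoids_cut(leaf) for leaf in leaves)

print(is_cut({"asic", "firmware"}, edges))  # True: every leaf-to-sink path is covered
print(is_cut({"asic"}, edges))              # False: compiler -> firmware -> router escapes
```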


In theory, obligation (a), trust for nodes comprising the cut, would be discharged by using one or more of the bases for trust discussed above. In practice, for most digital artifacts, beliefs derived in this way will be incomplete and, as noted above, the trust bases do benefit from access to predecessor nodes. The obvious remedy is to work backward through the supply chain, establishing trust in predecessors of the nodes forming the cut. Just running tests on an artifact could, for example, miss problematic behaviors; also running tests on its predecessors in the supply chain might reveal some of those problems. However, for various reasons it might not be possible to learn the full pedigree for an artifact in a supply chain. That means trust in an artifact corresponding to a node at the cut—and therefore trust in the artifact at the sink—is necessarily going to be incomplete.


Obligation (b), transitivity of trust, involves the moral equivalent of tamper-proof packaging, so that users can detect when an artifact they trust becomes compromised as it progresses through the supply chain. For semiconductor chips, this guarantee can be enforced by physical packaging. For software, hash functions or other cryptographic means, along with deterministic compilation and builds, can be used to compute checks that indicate when bits have been changed; hardware support for so-called trusted computing [7] (for example, the Trusted Platform Module and Software Guard Extensions) is helpful here. Self-test capabilities also can be effective for defending against certain attacks that compromise tamper-proof packaging of hardware or software.
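
For software, the check itself is simple to sketch, assuming a deterministic build lets both supplier and recipient agree on an expected digest; the file name and digest below are hypothetical.

```python
# Minimal sketch of tamper detection for a software artifact: recompute a
# cryptographic hash of the delivered bits and compare it against the digest
# published by (or reproducibly rebuilt from) the supplier. A mismatch shows
# the bits changed somewhere in the supply chain; a match says nothing about
# whether the published digest itself should be trusted. File name and digest
# below are hypothetical.
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Hypothetical usage:
# verify_artifact("firmware-1.2.bin",
#                 "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08")
```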


Both obligations can benefit from deterrence through accountability, even when some of the institutions involved in producing the components are not subject to the regulations (and prosecution) that create the deterrence. If trust in n is being justified by deterrence through accountability, then the producer of n has reason to enforce requirements on its suppliers. So, a supply chain creates a reverse cascade [8] that enforces requirements on suppliers even if those suppliers are not within the jurisdiction of the institution implementing the accountability or the deterrence.


Concluding Remarks


The recent interest in defending against supply chain attacks should not be surprising. Nations are increasingly coming to depend on networked systems for commerce and for defense. Attacks by foreign threat actors on domestic suppliers and domestic use of foreign suppliers raise questions about national security. In addition, means are being sought to aid a nation’s domestic industry by assuaging fears that foreign customers might have. In this post, we have described a way to view proposed means for establishing trust in digital systems—the expected result of any scheme intended to defend against supply chain attacks. Our hope is to advance the discussion by giving a basis for viewing the strengths and limitations of various proposals.


References


[1] Defense Science Board, Resilient Military Systems and the Advanced Cyber Threat, Department of Defense, January 2013. https://dsb.cto.mil/reports/2010s/ResilientMilitarySystemsCyberThreat.pdf


[2] Fred B. Schneider, “Putting Trust in Security Engineering,” Communications of the ACM, 61(May 2018), 37–39. http://www.cs.cornell.edu/fbs/publications/CACM.viewpoint.PuttingTrust.pdf


[3] CSIS Working Group on Trust and Security in 5G Networks, Criteria for Security and Trust in Telecommunications Networks and Services, Center for Strategic and International Studies (CSIS), May 2020. https://www.csis.org/analysis/criteria-security-and-trust-telecommunications-networks-and-services


[4] Microsoft Corporation, “Program Overview,” Government Security Program (GSP). https://docs.microsoft.com/en-us/security/gsp/programoverview


[5] Meenatchi Jagasivamani, Peter Gadfort, Michel Sika, and Michael Bajura, “Split-Fabrication Obfuscation: Metrics and Techniques,” 2014 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), May 6–7, 2014. https://ieeexplore.ieee.org/abstract/document/6855560


[6] Tom Roeder and Fred B. Schneider, “Proactive Obfuscation,” ACM Transactions on Computer Systems, 28(July 2010), 1–54. https://www.cs.cornell.edu/fbs/publications/ProactiveObfuscTOCS.pdf


[7] Paul England, Butler Lampson, John Manferdelli, Marcus Peinado, and Bryan Willman, “A Trusted Open Platform,” Computer, 36(July 2003), 55–62.


[8] Nathaniel Kim, Trey Herr, and Bruce Schneier, The Reverse Cascade: Enforcing Security on the Global IoT Supply Chain, Atlantic Council Scowcroft Center for Strategy and Security, Cyber Statecraft Initiative, June 2020. https://www.atlanticcouncil.org/wp-content/uploads/2020/06/Reverse-Cascade-Report-v3.1.pdf


Fred B. Schneider is Samuel B. Eckert Professor of Computer Science at Cornell University, and he is the founding chair of the National Academies Forum on Cyber Resilience.
Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
