How Can One Know When To Trust Hardware and Software?

Paul Rosenzweig, Benjamin Wittes
Monday, May 2, 2022, 3:36 PM

The Lawfare Institute convened a working group of experts to answer that question. The group's report, titled "Creating a Framework for Supply Chain Trust in Hardware and Software," is available now.

Published by The Lawfare Institute in Cooperation With Brookings

In a world of growing dependence on technology, consumers of information and communications technology (ICT) goods face increasingly important questions: How, and to what extent (if any), can users be confident that the systems on which they rely are worthy of trust?

That question is one that bedevils everyone—from end users to major enterprises. Each must decide which products, and vendors, to trust. Many make those decisions without a great deal of consideration, but the stakes in those decisions are often significant. And even those who choose to think about the questions carefully lack a system of thought for addressing questions of trustworthiness. The ongoing discussion regarding the use of Chinese information technology products as components of Western systems is one example of a broader and deeper problem: How should we assess the degree of trustworthiness in ICT products?

To answer that question, the Lawfare Institute convened a working group of experts to articulate and justify a set of trustworthiness principles—concepts that, ex ante, would justify accepting a digital artifact as worthy of being trusted. Today, the group publishes its report on the subject, and we offer this short-form summary. The task force members included: Paul Rosenzweig, who served as the group’s chief rapporteur; Justin Sherman, who served as assistant rapporteur; Benjamin Wittes, who chaired the task force; Trey Herr, David Hoffman, Herb Lin, Bart Preneel, Samm Sacks, Fred B. Schneider, and Daniel Weitzner.

This project began in the fall of 2019, when the Intel Corporation asked the Lawfare Institute to convene an expert panel to study the question of trustworthiness in hardware and software, to try to establish and describe the constituent elements of trustworthiness, and to identify a set of principles by which one might determine how trustworthy a product is. Intel provided funding for this working group. But this report represents the work product and thinking of the working group as a whole and was not subject to editorial control or oversight by Intel or any other outside body or organization.

Our inquiry began at the most basic level by defining what we mean when we say that a digital artifact is trustworthy. We settled on the following definition: an artifact that does what is expected of it, and nothing more, is trustworthy.

The challenge, however, was not so much in defining the desired end-state of trustworthiness but, rather, in defining how it is that one may demonstrate trustworthiness to a skeptical world. In modern systems, all artifacts are the sum of multiple parts—so the inquiry of trustworthiness is an inquiry into creation, into assembly out of diversely produced components, into distribution, and into use. In effect, it is to ask about the entire supply chain of ICT goods from conception to consumption.

Thus, the principal question addressed in the report is: What indicia of trustworthiness ought we to be looking for and why are those particular indicia of trustworthiness ones on which users should rely? As the report lays out in detail, our answer to this problem resonates in three different dimensions.

  • First, the inquiry implicates questions of technical capacity and security: How are we to know that the manufacturers of a hardware or software system have designed and built it in a way that is trustworthy and, therefore, secure against deliberate internal or external attack? In other words, has the manufacturer performed competently in constructing the digital artifact? Has the enterprise performed transparently? And has it used only those suppliers that have, in turn, also performed competently and transparently?
  • Second, the inquiry implicates the question of organizational intent as reflected in a corporate governance structure: How are users to be assured that manufacturers have not constructed and marketed a system that affords either themselves or some other party privileged access and control, without the users’ knowledge? In other words, does the manufacturer see a value to be gained for itself from the design, to the disadvantage of the consumer?
  • On yet a third axis of inquiry, the problem of trust can be seen as a question of policy and law: What protections exist against inappropriate intervention into the manufacture or operation of an ICT system? And, relatedly, what are the mechanisms for determining the appropriateness of such interventions to the extent they may exist? Are there flaws in the system that some third party, for whatever reason, can benefit from exploiting?

Of course, the report recognized that a dispositive assessment of trustworthiness will never be feasible. However, we concluded that evaluating relative trustworthiness is possible. Thus, even though it is the nature of ICT systems that risks of compromise can never be fully eliminated, we concluded that risks can be mitigated to a degree that often depends strongly on the effort devoted to the task, and that intercomparisons of trustworthiness (the flip side of risk) are possible.

To that end, we developed a comparative checklist of steps an organization can take that significant stakeholders might agree demonstrates its products to be trustworthy—what one might call a functional definition of trustworthiness. Even without the prospect of precisely assessable levels of trustworthiness, the report concludes that, within such a framework, assessments can be made with a relatively high degree of confidence.

The value of a framework based on agreed-upon principles should be evident. Using these principles—as well as acceptable evidence—as a guideline, ICT manufacturers and users, including organizations and consumers, can analyze comparative risks and make reasoned risk-benefit and resource-allocation decisions.

In the end, the report articulated a set of principles that, in our view, form a suitable structured framework for analysis to guide the assessment of ICT trustworthiness. Our framework distinguishes between trustworthiness criteria that are analytic in nature (that is, capable of definitive measurement) and those that are axiomatic (that is, procedures or rules that are indicative, but not dispositive, of trustworthiness). Though the framework we suggest is detailed at some length in the report, at the top level, the report summarizes this framework as follows:

  • Maximize transparency—The single greatest engine of trust is transparency. Visibility into the production and the operation of any system (whether technical or not) increases the ability to test a system and verify its operation.
  • Ensure accountability—Transparency can help users make better decisions, but from a trustworthiness standpoint, it must be paired with accountability. Those who put ICT artifacts into the stream of commerce must be accountable for their trustworthiness. Mechanisms for accountability may vary depending on the artifact, regulatory environment, and context of use, but their absence is a sign of systemic untrustworthiness.
  • Allow for independence of evaluation—Often the transparency and accountability of a system are self-assessed by the manufacturer and/or designer of the digital artifact, both of whom have a conflict of interest in engaging in this analysis on their own. In general, independent (often outside) evaluation is superior.
  • Prefer provable analytic means of trust verification over axiomatic, nonverifiable means—As a general matter, trustworthiness assessments that are analytic in nature are superior to those that are axiomatic. We recognize that analytic means of establishing trust are significantly more costly and difficult to implement, but where feasible, they are preferable.

People or organizations making trustworthiness judgments regarding given acquisitions or implementation decisions will undoubtedly find that the various factors we have outlined often point in different directions. Thus they will have to draw conclusions about how different trustworthiness measures trade off against one another. Trade-offs in identifying acceptable risks may be driven by a variety of factors, including the risk tolerance of the user or organization, the use case for the product, and the environment in which the product will be used.

There are those hoping for a one-and-done, universally applicable algorithm to evaluate trustworthiness. Alas, there are no silver bullets for trustworthiness, and no substitute for careful, considered judgments made on a case-by-case basis. By distinguishing between products that bear many of the indicia of trust as outlined in our report and those that do not, we believe that the principles described in the report can help to frame trustworthiness assessments of the supply chain and provide a way forward for decision-makers.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company, and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
Benjamin Wittes is editor in chief of Lawfare and a Senior Fellow in Governance Studies at the Brookings Institution. He is the author of several books.