On the Utility of Transparency through Disclosure of Software Bugs

Paul Rosenzweig
Tuesday, September 6, 2016, 8:45 AM

Last week, co-blogger Nick Weaver posted a short summary of why he holds the view that Apple products are safe, but that Android products systematically are not. His recommendation was to throw your Android phone in the garbage and he asked, somewhat rhetorically, what could be done about the situation.

Never one to recognize a rhetorical question, I tweeted an answer—that the answers to market failure are well known and generally limited to four types of government intervention: taxation, subsidy, regulation, and the imposition of liability. This was a short-form reference to an earlier article I wrote for Hoover discussing the public economics and theory of cybersecurity at some length. Those who want a defense of the general proposition are welcome to dive in. (It is also possible for the government to influence the private sector by acting as a private actor—i.e., by using its market power as a purchaser.)

This response, however, drew an interesting question from co-blogger Dave Aitel. He asked, "What about transparency?"—by which I take him to mean, "What about a requirement or incentive program to urge smartphone manufacturers to disclose the nature of the code they are using, so that observers can judge the adequacy of the code and make informed purchasing decisions based on those disclosures?" Transparency is often thought of as a way of adding better information to markets so that they function more efficiently. It is seen most frequently as the response to a phenomenon known as "information asymmetry"—a situation in which one party to a transaction knows more than the other does; here, the phone manufacturer knows more about the phone's code weaknesses than the purchaser. I told Dave that the answer required a blog post—and here it is. Some (though by no means all) of this answer is likely generalizable to all code-based products.

To begin with, it isn't clear that the Android weakness actually presents an information asymmetry in need of rectification. As Nick points out, the weakness is widely known at a high level of generality (virtually every lecture I give is accompanied by the question: "Is Apple really safer?"), and it isn't clear that more detail would be useful to decision makers. It seems pretty clear that purchasers are preferring familiarity, convenience, and price over security. That's precisely why there aren't a lot of Android phones in the dumpsters—and it is actually a sign that the market for information is functioning well. This looks less like an information problem than a classic market failure, in which the preferences and choices of individual actors create externalities (system insecurity) that do not maximize collective social utility. A particularly stark example of this failure is in the Android ransomware space, where, as Dave Aitel says, we are reaching a tipping point. So at first blush, I'm not sure transparency would matter at all from an economic perspective.

But let's assume for the sake of argument that the premise of Dave's question is correct and that greater transparency and disclosure would improve information flows and help correct a market plagued by the asymmetry of the manufacturer's information. Fair enough. But even if we were to conclude the problem was worth addressing, it is still not necessarily a soluble one.

The reason, in this case, is that cures for information asymmetry work only if the recipient has adequate capacity to accept, interpret, and act on the disclosed information. In the commercial context (sales to big corporations, for example), that capacity probably does not exist today, but it could be built. Software disclosure requirements would let large institutional purchasers exercise their market power with better information. Building that capacity would, however, require a significant investment of resources—code auditors, lawyers to enforce promises, insurers, and so on. It is at least an open question whether those costs would be repaid in more secure Android phones or whether the transaction costs incurred would exceed the benefits. And the only way to make that empirical judgment is, sadly, to run the real-world experiment.

I confess to an instinct that the benefits would be greater than the costs—but part of my instinct involves the anticipation of free riders, and I'm not sure we will see them in the cyber context. Here's what I mean: in the equity market, the SEC requires significant disclosures about the financial health of a corporation. Almost no individual investors have the expertise to interpret those disclosures. So they ride on someone else's coattails (not all free riding is bad). They may, for example, follow the investments of Goldman Sachs, reasoning that if Goldman likes an equity, it has probably done its due diligence. More conservative investors might follow the California Public Employees' Retirement System—again, because CalPERS has enough resources to take the disclosed information and make use of it, and so, as a wise consumer, I might just follow it into the market.

The problem here is that individual consumers of, say, smartphones probably won't be able to follow larger institutional purchasers. It seems unlikely to me that anyone would buy a Nexus because Nick said to (a bad move on their part), and even less likely that they would be in a position to learn that (hypothetically) General Motors had decided to buy only Apples or only Nexuses (is the plural of Nexus Nexi?).

On the other hand, there are action-forcing "nudges" that large institutions (like GE or the US government) might use to "signal" the better choices to individual purchasers. For example (HT: Dave Aitel, again), the government might only allow employees to BYOD approved devices—say, iPhones and not Androids. GE might only send corporate email to a Nexus and not a Samsung. And so on. If institutional systems managers used their influence, they might change individual behavior. Of course, their own incentive for making those "nudges" would depend on institutional cost-benefit analysis as well.
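By way of illustration only, here is a minimal sketch of what such an institutional gatekeeping nudge might look like in code. Everything here—the allowlist, the function, the device labels—is invented for the example rather than drawn from any real employer's policy:

```python
# Hypothetical sketch of an institutional "nudge": a corporate mail
# gateway that releases email only to device models the institution
# has vetted. The allowlist below is invented for illustration.
APPROVED_DEVICES = {"iPhone", "Nexus"}

def may_sync_corporate_mail(device_model: str) -> bool:
    """Return True if corporate email may be delivered to this device."""
    return device_model in APPROVED_DEVICES

# An approved phone gets corporate mail; an unvetted model does not.
assert may_sync_corporate_mail("Nexus")
assert not may_sync_corporate_mail("UnvettedPhone")
```

The mechanism matters more than the code: an employee who wants work email on a personal phone now has a concrete, personal reason to buy the more secure device.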

But even after that, I imagine their influence would be inadequate to change individual behavior at scale. Not enough people work for GE or the US government for them to be a dominant part of the market, I fear. And so we come to the point of the question—would code disclosure be of value to the average consumer at the point of sale? As to that, sadly, I think the answer is likely "no." We have experience trying to educate consumers in the purchase of technical products in other arenas (think cars, for example), and those efforts have met with only modest success. That difficulty is, at bottom, what lies behind automobile "lemon laws," which require used-car dealers to take back lemons—precisely because the consumer can't tell a car is a lemon until after driving it off the lot. (This is, of course, the exact opposite of today's software rule of caveat emptor.) Rather than disclosure, we might need a lemon law for smartphones—though I confess I can't see how to build one.

One solution, of course, is for some trusted party to interpret the disclosures for the consumer—a Consumer Reports for cyber, if you will. The hacker Mudge is trying to start something like that, but the market gap remains largely unfilled. Perhaps Dave, Nick, Matt Tait, and I can monetize this concept and get rich?

Two final points on the issue of transparency are worth mentioning. The first is that it won't happen naturally. Those who benefit from information asymmetries (and from the lack of liability for failure to disclose) typically need to be dragged kicking and screaming to the party. I suppose some manufacturer might see full disclosure of source code as a competitive advantage (think Linux, I guess), but as the market is not currently awash in those sorts of disclosures, I suspect this will require legislation. Readers of this blog are as well positioned as I am to hazard a guess as to whether such legislation is in any way feasible. Color me skeptical in the real world (with the caveat that some enterprising state like California might lead the way).

Finally, I do want to speak in favor of a limited type of disclosure that might be useful. I derive it from a tweet Nick sent out in a follow-up to his post, in which he identified some types of vulnerabilities (he named, for example, SQL injections and stack overflows) that are so well understood, so easily fixed, and so catastrophic in effect that failing to patch them amounts to per se negligence—what I as a lawyer might call gross negligence.
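To see why SQL injection sits in that category, consider a minimal sketch in Python using the standard library's sqlite3 module (the table and the attacker-supplied string are invented for illustration). The vulnerable query and its fix differ by a single line:

```python
import sqlite3

# A toy database standing in for a real application's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input is pasted directly into the SQL text, so the
# quote characters in it rewrite the query's logic and match every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # leaks the whole table

# Fixed: a parameterized query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # empty -- no user literally has that name
```

The distance between the broken version and the fixed one is a single line, which is precisely what makes shipping the former look like negligence rather than bad luck.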

I can see the construction of a small list of such manifest vulnerabilities (by whom? I don't know—maybe NIST? Maybe DHS Cyber?) tied to a mandatory disclosure requirement about those same vulnerabilities. Does your system have SQLi problems? Given how widely such problems are known to be deep flaws, that type of disclosure might drive insurers and tort lawyers to take action, with a resulting benefit that would be worth the cost of setting up the system. (To be sure, the list might be short, as there is often disagreement about how bad a given flaw is—but even a short list would be better than none at all.)
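What that disclosure might look like is easy enough to sketch. The CWE identifiers below are real, standardized labels for the two weakness classes Nick named; the attestation format wrapped around them is purely my invention:

```python
# Real CWE identifiers for two well-known weakness classes.
MANIFEST_VULNERABILITIES = {
    "CWE-89": "SQL injection",
    "CWE-121": "Stack-based buffer overflow",
}

# Hypothetical vendor attestation keyed to that short list: for each
# weakness class, has the vendor audited for it, and do known,
# unpatched instances remain? Vendor and product names are invented.
disclosure = {
    "vendor": "ExampleCo",
    "product": "ExamplePhone 1.0",
    "attestations": {
        "CWE-89": {"audited": True, "known_unpatched": False},
        "CWE-121": {"audited": True, "known_unpatched": True},
    },
}
```

A one-line attestation per flaw class is exactly the kind of record an insurer or a tort lawyer could act on.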

In any event, that's a long answer to the question of "will transparency help?" The short answer: Not as much as we would hope, I fear.


Paul Rosenzweig is the founder of Red Branch Consulting PLLC, a homeland security consulting company and a Senior Advisor to The Chertoff Group. Mr. Rosenzweig formerly served as Deputy Assistant Secretary for Policy in the Department of Homeland Security. He is a Professorial Lecturer in Law at George Washington University, a Senior Fellow in the Tech, Law & Security program at American University, and a Board Member of the Journal of National Security Law and Policy.
