On Vendor-Installed Wiretaps for Smart Devices
Susan Landau’s posting addresses a number of issues that have also been conveyed to me offline by others. My original post suggested that the vendors could use software updates to install wiretapping capabilities on a targeted individualized basis.
Susan identifies two advantages of my proposal (simplifying interception and saving money on installing wiretaps), and then tallies up the societal cost, which she says is too high to make my approach viable.
Ironically, I didn’t think of either of the advantages that Susan identifies. The Clark, Bellovin, Blaze, Landau paper called for court orders and custom-built wiretaps tailored to the specific vulnerability in question. I don’t see why my proposal would necessarily eliminate court orders for installation—legislation or regulation could direct that vendors be allowed to introduce customized, targeted wiretapping updates only when presented with court orders.
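To make the mechanism concrete, here is a minimal sketch, in Python, of how an update server could gate a targeted, intercept-capable build behind a court-order record. The names here (CourtOrder, select_build, the build labels) are hypothetical illustrations, not any vendor’s actual update pipeline.

```python
# Hypothetical sketch (not any vendor's real pipeline): the update server
# serves the standard build to everyone, and selects the intercept-enabled
# build only for a device named in a court-order record that is on file
# and has not expired.

from dataclasses import dataclass


@dataclass
class CourtOrder:
    order_id: str
    device_id: str   # the single device named in the order
    expires: int     # Unix timestamp after which the order lapses


def select_build(device_id: str, now: int, orders: list[CourtOrder]) -> str:
    """Return which update build to serve to this device."""
    for order in orders:
        if order.device_id == device_id and now < order.expires:
            # Targeted update: only the device named in a live court order
            # ever receives the intercept-capable build.
            return "standard-build+intercept"
    # Everyone else gets the ordinary, unmodified update.
    return "standard-build"
```

The point of the gate is that the wiretap capability never rides along with the general update stream; it is selected per device, and only when an authorization record exists.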
Susan also implies that custom-built wiretaps would be more expensive than vendor-developed wiretaps, thus inhibiting the use of the former more than the latter. I’m entirely sympathetic to the argument that cheap things are used more freely, and that economic barriers help to prevent overuse (and perhaps misuse). But if that is the problem, a fee could be levied—statutorily—on law enforcement agencies every time they are granted a wiretap, to be paid out of their operating budgets. Today there is no explicit charge for getting a wiretap, but there could be. That fee could be paid into the general revenue budget, into a special fund to support the ACLU (or Lawfare!), or into any other fund—the only requirement is that the amount of the fee be unavailable to support the other law enforcement activities of that agency.
Susan identifies two costs: (a) damaging (further) the trust relationship between the customer and the vendor, and (b) inhibiting the patch process, which repairs vulnerabilities as they are found.
On (a), I’m not at all persuaded that the Snowden revelations destroyed the trust relationship between vendors and customers. I think the Snowden revelations destroyed the trust relationship between companies and customers on the one hand and the U.S. government on the other. In any event, vendors have always been willing to comply with lawfully issued requests to conduct wiretaps. So if you grant that vendors would install wiretap software only upon a lawful order, the situation is no different from what has existed for many years.
On (b), those who will not patch their systems fall into two categories—the good guys who won’t patch and the bad guys who won’t patch. I don’t really care about the latter category—in fact, the more vulnerable the systems of the bad guys are, the better off we are as a society.
The category of concern is the good guys who won’t patch because of the possibility that the government might order a vendor to introduce a wiretap when it should not. I think that the number of people in this group is small. Experience with telephones is suggestive to me: phones today are wiretappable, and I don’t know of anyone who argues that they would be used more if they were not. I suspect that Susan (and others sympathetic to her views) believe this number is large. I don’t know of any reliable evidence pointing to either “small” or “large,” but I find it hard to believe that “large” would be large enough to significantly affect the herd immunity from which we all benefit.
So at least in principle, the issue comes down to an empirical question: what fraction of the user base would refrain from patching because of the possibility that the government could order vendors to install wiretapping software through the update process? If that fraction is large, I’d acknowledge that it poses a significant security defect in my proposal.
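As a back-of-the-envelope illustration of why the size of that fraction matters, the following sketch uses made-up numbers (nothing here is measured data) to show how an added wiretap-driven refusal rate would change the overall unpatched share, assuming the two groups of non-patchers overlap only by chance.

```python
# Illustrative arithmetic with hypothetical rates, not empirical estimates:
# how the fraction of "good guys" who refuse updates out of wiretap concerns
# adds to the pool of unpatched, exploitable devices.

def unpatched_fraction(baseline_nonpatch_rate: float,
                       wiretap_refusal_rate: float) -> float:
    """Fraction of devices left unpatched, assuming the two groups are
    statistically independent (itself an assumption)."""
    return 1 - (1 - baseline_nonpatch_rate) * (1 - wiretap_refusal_rate)


# If 20% already delay or skip patches for ordinary reasons, an extra 1%
# refusing over wiretap fears moves the unpatched share to about 20.8%;
# an extra 10% would move it to 28% -- the "small vs. large" question.
print(unpatched_fraction(0.20, 0.01))   # ~0.208
print(unpatched_fraction(0.20, 0.10))   # 0.28
```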
Susan also claims that “because patches often break running software, people already delay in installing patches.” I know that this happens from time to time, but I’d quarrel with the word “often.” More importantly, I think the largest source of delay in applying patches is that carriers delay updates, especially for Android phones (see, for example, here and here). In this environment, I’m much less concerned about users delaying the installation of patches.
There may be other flaws lurking in my proposal, and I’m happy to acknowledge it as a bad idea if someone points them out. But I don’t think Susan’s points suffice.