We Need to #AudittheVote—and It Has Nothing to Do with Who Becomes President

April Doss
Monday, November 28, 2016, 2:18 PM

Published by The Lawfare Institute
in Cooperation With
Brookings

The internet has been buzzing with news and speculation regarding state election recounts and a movement to audit voting results. Last week, accounts emerged that well-respected researchers had found indications of potential irregularities in the voting tallies in Michigan, Wisconsin, and Pennsylvania. Those, of course, were three states pivotal in securing Donald Trump’s electoral college win, and so the movement has taken on an undeniable political character; many supporting Hillary Clinton view it with dim hope of a second chance, while those who support Trump claim it is desperation from sore losers. And the President-elect himself weighing in on Twitter yesterday evening—alleging widespread voter fraud without any evidence—certainly has not dampened the partisan sentiments.

But it is important to separate the partisan issues from the real reasons to audit the vote, which have nothing to do with who won or lost. We should audit the vote because in cybersecurity matters, when there is any indication of a problem, investigating is the responsible and proper course.

The individuals raising concerns are serious researchers and computer security experts, not political hacks. J. Alex Halderman, whose concerns first began getting traction following an NYMag article, is the director of the University of Michigan Center for Computer Security and Society and the godfather and dean of research into the vulnerabilities of electronic voting machines. Halderman, along with voting rights attorney John Bonifaz, makes a compelling case for why it is possible that there were irregularities in the vote counts in those states. And theirs haven’t been the only voices: USA Today published a similar opinion piece by Ron Rivest, one of the founding fathers of modern computer security and a professor at MIT, and Philip Stark, associate dean of mathematical and physical sciences at UC Berkeley; both men have served as technical advisors to the U.S. Election Assistance Commission.

Their evidence and arguments are well-documented elsewhere, so I won’t rehash them here. But it is important to note that none of these experts’ opinions are tied to the political fortunes of Republicans or Democrats. These are simply computer and election security experts offering their view that there is some evidence of a problem. While Nate Silver and other statisticians have posited counter-explanations for that evidence, it is to be expected that different disciplines focus on different dimensions of a problem and offer different explanations. Silver’s theory that demographics, not hacking or computer error, are responsible for the deviation in the numbers is certainly plausible, perhaps even likely. But the presence or absence of other possible explanations is not a reason to forgo an audit. When faced with a potential cybersecurity problem, the purpose of an audit is to confirm or eliminate that possibility, not to wait until compromise is the definitive explanation. The voices suggesting that we don’t need an audit because the results are probably correct are missing the point.
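
For readers curious what such an audit looks like in statistical terms, here is a minimal sketch in the spirit of the ballot-polling risk-limiting audits that Stark helped pioneer (the BRAVO-style sequential test of Lindeman and Stark). This is an illustration under simplifying assumptions—a two-candidate race with no invalid ballots—not a description of any state’s actual procedure:

```python
import random

def bravo_audit(reported_winner_share, max_ballots, risk_limit=0.05, seed=1):
    """Simplified ballot-polling sequential test (BRAVO-style).

    Null hypothesis: the reported winner actually received at most half
    the votes (i.e., the reported outcome is wrong).  Each sampled ballot
    multiplies a likelihood ratio: by 2*p for a winner ballot, by 2*(1-p)
    for a loser ballot.  The audit stops and confirms the outcome once
    the ratio reaches 1/risk_limit.  Returns the number of ballots
    sampled, or None if the budget runs out without confirmation.
    """
    rng = random.Random(seed)
    p = reported_winner_share
    ratio = 1.0
    for n in range(1, max_ballots + 1):
        # Simulate drawing a random ballot from an electorate whose true
        # split matches the reported share (reported outcome is correct).
        for_winner = rng.random() < p
        ratio *= 2 * p if for_winner else 2 * (1 - p)
        if ratio >= 1 / risk_limit:
            return n  # outcome confirmed at this risk limit
    return None

# A 52/48 race requires sampling far more ballots than a 60/40 race.
close = bravo_audit(0.52, 100_000)
wide = bravo_audit(0.60, 100_000)
print(close, wide)
```

The point of the sketch is that an audit is a bounded statistical check, not a full recount: the closer the reported margin, the more ballots must be examined before the reported outcome can be confirmed at a given risk limit.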

I spent over a decade working at the National Security Agency, where I learned more than a little bit about the capabilities and vulnerabilities of computer and technology systems. Since leaving government, I’ve been in private law practice, advising clients on cybersecurity preparedness, data privacy, and cybersecurity incident response. What strikes me most about the #AudittheVote discussion is how much the lessons of a private sector approach to cybersecurity have to offer political thinking on the wisdom and feasibility of auditing election results.

When a client comes to me and says they’ve seen something anomalous that makes them think they could have a vulnerability in their technology systems, their first inkling often doesn’t come from something obvious. More often, they piece together their awareness of the risk from an array of circumstantial evidence. Someone’s been scanning their networks, or a new zero-day exploit was announced that matches an operating system or application they use, or they realize employee training hasn’t kept up with advances in social engineering techniques, or they discover that someone was using peer-to-peer file sharing programs on systems where those programs shouldn’t have been installed. In other words, very often the client doesn’t start out with any definitive knowledge that there’s been a compromise of their systems. Far more often, they start out with pieces of information suggesting the possibility that their systems were insecure. And from there, they have to decide what to do.

And once in a while, someone will ask, “Do we really have to look into this? After all, we aren’t sure there’s been a breach.” And just like Silver and others, they will offer other possible explanations for the anomaly.

That’s when I, as a lawyer, get to explain all of the regulatory consequences and enforcement actions and potential class action litigation they might face for failing to act on information about a possible breach. When the client is in a regulated sector like health care, financial services, or higher education, the case is easy to make—those groups have to “look into it” as a matter of law. But even in non-regulated sectors, it is not all that difficult to convince clients why ignoring a possible problem is not a great business call. The cost of a data breach or other system compromise typically goes up, not down, over time. And in many cases, the escalation of those costs is directly tied to the timeliness of the response, in part because the regulatory and enforcement consequences can be significantly greater if the company drags its feet in carrying out a thorough forensics investigation and in promptly making any notifications required under state or federal law. But the risks are not only of litigation or enforcement action: there is also the full set of risks that come with not being sure whether your technology systems are secure. It’s risk to technology infrastructure and system availability; risk to the continuity of business operations (ransomware, anyone?); risk to intellectual property; risk to the confidentiality, integrity, and availability of information in the future as well as today.

By the time we reach the end of the list, most companies understand why they need to swallow the bitter pill of the costs involved in investigating and in reporting data breaches. Companies need to be sure whether their systems and data are secure, and to provide the appropriate level of transparency about what happened, why, and when, and who is affected. From there, those same clients can use what they’ve learned to harden systems, carry out more effective training, and take other actions to reduce the risk that they’ll be hit with the same kind of vulnerability again.

Clients generally reach the conclusion that they already intuitively knew was correct: it is better business to do the right thing, and promptly, than it is to hide one’s head in the sand or hope that ignoring a problem will make it go away. (And those that decide otherwise, and gamble on not investigating or reporting a breach, rarely make the same mistake twice.)

Here, good democracy and governance decisions look a lot like good business decisions.

Potential compromise of voting systems and potential compromise of corporate computing systems are not actually all that different. In both cases the compromise could be intentional, resulting from an external attack, or inadvertent, resulting from some kind of error in the operating system or applications or, more often, in the human interaction with them. (Studies such as this one from the Ponemon Institute consistently show that the majority of cybersecurity incidents are the result of insiders, with authorized access to a system, who took actions that compromised the confidentiality, integrity, or availability of information. Human error is more common than malicious insiders, but both are consequential.)

And in both business and election systems, the stakes are bet-the-business high. For a private sector entity whose systems have been compromised, the impact can be devastating, leading to loss of reputation and business and staggering out-of-pocket costs. In elections, the stakes of breaches go to the integrity of the democratic process.

But while plenty of businesses, such as health care and financial services sector entities, are subject to detailed regulations governing data security and protection, essentially nothing is required for voting machines. Businesses that handle personal information are swept into many state data breach notification laws, and voluntary standards like the Payment Card Industry Data Security Standards govern handling of payment card information. This means, without exaggeration, that the mom-and-pop craft shop selling handmade items on Etsy is subject to more stringent requirements for data security than our national voting apparatus.

And yet we resist calls to audit voting machines.

This can’t be right. But it can be fixed. It can be fixed by taking seriously the concerns of these respected computer security specialists and auditing this year’s election results. The audit should not be done in hopes it will change the outcome—it is overwhelmingly unlikely an audit would show inaccurate results in all three states, let alone inaccuracies of a large enough scope to alter the overall result.

But the instinct to not audit the vote, because maybe there is some other explanation, is the same head-in-sand thinking that has harmed so many companies in the past.

Yes, there is some non-negligible expense and hassle in recounts and auditing voting machines. Yes, there is a strong temptation not to want to find a problem, even if one exists. But electoral legitimacy, like business reputation, does not operate on the principle that what we don’t know can’t hurt us.

And anxiety that conducting the audit itself undermines legitimacy is overstated. We have had recounts in the past, including ones which only verified the presumptive winner. In fact, at this moment, the Republican governor of North Carolina has asked for a recount of the vote in an election in which the Democratic gubernatorial candidate has held a thin lead (despite the state breaking for Trump at the top of the ticket). While there has been pushback against Governor Pat McCrory’s unsupported claims of “widespread voter fraud,” we collectively seem comfortable with this as an ordinary recount request.

If election legitimacy can survive recount requests based on allegations (specious or not) of traditional voter fraud with no evidence, then surely it can bear the desire to “double check” when there is some evidence of computer-based problems.

This has not been a normal election cycle. The widely documented evidence that Russia was trying to sway this election creates circumstantial evidence of both motive and opportunity for a data breach that falls far outside the usual norms. That is a reason for state election officials to treat this as a time of greater-than-usual risk when verifying machine-tallied results.

But setting aside the specific features of this election, if we’re going to have long-term confidence in our local, state, and national elections, we need to get to a new conceptual place. Long-term confidence and legitimacy depend on a future where calls to verify the accuracy of computer-generated vote counts are viewed as a normal and appropriate means of ensuring the validity of our election results—precisely the way recounts reinforce the legitimacy of close elections.

Presidential elections are the most important action of participatory democracy that our citizens take. And we need to find ways to incentivize or require states to ensure that voting systems maintain the appropriate levels of confidentiality, integrity and availability. That starts with recounts in Wisconsin, Pennsylvania, and Michigan.

Is it likely to change the results of the 2016 presidential election? No. But this is not a question of partisan rancor. We ought to be applying similar requirements for data security, investigation of potential compromises, and transparency in the reporting of results to our biggest decisions as we do to far smaller ones. For the current and long-term legitimacy of our election process, we should #AudittheVote.


April Doss chairs the Cybersecurity and Privacy practice at Saul Ewing, and is the former Associate General Counsel for Intelligence Law at NSA. [The views expressed here are the author’s and not those of the NSA.]
