A Few Easy Steps to Campaign COMSEC
As experts debate the policy implications of the DNC hack and the need for attribution, there appears to be a consensus that campaigns need better cybersecurity. The unanswered question is how, exactly, to secure internal campaign communications in a hostile world. One assumption seems to be that campaigns will need to invest significant resources in acquiring expertise and tools, and may even need to rely on the capabilities of the federal government to improve security. But that's not the case.
It is perhaps a testament to how bad campaign security currently is that things could be dramatically improved with a few common-sense security measures and widely available, in some cases free, commercial products.
Consider the basic requirements. The system must be inexpensive, easy for non-experts to use, and relatively difficult to screw up. Attackers may vary in skill, but we can presume they will be highly persistent and that their goal will be communication content.
The biggest problem here is email. It is easy to phish people for passwords or to deliver malware, email is grossly insecure, and it aggregates tantalizing data in one convenient place. It might as well carry a sign: "Fancy Bear, hack this!"
Therefore, a primary goal of campaign cybersecurity should be to get people off of email, and then to ensure that the email which remains is as safe as possible. Behavioral nudging could help. An automatic alert should ask each sender: "Does this email pass the New York Times test?" If so, proceed. And every email should be stamped with an automatic header, "Email is insecure, do not use for sensitive communications," and include the sender's phone number. Make the alternative as easy as possible.
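For the technically inclined, the nudge could be implemented as a small hook in the campaign's outgoing mail path. The sketch below is purely illustrative: the prompt, header name, and banner wording are assumptions, not features of any existing mail system.

```python
# Illustrative sketch of an outgoing-mail nudge. The hook point, header name,
# and banner text are assumptions, not part of any existing mail product.
from email.message import EmailMessage

BANNER = "Email is insecure, do not use for sensitive communications."

def passes_nyt_test() -> bool:
    """Ask the sender the 'New York Times test' before the message is sent."""
    answer = input("Does this email pass the New York Times test? [y/N] ")
    return answer.strip().lower() == "y"

def stamp_outgoing(msg: EmailMessage, sender_phone: str) -> EmailMessage:
    """Stamp the message with the warning and the sender's phone number."""
    msg["X-Campaign-Warning"] = BANNER  # hypothetical header name
    body = msg.get_content()            # assumes a plain-text body
    msg.set_content(f"{BANNER}\nCall or message instead: {sender_phone}\n\n{body}")
    return msg
```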
Particularly dangerous attachments and links, such as JavaScript and executables, should be automatically deleted. All other attachments and links should be automatically quarantined unless the sender is known and the communication is verified with a valid DKIM signature.
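A rough sketch of that inbound policy, assuming the third-party dkimpy package for signature checks; the extension list, the allow-list of known senders, and the verdicts are placeholders that a real deployment would tune.

```python
# Sketch of the inbound attachment policy. Assumes the third-party "dkimpy"
# package (import dkim); extension list, sender allow-list, and verdicts are
# placeholders, not a hardened mail filter.
import email
from email import policy

import dkim  # pip install dkimpy

DELETE_EXTENSIONS = {".js", ".exe", ".scr", ".vbs"}    # always dropped
KNOWN_SENDERS = {"press@example-campaign.org"}         # hypothetical allow-list

def classify_message(raw_bytes: bytes) -> str:
    """Return 'delete', 'quarantine', or 'deliver' for an inbound message."""
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    sender = msg.get("From", "")

    has_attachment = False
    for part in msg.walk():
        filename = (part.get_filename() or "").lower()
        if filename:
            has_attachment = True
        if any(filename.endswith(ext) for ext in DELETE_EXTENSIONS):
            return "delete"                            # dangerous types never delivered

    dkim_valid = dkim.verify(raw_bytes)                # cryptographic sender check
    sender_known = any(addr in sender for addr in KNOWN_SENDERS)
    if has_attachment and not (dkim_valid and sender_known):
        return "quarantine"                            # hold until a human reviews it
    return "deliver"
```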
Campaign workers still need a "store and forward" group communication channel—one that is secure, easy to use, and does not leave a central repository to attack. This should also stand in for phone-based communication since, while actual calls are ephemeral—and thus resistant to retrospective attack—voicemails are not.
WhatsApp checks nearly every box. It is easy, free, familiar to most people, and widely available. It is now secured using the state-of-the-art Signal protocol, and WhatsApp's transparent security would make even federal law enforcement jealous (really jealous).
A secure messaging platform is not enough; campaign workers also need secure devices. That means iPhones for everyone. Android is a disaster, with even "supported" devices not receiving timely security fixes. And while iPhones are far from perfect—for example, there is now a full remote exploit chain for iOS 9.3.2—even a five-year-old iPhone 4S can update to 9.3.3.
It is important not to over-secure campaigns to the point that individuals will circumvent controls, defeating the entire purpose. For example, while Bluetooth keyboards are not ideally secure, eliminating the convenience of a keyboard risks driving people to less secure practices like using their laptops to send email. And because WhatsApp has "forward secrecy," so long as all participants delete their records there is no way to recover a chat, even if an attacker recorded all of the encrypted traffic.
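To see why forward secrecy has that property, consider a toy ephemeral key exchange. This is only an illustration of the principle, not the Signal ratchet WhatsApp actually uses: the keys exist only for the conversation, so once they are deleted, recorded ciphertext cannot be decrypted by anyone.

```python
# Toy illustration of forward secrecy with an ephemeral Diffie-Hellman exchange
# (pip install cryptography). This is NOT the Signal protocol itself, only a
# demonstration of why deleting keys makes recorded traffic unrecoverable.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each side generates a fresh, one-time key pair for this session.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Public keys travel over the wire (an eavesdropper can record them); only the
# holders of the private keys can derive the shared secret.
shared_secret = alice_priv.exchange(bob_priv.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"chat-session").derive(shared_secret)

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(session_key).encrypt(nonce, b"meet at HQ at 9", None)

# Once both sides delete their private keys and the session key, the recorded
# ciphertext and public keys are useless: there is nothing left to steal
# retroactively.
del alice_priv, bob_priv, session_key
```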
If WhatsApp enabled iPad support, it would nearly eliminate the need for "real" computers, which are less secure, for most campaign users and purposes. Of course, it's important to use central administration to disable iCloud backup—and instead use local sync within campaign headquarters—and all users should have 1Password installed, use the fingerprint reader, and have same-day automatic updates enforced.
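One way to push the iCloud-backup restriction is through whatever central administration (mobile device management) system the campaign uses, via a configuration profile. The sketch below generates a minimal restrictions payload with Python's plistlib; the identifiers are made up, and the key names should be verified against Apple's current configuration-profile documentation before anything is deployed.

```python
# Minimal sketch of a configuration profile that disables iCloud backup.
# Identifiers and UUIDs are placeholders; verify payload keys against Apple's
# current MDM / configuration-profile documentation before deploying.
import plistlib
import uuid

restrictions = {
    "PayloadType": "com.apple.applicationaccess",   # Restrictions payload
    "PayloadIdentifier": "org.example-campaign.restrictions",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "allowCloudBackup": False,                       # no iCloud device backups
}

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "org.example-campaign.baseline",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "PayloadDisplayName": "Campaign iPhone baseline",
    "PayloadContent": [restrictions],
}

with open("campaign_baseline.mobileconfig", "wb") as fh:
    plistlib.dump(profile, fh)
```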
The campaigns should also shift to Google to centralize calendars, contacts, and similar systems. True, Google is not secure against the US government, but presumably the presidential campaigns don't include the feds in their threat model. Bottom line: Google is going to do a better job running these systems than most of the people a campaign could hire.
By availing themselves of a few simple and relatively inexpensive commercial products—WhatsApp, iPhones, and Google—the campaigns could drastically improve their security. The approach is far from foolproof, but it raises the cost to nation-states and other sophisticated adversaries, who would now need to deploy high-value zero-days to compromise communications. And even when they succeed, they will be limited to the data available to whatever individual user they manage to compromise, not a central repository ripe for Wikileaking. Sorry, Julian.