
Three Questions on COVID-19 and Digital Technologies

Andrew Burt
Monday, May 11, 2020, 8:10 AM

There are long-overlooked dangers embedded within the adoption of digital technologies—and as society shifts online during the pandemic, consumers and policymakers must figure out how to address those risks.

Zoom Mobile Application (Image by antonbe, https://pixabay.com/images/id-5064083/; Pixabay License, https://pixabay.com/service/license/)


If software has been eating the world for the past decade, it has consumed the world entirely in recent weeks. From happy hours and job interviews to virtual funerals, weeks of quarantine and self-isolation in the midst of the coronavirus pandemic have pushed citizens and consumers deeper into cyberspace. In the process, the once-clear distinction between digital and physical seems all but lost.

Which is a shame, because there are long-overlooked dangers embedded within the adoption of digital technologies—pitfalls that their designers and users have not managed to address despite decades of use. The current crisis is liable to make these dangers worse.

Security issues, for example, still plague the massive amounts of data generated online every day. From Yahoo’s 3 billion hacked accounts to the stolen records of nearly every national security employee in the U.S. government, organizations of all shapes and sizes have consistently failed to secure all the data they store. This stubborn fact is displayed in every new headline for every new breach.

Indeed, it’s not a coincidence that one of the biggest beneficiaries of the worldwide quarantine, the teleconference provider Zoom, has also come under fire for massive security vulnerabilities that left many of its users deeply exposed—even as Zoom’s usage and stock price have skyrocketed in recent weeks as more and more people work from home. There is, it turns out, a direct correlation between the vulnerability of data and the wide adoption of the software that collects it. The more widely any given application is used, the more likely it is to put that data at risk.

Connected technologies are clearly central to the many ways citizens and consumers will respond to the coronavirus crisis. How to best safeguard the data those technologies produce will hinge on the answers to three questions.

1. What Is Too Important to Take Place Online?

The physical world has been migrating into the digital world for quite some time. The use of physical money, for example, was in decline even before the pandemic, with roughly 90 percent of the world’s currency now stored in digital form. If the internet or power goes down, so too does the ability to transact, as occurs repeatedly in the aftermath of natural disasters. Even the most sensitive acts of war now take place virtually, as when a drone operator at an Air Force base in Arizona decides to let loose a Hellfire missile halfway around the world.

But the pandemic is accelerating all these trends. Over the past month, for example, government cabinet meetings in the U.K. have moved online, and the U.S. Congress is debating whether and how to do the same. Buried within the massive congressional aid package was a provision that allowed the Department of Justice, acting at the height of its powers, to conduct a host of criminal hearings virtually. Will we allow declarations of war to occur remotely? Surgeries? Food inspections? More?

Given the inherent vulnerabilities of cyberspace, some activities should surely remain analog—but what those activities are should not (and, indeed, cannot) be decided in retrospect. New rules in Congress may need to specify which decisions the government cannot make online. Sens. Dick Durbin and Rob Portman have been pushing to allow remote voting in the Senate, limited to times of national emergency and authorized only in 30-day intervals. Dozens of representatives in the House have been lobbying the Committee on Rules for similar changes. Meanwhile, regulatory agencies will need to determine new boundaries for remote medical activities, virtual food inspections and more. The Food and Drug Administration, for example, released an initial statement in March on foreign inspections, referencing a series of measures it may use in lieu of on-site inspections. There are a host of choices waiting to be made—policymakers need merely to make them.

2. What Is Temporary and What Is Permanent?

National emergencies spur quick reactions, as indeed they should. But many emergency responses are so drastic that they become long-term and ingrained. Income tax withholding, for example, was meant to boost government revenues during World War II, yet the program continues to this day.

Because tracking of potential coronavirus carriers will be central to allowing Americans to return to work before a vaccine is widely available, there appears to be no way around a massive increase in digital surveillance. The most thorough plans to lift stay-at-home orders, such as those put forth by California Governor Gavin Newsom, prioritize contact tracing and monitoring as the first of many steps for exactly this reason. This fact is also highlighted by the aptly named COVID-19 Consumer Data Protection Act, introduced in the Senate on April 30, which is specifically focused on protecting data used in response to the pandemic.

The challenge is therefore not whether to build a digital surveillance apparatus to track the spread of the virus, but when and how. These efforts might mirror authoritarian controls imposed by states like China, or they may more carefully attempt to protect individual privacy, as in some tracking efforts in the EU. In either case, understanding what is temporary, and how any new programs will be dismantled once they achieve their goals, will be among the most powerful ways to assert control over any new invasive technologies.
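To make the privacy-protecting variant concrete, consider how the decentralized designs circulating in Europe, such as the DP-3T proposal that also informed the Apple-Google exposure notification framework, keep the sensitive matching on the user's own device. The sketch below is illustrative only, assuming a radically simplified version of that approach; every function name in it is invented for this post and corresponds to no official API.

```python
# Minimal sketch of decentralized proximity tracing (an assumed design,
# loosely modeled on DP-3T; all names here are illustrative).
import hashlib
import hmac
import os

INTERVALS_PER_DAY = 144  # one rotating ID per 10-minute window

def new_daily_key() -> bytes:
    # Each phone draws a fresh random key every day. The key never
    # leaves the device unless the user reports a positive diagnosis.
    return os.urandom(32)

def ephemeral_id(daily_key: bytes, interval: int) -> bytes:
    # Derive the short-lived identifier broadcast over Bluetooth during
    # one time window. Without the daily key, observers cannot link
    # successive IDs to each other or to a person.
    return hmac.new(daily_key, interval.to_bytes(4, "big"),
                    hashlib.sha256).digest()[:16]

def exposure_check(published_keys, observed_ids) -> bool:
    # Runs locally on each phone: recompute every ID implied by the
    # published keys of diagnosed users and compare them against the
    # IDs this phone actually heard. No server learns who met whom.
    observed = set(observed_ids)
    return any(
        ephemeral_id(key, interval) in observed
        for key in published_keys
        for interval in range(INTERVALS_PER_DAY)
    )

# Toy demo: phone A broadcasts two IDs near phone B; A later tests
# positive and publishes its daily key; B detects the exposure locally.
key_a = new_daily_key()
heard_by_b = [ephemeral_id(key_a, i) for i in (12, 13)]
assert exposure_check([key_a], heard_by_b)
```

The design choice worth noticing is that a server only ever publishes the keys of users who volunteer a positive diagnosis; the sensitive question, "was I near one of them?", is answered on each person's own device. That architectural split, rather than any single app feature, is what distinguishes the privacy-protecting approaches from the authoritarian ones.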

The possibility of adopting permanent solutions to temporary problems is made even more likely by the very nature of the digital world. While connected devices and new applications may seem fleeting—Can’t an app just be deleted, or an old device thrown away?—the choices consumers make about their digital lives are longer lasting than they often think. The persistence of data, combined with the complexity of the networks through which that data transits, means that every device consumers buy and every line of code developers add has the tendency to increase vulnerabilities and place more data at risk. What seem like small choices in cyberspace often become major ones.

3. Who’s Incentivized to Protect All the Data We Will Generate?

Even in normal times, this question hasn’t been fully addressed. Over the past few decades, incentives have skewed toward speeding a product’s time to market in all but a handful of circumstances. The rush is all too frequently to develop software that works and that can be sold—and only afterward to ensure that the software is fully secure. This helps explain the endless series of breaches and failures that have become commonplace.

But the current crisis is likely to exacerbate these problems. Who will protect all the data that needs to be pooled and shared for contact tracing? Who will be held accountable if that new data is left vulnerable, stolen or exploited? One solution lies in attaching new legal liabilities to software creation, combined with creative forms of cyber insurance, which can encourage responsible coding while holding bad actors to account, as I’ve written before. The good news is that many of these issues are not new, and the cyber insurance industry has been making notable—if ultimately insufficient—headway on them in recent years. The bad news is that these efforts still haven’t received the attention they deserve.

Time for Decisions

This leads to what is perhaps the most important question of all: Will policymakers and consumers alike answer any of these three questions? Or will we instead rush to respond to the crisis without taking into account the larger impact of our collective actions—conducting more and more activities online, generating data that’s surveilled in ways that are too complex to understand, and placing privacy at risk? Admittedly, the current political climate doesn’t leave room for much hope.

But the fact is we do have a choice. Which means that we can, if we are careful, embrace new technologies without being surprised, when the future finally arrives, at where we end up.


Andrew Burt is managing partner at bnh.ai, a boutique law firm focused on artificial intelligence and analytics, and chief legal officer at Immuta.
