
A Camera in My Shower? Fine

Susan Hennessey
Wednesday, May 4, 2016, 9:02 AM

During the recent panel event at the Hoover Institution on using data to protect privacy, I had an interesting exchange with Laura Donohue of Georgetown Law, which I’ve been mulling over ever since.

I had made the argument that, in discussing information sharing and privacy, it is important to differentiate between different types of data. There are a number of areas in which privacy and security are mutually reinforcing, as a genuine operational matter and not just as a linguistic framing. In particular, I argued, where we can automate collection and processing of data, technology can increasingly promote both privacy and security.

Donohue disagreed, and she had a pretty good line in response:

What I want to address first is this idea that if no person looks at it, there’s no privacy violation. If we’re in the shower and somebody puts a camera in the bathroom and says “I’m just going to record what happens in the bathroom but I promise I’m not going to look at it,” is there a privacy violation? And I think most of us would say “yeah, that’s a violation of my privacy to put a camera in my bedroom, in my bathroom, in my daily conversations with my spouse, with my children.”

That’s a privacy violation because privacy is not from you, government perspective, privacy is from my perspective. Katz got that right, privacy is from our perspective. When we have a reasonable and objective expectation of privacy if the government collects that information, privacy is violated. So I just reject this “well if a machine reads it, not us, and if we promise not to read it unless it’s really important and then we’ll do it secretly in a highly-classified environment then there’s no privacy violation”; it’s just not true.

The “camera in the shower” hypothetical had me flummoxed for a moment. My immediate reaction was that Donohue was right: I don’t want a camera in my shower either, not even an automated one. Whatever instinctual sense I have for the zone of privacy, my shower certainly qualifies.

But as I’ve considered the matter more, I realize that I actually do have a few words in defense of automation as reinforcing privacy—even in defense of automated shower cameras.

The point I was making on the panel, I should pause to add, was in regard to information sharing: private-sector sharing of information with the federal government pursuant to the Cybersecurity Information Sharing Act of 2015 (CISA). My point was that certain data implicates exceedingly low-grade privacy interests—for example, identifying whether the user is on a Mac or a PC—while still retaining some security value, particularly when aggregated. But the value of that information, particularly data of more modest value, tends to be perishable. It can be acted on only when it is shared very rapidly, in near-real time; the half-life of its value can be measured in fractions of a second.

Just as not all information is of equal security value, privacy interests also vary depending on data type. Where privacy interests are high, we must balance the privacy benefits of automation (reducing human interaction with sensitive data) against the limits of current screening technology. Human review can incorporate much more tailored and sophisticated screening and redaction, but the review itself represents a type of privacy invasion. And it is hard to conduct meaningful review at a pace that is operationally responsive to modern cybersecurity threats. In this context, the tension is not necessarily between privacy and security, but between privacy and speed. As automated mechanisms become more sophisticated, I posited, more finely grained review may become possible without human involvement, while also speeding operational response. I suggested that such an approach would maximize both privacy and security.

This point, shall we say, got washed away in Donohue’s shower hypothetical.

But here’s the thing: Before deciding if a camera in my shower is, in fact, an unacceptable violation of privacy, I would need more information. As an initial matter, I’d want to know why the camera is there. Do I have some medical condition that requires constant visual monitoring? Does the government have a warrant after showing a judge probable cause that I am meeting in my shower with members of the drug cartel I run?

For purposes of analysis, let’s take the least salacious reason why someone might elect to put a camera in my shower. Imagine I have a severe seizure disorder, like epilepsy, and that continuous monitoring allows me to live a more routine and independent life, while still taking safety precautions while alone and in water. Apropos of this example, the Epilepsy Foundation notes:

Activities such as bathing and cooking place the person with seizures at risk for injury. Making simple changes in household activities or your environment may create a safer home…. Bathrooms, which have mirrors, sinks, shower doors, bathtubs, and hard floors, can be risky for people with uncontrolled seizures. Bathroom activities are generally private matters and balancing the need for both privacy and safety is important for people with seizures.

Epileptics, of course, can make their own decisions. But people in nursing homes with dementia cannot always do so. So imagine I have some other impairment as well—the point being I did not personally elect to install the camera. Is it really a per se privacy violation to have a monitored shower in such a situation?

My point is that collection—even in the most extreme hypotheticals—is not, in and of itself, an unacceptable privacy violation unless you make a threshold judgment that the collection is not justified. Is it okay to take photos underneath a stranger’s clothes? Certainly not—unless you’re a TSA officer, in which case you can do it thousands of times a day. Privacy is a matter of interests, knowledge, purpose, and control. There are a great many circumstances in which I allow or voluntarily elect to share deeply personal data with private companies or the government. And there are some situations in which my choice and consent are not the operative factors.

But this is not to say my privacy interests end at that initial choice: the decision to collect or not. The second part of Donohue’s comment gets at a slightly different but important notion. Privacy is about more than mere collection—though certainly putting a camera in someone's shower without a warrant would violate every conceivable formulation of the Fourth Amendment. Collection opens the door to the risk of abuse and to actual abuse. And mere awareness of collection—either specific or suspected awareness—creates insecurity if we are afraid there is some possibility for abuse.

So let’s consider the question of automation and that camera in my shower with a caveat that Donohue left out of her hypothetical: Let’s assume the camera is there for some overpoweringly good reason, some reason we agree justifies its presence. If you don’t believe that situation exists, no degree of automation will create it. But the automation question is analytically distinct from the question of whether the camera should be there at all. It is whether, given a certain form of surveillance, automation can mitigate the privacy consequences of that surveillance. I think it can. And the camera in the shower is actually an excellent illustration of why.

The camera, in and of itself, is an inanimate object, no more threatening in my shower than the showerhead. It threatens my privacy for two distinct reasons: first, because it can take images that some human may someday see, and second, because it takes images that some computer may act on in a fashion I wouldn’t like. Control those risks completely and a camera is no different from a brick. The automation question is really the question of how completely and with what level of confidence you can control those risks.

Let me agree with Donohue on one key point: it certainly would not be enough for me to rely on the government’s or a company’s assurances that they “promise not to” look at the camera’s feed. To be secure—and a personal sense of security is bedrock to privacy—I need more than “trust us.” I would want to know how much data the camera is recording and how long such data is retained. What images are visible in the frame? Can the camera be remotely controlled? Is there any possibility of the video feed being remotely accessed? I would not be mollified by general assurances from a private company that a shower video was monitored only by well-intentioned employees watching for proper purposes. With intimate data, privacy is best served by strict, transparent, and regulated controls that minimize human interaction.

But what if that shower video were subject to continuous, automated checks and only accessible in the event particular criteria were met—perhaps a panic button or a visual or audio indication of a fall? And what if, instead of allowing humans to access the video images, the criteria merely triggered an alert to individuals elsewhere in the home or contacted first responders? Or automated controls could ensure that only my designated physicians were ever able to access the actual video images, subject to patient consent. And automatic destruction of the footage after a set retention period might further formalize privacy by design.
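To make that architecture a bit more concrete, here is a minimal sketch in Python of how criteria-gated access might be wired together. It is purely illustrative: every class, function, and parameter name (Clip, ShowerMonitor, RETENTION, and so on) is hypothetical, the "fall detection" is just a flag assumed to come from an on-device classifier, and nothing here describes any real product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

RETENTION = timedelta(hours=24)  # assumed retention window; footage is destroyed afterward


@dataclass
class Clip:
    captured_at: datetime
    frames: bytes          # would be encrypted at rest in a real system
    fall_detected: bool    # set by an on-device classifier, not by a human viewer
    panic_pressed: bool


@dataclass
class ShowerMonitor:
    clips: list = field(default_factory=list)
    consented_physicians: set = field(default_factory=set)

    def ingest(self, clip: Clip) -> None:
        """Automated path: no human sees anything; only an alert leaves the device."""
        self.clips.append(clip)
        if clip.panic_pressed or clip.fall_detected:
            self.notify_responders(clip.captured_at)   # send an alert, not video

    def notify_responders(self, when: datetime) -> None:
        print(f"ALERT: possible fall at {when.isoformat()} -- dispatch help")

    def request_video(self, physician_id: str, patient_consent: bool) -> list:
        """Human path: only a consented, designated physician ever sees frames."""
        if not patient_consent or physician_id not in self.consented_physicians:
            raise PermissionError("access denied: consent and designation required")
        return self.clips

    def purge_expired(self, now: datetime) -> None:
        """Privacy by design: footage self-destructs after the retention window."""
        self.clips = [c for c in self.clips if now - c.captured_at < RETENTION]
```

The design choice the sketch tries to capture is simply that the default path out of the device is an alert, not an image, and that the human-viewable path is the narrow, consent-gated exception.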

The point here is not that we should all go and install shower cameras. It’s that in assessing privacy violations, it is not enough merely to know that collection is taking place: we have to know why, who is doing it, and subject to what rules.

As a general matter, automation of privacy protections serves a number of important functions. It minimizes human interaction with our data, which in turn necessarily minimizes the risk of abuse. And incorporating standardization and built-in compliance mechanisms can create rules that cannot be broken and can rapidly detect anomalous activity indicating that someone may be trying to break them.

As I noted above, when data is collected and screened in an automated fashion, there are two basic privacy threats. First, there is the risk that the data, having been collected, will be used for a bad automated purpose. A camera installed for safety reasons should not be used to determine my dress size and send me ads.

Second, there is the risk that the data will be seen by someone. This might happen if the rules are not followed—if an employee improperly accesses the data—or if the data is kept insufficiently secure and someone steals it.

Along both of these axes, automation offers potentially increased protection. Use restrictions can be built into the type of data collected in the first instance—tailored to that which is least invasive to achieve the legitimate purpose. And robust automatic compliance checks—inalterable logs of what data is accessed, when, and by whom—go beyond trust, to verification.
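As one illustration of what an "inalterable log" could look like in practice, here is a short hash-chained audit-log sketch in Python. The design and every name in it are mine, offered only as an assumption-laden example rather than a description of any particular system, but it shows how each access record can be made tamper-evident: altering or deleting an earlier entry breaks the chain.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    """Append-only log in which each entry's hash covers the previous entry's hash,
    so any after-the-fact edit or deletion is detectable on verification."""

    def __init__(self):
        self.entries = []

    def record_access(self, who: str, what: str, purpose: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {
            "who": who,
            "what": what,
            "purpose": purpose,
            "when": datetime.now(timezone.utc).isoformat(),
            "prev": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered or removed."""
        prev = "GENESIS"
        for e in self.entries:
            body = {k: e[k] for k in ("who", "what", "purpose", "when", "prev")}
            if e["prev"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The point of the example is the "beyond trust, to verification" idea: an auditor does not have to take the operator's word that access rules were followed, because the log itself can be checked.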

Think about it this way: If you were satisfied that shower surveillance were taking place for a wholly legitimate reason, and satisfied as well both that no human would ever see the fruits of the surveillance except as intended and that the camera would function only as intended, would you be less afraid of that camera?

Like it or not, we are in a world in which lots of once-unthinkable surveillance technologies are closer to commercial realities than to scary academic privacy hypotheticals. As collection necessarily increases, we have to defend privacy by design and by restricting how information is used. But we also have to defend it by means of strong security practices.

Scary hypotheticals may not be hypothetical for long. But thoughtful engagement on the practical realities of data and privacy may reveal them to be a bit less ominous.


Susan Hennessey was the Executive Editor of Lawfare and General Counsel of the Lawfare Institute. She was a Brookings Fellow in National Security Law. Prior to joining Brookings, Ms. Hennessey was an attorney in the Office of General Counsel of the National Security Agency. She is a graduate of Harvard Law School and the University of California, Los Angeles.
