CISA in Context: Privacy Protections and the Portal

Susan Hennessey
Monday, January 11, 2016, 2:23 PM

Published by The Lawfare Institute
in Cooperation With
Brookings

When CISA passed the Senate back in October, many commentators warned of the panoply of ways in which a hypothetical DHS information-sharing portal would function to allow companies to collect and then funnel citizens’ private information directly into the hands of the most fearsome elements of the federal government:

  • Mark Jaycox of the EFF warned that the “incentive and the framework [CISA] creates is for companies to quickly and massively collect user information and ship it to the government.”
  • DCInno wrote: “Once data is collected/submitted, it is ‘scrubbed’ of personal/user information and condensed prior to arriving before the intelligence agencies—CIA, NSA and FBI. Who is responsible for and to what degree the data should be scrubbed remains a major question carried by vague language and imprecise definitions within the bill.” The article goes on to speculate that “[t]here's almost no standard for formatting shared data under the regulation, today, and so simple issues like providing IPs could become complicated when thousands of participants are submitting similar content but in different ways.”
  • Marcy Wheeler knew so much about the information sharing technologies that she slammed even SSCI ranking member Dianne Feinstein for using language that “made it clear [Feinstein] doesn’t really understand how the information sharing portal works.” According to Wheeler, Feinstein “said something like, ‘Once cyber information enters the portal it will move at machine speed to other federal agencies,’ as if a conveyor belt will carry information from DHS to FBI.”

At long last DHS has publicly unveiled its new CISA-sanctioned, civil-liberties-intruding, all-your-personal-data-grabbing, information-sharing uber vacuum. Well, actually, it did so three months ago, right around the time these commentators were speculating about what the system would look like. Yet even as the cleverly labeled OmniCISA passed into law last month, virtually none of the subsequent commentary took account of the small but important fact that the DHS information sharing portal has been up and running for months.

I’ve previously discussed, as have other commentators, how CISA impacts existing laws governing the ability to monitor an information system. Now, let’s turn to the statute’s information sharing provisions—and let’s do so in the context of the actual portal, not one we imagine. I’ll take up the issues in a series of three posts. First up: privacy protections.

Fair warning: This gets weedy and technical both legally and in terms of jargon, acronyms, and the specifications of how this portal works. But if we’re going to have a debate about government overreach in the context of CISA and its portal, it is important to take a real look at the publicly available portal interface. And before speculating about the consequences of the law, it’s necessary to determine exactly what privacy interests are really at stake and whether the protections already in place are adequate—and if not, where they fall short.

DHS’s National Protection and Programs Directorate (NPPD) stood up the Automated Indicator Sharing (AIS) system back in October pursuant to PPD-21 and EO 13636. The portal authorized in CISA, as I say, isn’t hypothetical. It exists. It is called AIS and it is run through the National Cybersecurity and Communications Integration Center (NCCIC). According to DHS, the goal of the AIS portal is to:

achieve near real-time sharing of cyber threat indicators by enabling DHS’s National Cybersecurity and Communications Integration Center (NCCIC) to (1) receive indicators from the private sector; (2) remove unnecessary personally identifiable information and other sensitive information; and (3) disseminate the indicators to, as appropriate, other Government departments and agencies and the private sector.

To understand CISA, it is critical to understand the AIS system—a subject that has received almost no attention from anyone in the media or commentariat.

Helpfully, DHS has released its Privacy Impact Assessment of the AIS portal which provides important technical and structural context. To summarize, the AIS portal ingests and disseminates indicators using—acronym alert!—the Structured Threat Information eXchange (STIX) and Trusted Automated eXchange of Indicator Information (TAXII). Generally speaking, STIX is a standardized language for reporting threat information and TAXII is a standardized method of communicating that information. The technology has many interesting elements worth exploring, but the critical point for legal and privacy analysis is that by setting the STIX TAXII fields in the portal, DHS controls exactly which information can be submitted to the government. If an entity attempts to share information not within the designated portal fields, the data is automatically deleted before reaching DHS. Think of an online form for, say, making a flight reservation: if you try to enter your favorite animal in the credit card field, it just doesn’t work.
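The flight-reservation analogy can be sketched as a simple schema filter. To be clear, this is an illustrative sketch only: the field names below are hypothetical and are not the actual AIS STIX/TAXII schema, but the mechanism—anything outside the designated fields is dropped before ingestion—works the same way.

```python
# Illustrative sketch of field whitelisting: only designated fields survive
# ingestion. Field names here are hypothetical, not the real AIS schema.

ALLOWED_FIELDS = {"indicator_type", "ip_address", "observed_at", "tlp_marking"}

def ingest(submission: dict) -> dict:
    """Keep only designated fields; anything else is dropped before it reaches DHS."""
    return {k: v for k, v in submission.items() if k in ALLOWED_FIELDS}

submitted = {
    "indicator_type": "ipv4-addr",
    "ip_address": "198.51.100.7",
    "favorite_animal": "otter",  # not a designated field -- dropped on arrival
}

print(ingest(submitted))
# {'indicator_type': 'ipv4-addr', 'ip_address': '198.51.100.7'}
```

The design point is that the filter runs at the portal boundary: a submitter cannot opt out of it, which is why the choice of fields itself functions as a privacy control.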

CISA’s codified privacy protections only really make sense if one considers the provisions as they relate to the functional system. Prior to any sharing authorized under the Act—private-private, private-government, government-government, government-private—the sharing entity must either manually remove or employ a technical capability to remove any information “not directly related to a cybersecurity threat” that the entity “knows at the time of sharing to be personal information of a specific individual or information that identifies a specific individual.”
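A "technical capability" of the kind the Act contemplates can be approximated in a few lines. This is a hedged sketch under assumed field names: which fields count as known personal information, and which are directly related to the threat, are exactly the judgment calls the statute leaves to the sharing entity.

```python
# Hypothetical sketch of an automated pre-sharing scrub under the Act's
# standard: remove fields the sharer knows to be personal information,
# unless that information is directly related to the threat.
# All field names are illustrative.

KNOWN_PERSONAL_FIELDS = {"employee_name", "victim_email", "home_address"}

def scrub(indicator: dict, directly_related: set) -> dict:
    """Drop known-personal fields that are not directly related to the threat."""
    return {
        k: v for k, v in indicator.items()
        if k not in KNOWN_PERSONAL_FIELDS or k in directly_related
    }

raw = {
    "ip_address": "203.0.113.9",
    "victim_email": "phish-target@example.com",  # the lure address: threat-related
    "employee_name": "J. Doe",                   # personal, not threat-related
}

print(scrub(raw, directly_related={"victim_email"}))
# {'ip_address': '203.0.113.9', 'victim_email': 'phish-target@example.com'}
```

Note how the "directly related" exception does real work here: the same email address can be extraneous PII in one indicator and the core of the threat in another.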

This privacy protection reflects an interesting and careful balance that is worth unpacking a bit. Let’s consider three distinct pieces of the legislative language:

  • “Not directly related to a cybersecurity threat”

Other versions and proposed amendments to the bill unsuccessfully sought to tighten this constraint to require entities to remove any personal information not “necessary to describe or identify” a cybersecurity threat. But despite the language of the final bill, the AIS Terms of Use actually adopt the tighter requirement: participants are obligated to remove “any information that can be used to identify specific persons reasonably believed to be unnecessary to describe or identify the cyber threat.” The result is a dual layer of obligations: companies must comply with both the law and the portal terms.

From a practical standpoint, the government does not want any information—PII or otherwise—that is not necessary to describe or identify a threat. Such information is operationally useless and costly to store and properly handle. But codifying terms like “unnecessary to describe or identify” raises legal concerns. For example, in some circumstances, DHS may deem the type of victim sector (financial services, critical infrastructure, media outlets) important information directly related to countering a cybersecurity threat. But this type of descriptive information might be excluded under a strict or narrow reading of necessity; the sector is not strictly required to understand the technical elements of a threat. Thus, the final legislative language sets the baseline—information must be directly related—while DHS imposes additional safeguards through its Terms of Use and by setting technical parameters that minimize the risk of ingesting PII that is not itself a component of the threat indicator.

  • “Knows at the time of sharing”

Another source of controversy during the legislative drafting surrounded the knowledge requirement for removing personal information not related to the cybersecurity threat. The knowledge requirement sets the standard of care an entity must undertake to be entitled to liability protection. Companies are only obligated to remove that which they actually know at the time of sharing to be personal information not directly related to the threat; actual knowledge represents the least onerous standard. This presents a genuine risk that companies might actively seek to avoid learning about improper personal information in order to avoid triggering obligations to further review and remove information prior to sharing. The government attempts to mitigate this “head-in-the-sand” risk through technical mechanisms—only ingesting information DHS deems unlikely to contain extraneous personal information and providing guidance to companies as required by the statute—and the Terms of Use set the standard at “reasonable belief,” which is slightly more stringent.

Ultimately, this resolves to the trade-offs inherent in any voluntary information sharing bill. Under the standard of actual knowledge, companies are able to configure a technical capability to remove any high-risk fields based on what they are actually able to know and thus obtain liability protection. Inserting a more ambiguous standard weakens the strength of liability protection. And altering the calculus against robust liability protection could result in companies deciding to forgo the risk by not sharing at all.

  • “Personal information of a specific individual or information that identifies a specific individual”

The trade-offs in the legislative language do not exclusively cut against privacy, however. The obligation of what information must be removed represents a subtle but important privacy protection. Traditionally, privacy obligations extend only to personally identifiable information (PII). The AIS terms reflect the PII construct (those terms were released prior to CISA, and therefore will be updated to conform to any higher bars the law imposes). But CISA imposes a broader protection: entities must remove both PII and also “information of a specific individual,” which encompasses some set of information broader than PII. CISA recognizes a privacy interest in all personal information, even that which cannot be used to identify an individual.

The operation of AIS also reveals some interesting structural mechanisms which are intended to protect privacy while preserving functionality.

Under CISA, DHS is obligated to share cyber threat indicators with appropriate federal entities—seven designated agencies plus their component parts—“in an automated manner.” The automated sharing must be in real-time—meaning DHS applies only automated privacy and accuracy scrubs—unless the heads of the designated entities unanimously agree, and the modification is made in advance and uniformly applied. The provision reflects the reality that not all privacy protections can be automated and certain source fields require human review to ensure that personal information is not inappropriately shared.
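The real-time-unless-agreed model can be sketched as a routing decision. This is an assumption-laden illustration, not DHS's actual pipeline: the premise is that some agreed-upon set of fields cannot be reliably scrubbed by machine and is therefore diverted to human review, while everything else disseminates at machine speed.

```python
# Hypothetical sketch of the dissemination rule: indicators whose fields are
# all covered by automated scrubs go out in real time; fields in an
# agreed-in-advance, uniformly applied review set are held for human review.
# Field names are illustrative.

AGREED_HUMAN_REVIEW_FIELDS = {"free_text_description"}  # set in advance, applied uniformly

def route(indicator: dict) -> str:
    """Return the dissemination path for a scrubbed indicator."""
    if any(field in AGREED_HUMAN_REVIEW_FIELDS for field in indicator):
        return "human_review_queue"
    return "real_time_dissemination"

print(route({"ip_address": "198.51.100.1"}))          # real_time_dissemination
print(route({"free_text_description": "seen in..."}))  # human_review_queue
```

The key structural feature is that the review set is fixed in advance and applied uniformly, rather than decided indicator-by-indicator, which is what the unanimity requirement in the statute enforces.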

But just as importantly, the provision also reflects the cultural perception—and probable reality—that DHS is more privacy-minded and privacy-protective than other elements of the executive branch. At first read, this language would appear to give other federal agencies, including DOD and ODNI, veto power over any privacy protections DHS is unable to automate in real-time. That may be true, but under the statute and in practice DHS controls AIS; specifically, it sets the STIX TAXII fields. Therefore, DHS holds the ultimate trump card: if it believes additional privacy protections that delay real-time receipt are required and is unable to convince fellow federal entities, then DHS is empowered to simply refuse to take in the information in the first place. This operates as a rather elegant check and balance system. DHS cannot arbitrarily impose delays, because it must obtain the consent of other agencies. If other agencies are not reasonable, DHS can cut off the information. But DHS must be judicious in exercising that option, because it also loses the value of the data in question.

The broad point is that the technical parameters of the system function as a kind of uncodified privacy protection and set the stage for almost every other element of information sharing included in CISA. Furthermore, the balance of control among executive agencies is more complex than a glancing reading of the text might suggest. Ultimately, the privacy community and civil libertarians may well conclude that additional protections are still warranted. But those critiques should be anchored to an underlying functional reality.

Later this week, I’ll turn to how the structure of the portal interacts with the voluntary information sharing model, that “other” portal, and the government use provisions. Stay tuned.

[Update: Emptywheel has a rebuttal regarding the evolving purposes of the data sharing portal.]


Susan Hennessey was the Executive Editor of Lawfare and General Counsel of the Lawfare Institute. She was a Brookings Fellow in National Security Law. Prior to joining Brookings, Ms. Hennessey was an attorney in the Office of General Counsel of the National Security Agency. She is a graduate of Harvard Law School and the University of California, Los Angeles.