The Lawfare Podcast: Jim Dempsey and John Carlin on U.S. Cybersecurity Law and Policy: There’s a Lot Going On
Published by The Lawfare Institute
There is a lot to keep up with in U.S. cybersecurity law and policy these days. To talk about the current regulatory landscape and the progression of the DOJ’s strategy relating to takedown and disruption efforts, Lawfare Senior Editor Stephanie Pell sat down with Jim Dempsey, Senior Policy Advisor at the Stanford Program on Geopolitics, Technology, and Governance, and John Carlin, Partner at Paul Weiss. They talked about the SEC’s cyber disclosure rule, the new executive order focused on preventing access to Americans’ bulk sensitive personal data, the LockBit and Volt Typhoon disruption efforts, and more.
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Audio Excerpt]
John Carlin
A lot of the impact is actually competing, so one regulatory regime may be encouraging you, for instance, to report cyber incidents, when you know almost nothing, privately to those in government that can do something about it, like the FBI or NSA or Department of Homeland Security. And they don't really want it public at that point because at that point that actually helps the bad guy. And they also don't want you to face liability, so that they can encourage you to report voluntarily. And at the same time, you have other parts of the regime whose focus is all about reporting it publicly, be that the SEC or be that the construct of some of the regimes that are around personally identifiable information or focused more on privacy. So there are some real competing currents right now.
[Main Podcast]
Stephanie Pell
I'm Stephanie Pell, Senior Editor at Lawfare, and this is the Lawfare Podcast, April 8th, 2024. There is a lot to keep up with in U.S. cybersecurity law and policy these days. To talk about the current regulatory landscape and the progression of the DOJ's strategy relating to takedown and disruption efforts, I sat down with Jim Dempsey, Senior Policy Advisor at the Stanford Program on Geopolitics, Technology, and Governance, and John Carlin, Partner at Paul Weiss. The topics we discussed include the SEC's cyber disclosure rule, the new executive order focused on preventing access to Americans' bulk sensitive personal data, and the LockBit and Volt Typhoon disruption efforts.
It's the Lawfare Podcast, April 8th: Jim Dempsey and John Carlin on U.S. Cybersecurity Law and Policy: There's A Lot Going On.
There is a lot going on in U.S. cybersecurity law and policy, and many of these relatively new activities are addressed in the second edition of your book, “Cybersecurity Law Fundamentals,” which has just been published by the International Association of Privacy Professionals. There are, for example, a host of different regulatory activities from federal agencies, along with various operational activities pertaining to ransomware that are part of the new material in the new edition of your book. I want to start by talking about some of the most significant regulatory actions we're seeing. But before we delve into the specifics, Jim, are there any overarching themes or principles that provide insight into these regulatory actions and how they support the objectives of the Biden administration's national cybersecurity strategy?
Jim Dempsey
Stephanie, I would say yes and no. Some of the major actions that we're going to be talking about, like the SEC disclosure rule and the incident reporting requirements for critical infrastructure, were not addressed in the strategy at all. It just didn't touch on those disclosure and reporting issues. Other initiatives we've seen recently, such as the rulemaking to establish essentially a know-your-customer system for infrastructure as a service, that was called for in the strategy and that had been in progress for some time, even predating the March 2023 strategy. And certainly the administration's ongoing efforts to regulate critical infrastructure sector by sector, that was a major theme of the strategy and was one of several elements of the strategy, which distinguished it from all the prior strategies across administrations.
Stephanie Pell
So let's start then with the Securities and Exchange Commission's relatively new rule on disclosure that became effective in December of 2023. John, what is the purpose of this rule and what does it require?
John Carlin
I think the principal purpose, and the reason that's been articulated by the SEC for the rule, was their sense that from the perspective of CEOs, general counsels, and board members of publicly traded companies, one of, if not the, top threat that was keeping them up at night was cybersecurity. And yet, they weren't seeing a corresponding number of public filings through what's called an 8-K, an approved way of telling your investors that they'd actually had material cyber incidents.
So they did something that was interesting. After an extensive rulemaking period where they floated some controversial ideas that received a lot of negative feedback in the rulemaking process, what they ended up passing and promulgating as the new rule was a rule that requires you to determine whether or not an incident is material to a publicly traded company. And then, if you determine that the incident is material, you have four days to make that disclosure in an approved way—that's the so-called 8-K—to your shareholders.
And this is interesting because, for those who are more cyber or national security focused and don't pay that much attention to public reporting rules, that actually already is the rule for anything that's material to a public company. So it raises the question of why they created a rule specific to cyber that says, essentially, for a cyber incident you need to determine whether it's material to the company and, if it is, disclose it in four days. And I think their reasoning, or the rationale that they've put out for that, is that by putting it out separately from the general rules about materiality, as a cybersecurity-specific rule, they were essentially sounding an alarm bell to publicly traded companies: you really need to focus internally on how you make that determination as to whether or not something is material. And in particular, they're focused on how you make sure that the technical side of the house is talking to, essentially, the business side of the house that would normally be thinking about risks to a public company, like financial risks and other risks. They were concerned that there wasn't proper governance to do that type of internal decision-making.
Since the rule has passed, there's been something somewhat unusual in this space: a number of publicly traded companies have made disclosures using the method that tells all your investors at once, the one designed for when you have a material incident. So it hits all the benchmarks of what it would look like if it were material. But they've said in their disclosure that at this time they don't believe it to be material and they're still investigating. That's a new practice, and we haven't seen yet how the SEC is going to respond to it, but I think it reflects a little bit of the confusion from companies about how the SEC is going to enforce in this new space.
And behind that is a particularly controversial enforcement action that they already took in the case of SolarWinds, where for the first time they brought an enforcement action not just against the company, but against the Chief Information Security Officer. That enforcement action actually was pre-rule, but I think it's added to the consternation of companies as they're trying to figure out how to comply with the new rule.
Stephanie Pell
So taking all of that into consideration, do you think that the rule as it was finally issued is fair and has the elements necessary to achieve its purpose?
John Carlin
There are a couple parts to the rule, but as to the part that I just discussed, I think there's a lot of concern about the four-day clock. As long as that clock does not run until you make the determination that it's material, though, and as long as the SEC is deferential, I think it can be fair, because that's actually a rather complicated determination—it would be easy if it were quantitative. To use an example from the financial perspective, let's say you put out a projection of how your company is going to do the next quarter. And then you have some incident and you realize, boy, we're going to be 10 percent off on our revenue. Or let's use an even bigger figure—50 percent off. People would say, that's material, you need to inform your investors. Cyber is a little more complicated, and the SEC has given guidance that says it is not just a quantitative figure. You also need to consider reputationally how people are going to react to the fact that you've had a cyber incident. That is really difficult to do.
So, I think as to the fairness of it, it'll be determined by how they choose to enforce the new rule. But on its face, I don't think it's unfair to say you need to make a determination that something's material, and if it is material, get it out to your shareholders. The other issue would be, what is it that you have to get out to your shareholders? I think it's a fair reading of the new rule that they're not requiring you to give the type of detail that actually would help the bad guy; that is, so much detail about the vulnerability that the next crook or nation state has a better chance of getting into your company. Rather, just enough information that a shareholder can make a reasonable investment decision. So in that sense, I don't think it's, again, that different from the general understanding of how the SEC treats materiality across the board, not just for cyber.
Stephanie Pell
Jim, do you have thoughts on this as well?
Jim Dempsey
I agree with everything John said, and I see where the SEC was coming from on this: this is another factor which the investing public needs to know. But nobody should be misled here. This rule will probably have zero impact in terms of actually improving cybersecurity practices of publicly traded companies. Maybe it will have an indirect positive impact on cybersecurity, in that all the attention the rule has generated will prompt any board of directors that hasn't yet paid attention to cybersecurity to do so, which they're obligated to do. Part of the role of the board of directors is to ask questions about and oversee the cybersecurity practices of the companies on whose boards they serve. That message should have come through loud and clear already, but any board in the country that hadn't gotten the message before should have gotten it now. That obligation of boards of directors and senior management to pay attention to cybersecurity, though, arises from general principles of corporate law, separate from the laws administered by the SEC.
For publicly traded companies, the SEC's only real tool is its set of disclosure requirements. So, basically, they're taking the disclosure requirements that, as John said, have always applied to all kinds of other factors that affect the financial state of the company, and saying that cybersecurity breaches have to be part of that.
John Carlin
It is one of the confusing areas: so many different regulatory agencies, so many different authorities. And Jim raises a good point, which is that the SEC itself would tell you, “That's not our mission. Our mission is not to make public companies more secure from cyber threats. Our mission is to make sure that investors receive, in a fair and reliable manner, the information they need in order to protect the market so they can make reasonable investment decisions.” And flowing from that might be a certain price impact on the value of a company that would come from how much it has invested in cybersecurity or not. And that might indirectly drive cybersecurity practices. But if the conclusion of the market was that it wasn't important, it also might not. So that's their mission, as compared to, say, the Department of Homeland Security and some of its new rules, where its mission really is to protect, at least, our critical infrastructure from cyber-attack.
And one of the things I know Jim and I, when writing our book on cybersecurity fundamentals that just came out, were focused on is what a dizzying array of different authorities you have to think about as a practitioner in this space: state and federal, different parts of the federal government, things being driven by civil suits. And a lot of the impact is actually competing. One regulatory regime may be encouraging you, for instance, to report cyber incidents, when you know almost nothing, privately to those in government that can do something about it, like the FBI or NSA or Department of Homeland Security. And they don't really want it public at that point, because at that point, that actually helps the bad guy. And they also don't want you to face liability, so that they can encourage you to report voluntarily. And at the same time, you have other parts of the regime whose focus is all about reporting it publicly, be that the SEC or be that the construct of some of the regimes that are around personally identifiable information or focused more on privacy. So there are some real competing currents right now.
Stephanie Pell
So let's focus now on a new regulatory regime that is on the horizon, and this is a new executive order and rulemaking that is focused on preventing access to Americans’ bulk personal data, and also United States government-related data by countries of concern. This executive order was issued on February 28th, and DOJ issued an Advance Notice of Proposed Rulemaking shortly after the EO came out. John, can you talk about the problem that the EO is intended to address?
John Carlin
Yeah, it's interesting and it relates to a general shift you've seen over the last 10 years or so, maybe longer. And it's one of the few areas where you've actually seen continuity. People don't usually think of the Obama administration, the Trump administration, and the Biden administration as having a through line, but they do on this issue, which would be, hey, look, if we're seeing Chinese state actors in particular use cyber-enabled means to hack and steal vast bulk quantities of information, then we see that's a strategic priority for them, and it's something that we want to protect against here.
How do our other rules protect against it? One of the changes that you've seen take place is in the context of so-called CFIUS, the Committee on Foreign Investment in the United States. In the CFIUS context, the United States has a national security regulatory power, through the president, to say that a change of ownership in a U.S. business could pose a national security risk. It used to be a sleepy area, and it dealt with things like a foreign country trying to buy a company that produces missiles. Okay, we probably don't want, for instance, Russia owning a missile company inside the United States; that's a national security risk. And so people would know, in that instance, I need to go before this committee, and the committee may not approve the transaction or, if it does, it may take steps to mitigate the risk.
When you look at it through the lens of the threat, we see that stealing bulk information is something that our nation state adversaries are trying to do. How can we protect against it? With CFIUS, then, you see (and there's public reporting about this) what would have just shocked people beforehand: a Chinese company was trying to acquire Grindr, which is a dating site, and that raised CFIUS concerns. So a far cry from missiles when you're talking about dating sites, but the reason why is because that's the type of personal information that could be used for things like blackmail and other purposes.
Now, CFIUS was an existing regulatory regime, and what they realized is that if you tried to get that dataset by acquiring a U.S. business (a change of ownership to foreign control), you could use CFIUS to block or mitigate it. But if that same U.S. company sold the data, either directly to a foreign company or through a data broker, it was perfectly legal, and there was nothing that could be done to mitigate the national security risk.
So this new regime was designed to plug that hole and say that in certain instances there is going to be a regulatory regime. And they specifically called out China when discussing the rollout: “You are not going to be able to sell certain bulk data sets to China.” For now, it's in a proposed rule. I think there are a lot of questions about how it will work in practice that remain to be answered.
But Justice has given a speech on it to say, “Here are some things you can think about now,” for companies. One is, know your data: know what type of information you're collecting and how you're protecting it. Know where it's going; so, review the agreements you have in place to sell it or provide it to others. Know who has access to it. This is an interesting, difficult part of the rule, because it's not just about conscious sales. You might also have non-U.S. consultants or investors, people who have access to the data even though you're not actually selling it to them. And in their speech they called out specifically China, Russia, and Iran in considering the implications, and then obviously the sales. So right now they're doing a listening tour as they try to figure out how to implement it. But I think this is another real sign of the lawfare regime, to cite the podcast that we're currently in.
Stephanie Pell
Consistent with your discussion of the national security implications and why the CFIUS process exists, this executive order is also leveraging a U.S. national security authority, IEEPA, to address this problem. Jim, can you talk more about IEEPA, and also a bit more about the intended scope of the executive order? What might fall in and outside of its scope?
Jim Dempsey
So IEEPA is the International Emergency Economic Powers Act, first adopted in 1977. It’s separate, actually, from the Defense Production Act, which is the authority for the CFIUS process that John was describing, but the two have long overlapped. IEEPA authorizes the president to regulate or prohibit any acquisition, transfer, importation, exportation, or transactions by any person subject to U.S. jurisdiction involving any property in which a foreign country or foreign national has an interest. Very, very broad. It requires the president to first declare that there is, in fact, a national emergency. And then, once that emergency is declared, then the president can block transactions.
As John said, this is a long-running effort, consistent across administrations. Presidents going back to President Obama have used IEEPA as a tool of U.S. cybersecurity policy. IEEPA served as the basis for sanctions against ransomware gangs: they have been listed as sanctioned entities, and it is therefore illegal to make ransomware payments to them. President Trump invoked IEEPA when he tried to ban TikTok and WeChat. He ran into a huge problem in that IEEPA includes the so-called Berman Amendment, or Berman Amendments, which say that the power of the president does not extend to any personal communications or any export or import of information or informational material. And the import of TikTok was clearly informational material. The courts blocked President Trump's efforts.
But separate—related, but separate—from the TikTok and WeChat orders, President Trump had issued Executive Order 13873, declaring a general national emergency with respect to the nation's information and communications technology and services supply chain. And this included everything from Huawei switches to data transactions. President Trump set in motion a regulatory process there. It was actually President Trump, or President Trump's Commerce Department, which designated China, Russia, Iran, and three other relatively small countries, including North Korea, which is significant in this regard, as the countries that were at stake here.
Now we see the Biden administration taking the next step. They are really following through on what President Trump started in a big way in 2019 with Executive Order 13873, and on what arguably even President Obama foreshadowed with his invocation of IEEPA to address cybersecurity more broadly.
I do think that there may be a problem here; I'd be interested to hear what John says. I do think that data is information, and information cannot be regulated under IEEPA. The Biden administration was trying, I think, to thread the needle here by being very specific on certain categories of data: genomic data, biometric identifiers, geolocation data, personal health data, financial data, obviously. And it said these are sensitive categories of data. I bet they have intelligence, or at least insight and knowledge, backing up why these categories of data are of particular interest to our adversaries. And the order and the rulemaking, which has just begun—John said there are a lot of questions. There are specifically 114 questions in the Advance Notice of Proposed Rulemaking, which, by the way, is only an advance notice of proposed rulemaking; it's not even a proposed rule. 114 questions that the Justice Department asked for input on. But it's a very detailed notice. A lot of thought has clearly gone into it. This has been a multi-year effort as the Biden administration took the baton from the Trump administration on this one.
And they would set up a quite complicated regulatory structure. There would be a few classes of prohibited transactions, and then a much broader class of regulated transactions. Prohibited transactions would be data broker transfers, which itself I think is potentially very broad, and then any transaction that provides a country of concern with access to bulk human genomic data or biospecimens. So one relatively broad class of prohibited transactions, and one pretty narrow class. And then other bulk transactions in U.S. sensitive personal data would be restricted. That's particularly where, as John said, you have to look at your vendor agreements, your contractor agreements, your employment agreements, your investment agreements. At some level, do what any data custodian has always needed to do, but with a national security spin on it: know what data you have, know who has access to it, look at all of your data flows, look at all your third-party vendors who might have access to this data. It would really create sort of a brand-new regulatory structure around data, in some ways adopting, for these transactions that might involve a country of concern getting access to the data, the kind of privacy law that Congress has failed to enact generally.
Stephanie Pell
So there are a lot of different regulatory efforts that we could examine looking across various sectors that fit under the critical infrastructure umbrella. And sometimes it's interesting to compare and contrast them because it illuminates what may be progressing in a positive direction and what may not. Jim, can you contrast the new rule on maritime security with the failure of the EPA's effort to require states to consider cybersecurity in their surveys of drinking water systems?
Jim Dempsey
So yeah, remember, as I said, the March 2023 National Cybersecurity Strategy from the Biden administration had one major element of departure from all prior administrations, prompted, I think, in part by the Colonial Pipeline incident, but also by a growing recognition that the hands-off, public-private partnership approach of not regulating critical infrastructure wasn't working. There were a lot of infrastructures that were unregulated on cybersecurity and therefore vulnerable. The administration in March 2023 said, we're going to address that. And we are going to use, they said, existing authorities wherever possible. I've argued, and the administration, I think, has agreed, that when existing statutes talk about regulatory structures for the safety and reliability of critical infrastructures, safety and reliability should include cybersecurity. As the administration said, where those authorities exist, we will use them. And if for any particular critical infrastructure there's nothing there, we're going to ask Congress for it. The second half, of course, is hard, given the current situation on Capitol Hill and how hard it is to get things done there.
And the administration has been following through on that. They issued, of course, directives under the statute on transportation security, which includes pipelines and railroads. They issued binding rules on pipelines, on railroads, and on the aviation sector. And just recently, they issued one on maritime security.
There are laws on the books that talk about the security of ports and port facilities. Using that existing authority, the administration took a variety of measures, including one aimed at cranes: those crazy-looking, huge cranes at the ports that look like dinosaurs. Eighty percent of them are made in China. They are all internet-connected, or at least they all have remote access capabilities. That's obviously a huge vulnerability: if an update sent remotely to those cranes could shut them down, it would be disastrous to the economy. So the Coast Guard, which has authority over port facilities, issued a directive to ports to immediately begin risk mitigation analysis of the connected cranes that are at American ports. No challenge to that yet. Hopefully, in my opinion, it will stay in place. I think it's logical and necessary.
The administration tried something similar with drinking water systems. The Safe Drinking Water Act has a somewhat complicated system of shared regulation between the federal government and the states, but it basically requires the states to conduct sanitary surveys of the operations and technology of drinking water systems. The EPA last year issued a memo to the states saying that operations and technology includes the operational technology that all of your control systems are now dependent upon. Like everything else, the valves and devices that control the flow of water, its purification, its filtration, etc., are all dependent upon computer functions, industrial control system functions, and those are vulnerable to cyber-attack. So the EPA said, when you are checking the security and reliability and safety of your drinking water systems, also look at the cyber aspects.
A number of state attorneys general and the drinking water trade association sued, basically saying, operations and technology does not include operational technology; you can't do this by a memo, you have to do it by a rule. (I think, obviously, if the EPA had tried to do it by rule, they would have opposed that too.) The Eighth Circuit Court of Appeals agreed with the plaintiffs and blocked the interpretive memo, and the EPA withdrew it a couple months after that. It was of course reported that the Iranians had infiltrated and were inside the operational technology of many drinking water systems in the United States. But right now, there is no rule that requires the operators of those systems, many of which are municipal and a fair number of which are privately owned, but all regulated under the Safe Drinking Water Act, to address the cyber vulnerability of their systems.
CISA, the Cybersecurity and Infrastructure Security Agency, issued basically a letter saying, please, please, please change the default passwords on those devices that the Iranians have compromised. But all they could do was beg. They had no power to do anything about it.
Stephanie Pell
That is all really problematic, but we're going to have to change sectors just for the moment. Jim, are there new efforts focused on cybersecurity for hospitals? And if so, can you talk about that?
Jim Dempsey
Yeah. So just very briefly: again, the administration, as I said, is going sector by sector, looking at its existing authorities—railroads, airports, air traffic, etc. They have said that they intend to issue sometime soon a rule for hospitals and other healthcare facilities that accept Medicare and Medicaid payments. And on this one, I think the government's going to be on very solid footing, because basically they're saying to the hospitals, if you want federal money, you've got to do this. If you don't want to do it, okay, fine, but you can't get federal reimbursement for Medicare and Medicaid patients, which obviously account for a very large percentage of the income of hospitals. There are these so-called conditions of participation. There are about two dozen of them: one on dietary needs, one on building code standards, etc. Things that CMS, the Centers for Medicare and Medicaid Services, has said: if you're going to take our money, if you're going to take federal tax dollars, you must meet these minimum standards. And I think we will see sometime in the coming months such a standard for hospitals and other healthcare facilities that take Medicare and Medicaid payments.
Stephanie Pell
So John, CISA recently issued a Notice of Proposed Rulemaking to implement the Cyber Incident Reporting for Critical Infrastructure Act. Can you tell us what that's about?
John Carlin
So, rather than setting a standard of care or a cybersecurity baseline, like Jim was talking about when it comes to our water supply or hospitals, this regulation is really designed to address what was a great frustration to me in various posts that I had in government, which is that right now you do not need to report when you have an incident. Given that the vast majority, over 90-some percent, of the critical infrastructure that's digital resides in private hands, that means there's no reliable figure for the government to track, in particular, ransomware/extortion attacks. So we don't know whether the number's increasing, decreasing, or staying the same, or what law enforcement or other efforts are successful or not. It was a question I often got. So we did our best to estimate as a federal government, and you also have private sector vendors, some of the contractors in security, who try to use different methods to estimate.
This new rule would make it mandatory to report when you've had an incident, and there's a question as to how serious the incident needs to be for it to qualify. It would also make it mandatory, when it's ransomware, to report when you've made a payment. Now, who it applies to, what constitutes critical infrastructure, how serious the incident needs to be, and what is reported: none of that was fully defined in the statute that set up the authority for the Department of Homeland Security to issue this regulation. So what just got proposed is their approach to tackling those questions, and now they're in a period where they're getting feedback on their proposed rule.
This could really be an important and, I think, overall positive change. We want to make sure, ideally, though, that it's done in a way where the reporting isn't so onerous that it makes it more difficult for the victims of a cyber-attack, in the middle of responding to it, to get that information over; so that's point one. Point two, that the information provided to the government isn't then used against the victim, essentially re-victimizing them through a regulatory or other type of action that seeks to penalize the victim. This is really meant to provide the intelligence so you can track the trend. And then point three, and this one wasn't as well articulated, I think, in the statute from my view, is that we want the government to be able to act on the information, not to go after the victim, but to be able to go after the bad guys. So if it's information that is helpful to determining who did it, or to providing general warnings to the sector, you want the government to be able to do that. That means being able to use it for search warrants or FISA collection or in criminal cases that can hold some of these bad actors to account.
Stephanie Pell
So switching from CISA's activities to the Federal Trade Commission, Jim, the FTC has also been engaged in enforcement efforts that could be characterized, at least in part, as data security. Can you talk about some of those efforts?
Jim Dempsey
Yeah, we're really covering the waterfront here. But with the FTC: for a long time, in fact going back, I think, to 2002 with Eli Lilly, they have made it clear that, in their view, under Section 5 of the Federal Trade Commission Act, which makes it illegal to engage in unfair or deceptive trade practices, it is an unfair trade practice to collect personal data and not protect it. And they've brought dozens of cases against companies that have suffered data breaches and lost personal data.
They've taken what I would say is an increasingly expansive, you could even say aggressive, view of that. Two recent enforcement actions that were settled bring that home. First, they brought an enforcement action against a company called Blackbaud, which had provided data management services to nonprofits. The FTC alleged that Blackbaud suffered a data breach, that it had inadequate security protections, and that that was a violation of the Federal Trade Commission Act. Blackbaud, like almost every other company that the FTC has proceeded against, settled. And by the way, the orders that the FTC has been imposing on respondents have become more and more detailed in terms of the information security program that a settling respondent is required to undertake.
But one thing that stood out to me in the Blackbaud case was that the FTC required Blackbaud to delete any data it was holding that was no longer necessary for business operations. Now, data minimization, that is, the requirement that you should collect the minimum amount of data necessary and hold it only for so long as is necessary to fulfill the business purpose for which it was collected, has always been part of the fair information practices, the modern concept of privacy. But data minimization had for years been referred to by many as the forgotten FIP, the forgotten fair information practice. The FTC has now brought that front and center, and it has, in orders, directed respondents to no longer collect unnecessary information, and it's basically going to look at your collection. Because, after all, once you collect it, you're responsible for it. If it's personally identifiable information, if you collect it, you have to protect it. And the FTC is saying, if you're not using it, delete it. And here, with Blackbaud, it enforced that retrospectively on the company, sending a message, I think, to everybody that it's going to keep data minimization on its watch list of issues.
Another interesting case recently involved a small company that provided communication services to prisons and other facilities. It entered into contracts with government agencies and would basically run the phone system inside the prison for inmates to call their family members, and the company also provided a payment funds transfer mechanism that allowed inmates to get access to their small amounts of money. This is a company called Global Tel*Link, and they recently settled a complaint the FTC drafted against them. Global Tel*Link had collected a huge amount of data, including financial data and Social Security numbers and all kinds of other information, on both the inmates and the family members on the outside whom they were communicating with and who were the other end of some of these financial transactions. Global Tel*Link, as alleged by the Commission, put a lot of this data in a cloud-based environment and left it there unencrypted and unsecured. Bad guys found it and accessed data on over half a million people, including the sensitive financial information and Social Security numbers and so on.
The FTC alleged that Global Tel*Link violated the FTC Act by not adequately protecting the data. It also alleged that Global Tel*Link violated the FTC Act by not giving full and timely notice of the breach. And this is, as far as I know, the first case in which the Commission has expressly said that, in its view, Section 5 of the Federal Trade Commission Act includes a breach notification requirement. So we have 50 state breach notification laws, and the FTC is taking the position that there is basically a single nationwide breach notification rule implicit in Section 5. I think that's remarkable. The staff had said that was their position—I think in 2022, on the Commission blog—but as far as I know, this is the first Commission case, and anyhow, it stands out by saying that there is a notice requirement under the Act. Also, third, the Commission said that some of the public statements the company made to the media about the breach, where it minimized the breach and said that no sensitive data was taken, violated the FTC Act. And fourth, even after it knew about this breach, the company continued to bid on RFPs of other correctional authorities to get new business, and in those bids, they said that they had never suffered a cybersecurity attack. The Commission held that those misstatements, business to business, were a violation of the Act.
So this is a remarkably broad set of assertions and implied requirements. Of course, all of this is the FTC saying what those two words, unfair and deceptive, mean. But this case is telling companies: in all of your post-breach activities, communications with affected individuals, communications with the public, and communications with other potential business partners, you need to be forthcoming about the depth and breadth of a cybersecurity incident that you experienced. I think that's remarkable.
Stephanie Pell
So I want to change gears a bit and move away from talking about these regulatory efforts and enforcement actions to more operational activity: takedown and disruption efforts. Over the past few years, we've seen an uptick in the Department of Justice and FBI's disruption or takedown efforts against botnets and their disruption efforts against ransomware actors, some of which included law enforcement obtaining encryption keys, which allowed for the decryption of victim data. We've also seen the FBI able to recover ransoms that had been paid and return those funds to victims. I want to talk about two very interesting disruption efforts, the LockBit takedown and Volt Typhoon, but before we do that, John, can you talk about DOJ's broader strategy with disruption efforts and how this strategy has developed over the past several years?
John Carlin
If you think about the different poles, the carrot and the stick, what we've been talking about for the bulk of the conversation today are actions that the government takes with respect to victims, in the name of improving the security of the victims so that they can protect themselves from criminals. At the Justice Department, coming into this administration, a key part of the strategy was: what can we do to go after the ones actually causing the problem? The very sophisticated criminal groups, the very sophisticated ecosystem that looks almost like a Fortune 100 company in the way that some of these groups are set up and profiting from their cybercrime; and then the nation states that both harbor these groups and also use them as proxies for state activity, and sometimes leverage what is otherwise a purely criminal group for state ends in return for letting them operate with impunity within their borders.
The idea was simple. If you think about the transition the government made post-September 11th, to treat the prosecution of a terrorist act after it has occurred as not a success but a failure—it may be necessary, but at that point families are grieving and have lost loved ones—what the prosecutors and the intelligence community should do is try to figure out what the intelligence says the threats are and disrupt the threat prior to the harm being caused. And to do that, the idea was to be as creative as you can in the different ways in which you can use existing legal tools to essentially bring pain to the adversary, to make it more difficult for these actors to attack victims here. So part of the strategy was not to leave criminal prosecution off the table. Criminal prosecutions can and do proceed. People travel, and I think you've seen a number of successes working with partners overseas at arresting and extraditing individuals when they travel. Or, because we've helped build up parallel regimes in countries that share our values, they end up getting prosecuted in the country in which they are found to reside.
But in addition to that, part of the strategy was, hey, look, what else can we do? What else could really help a victim here? It was part of my privilege to set up a ransomware task force early in this administration with that goal in mind. And you've hit on some of the key aspects. If the reason the crooks are attacking and doing the extortion is to get the money, why can't we use legal process to get the money back? Why don't we go after the wallets, after a victim has made a payment into the wallet? Or, as you gave another example, if the key to their scheme is that they're using malware to encrypt unsuspecting victims' systems, why not see if we can get the key ourselves or create a key? And then—and this was key to being innovative—the strategy is: don't declare to the world that you have it. Instead, work confidentially with victims to provide them the key, so that for a period of time—and this was in the Hive operation, most spectacularly, over a period of six months—the victims aren't paying but the bad guy doesn't know why the scheme isn't working. So they continue to invest their resources in essentially a broken scheme.
Another aspect was to go after the command and control servers that are being used for so-called botnets, these armies of zombie computers. Bad guys have essentially created a cyber weapon of mass destruction by taking our computers at home or at our business, compromising them, and then being able, at scale, to have an army of 100,000, 200,000 or more computers that they can use to launch things like denial of service attacks—so many requests for information simultaneously that a system can't function. And when we discover where those command and control servers are, we use that access to disable the botnet. So not only do you stop one criminal group from using it, you get rid of the vulnerability on those computers, so that the day you leave, another group can't go and grab that same command and control and compromise all those victim computers.
So that's all I think under the rubric of this strategy of, let's figure out the most creative use of legal tools working with partners that causes the most harm to the bad guys who are trying to hit us here so they no longer can operate essentially cost-free because they're attacking digitally.
Stephanie Pell
So you mentioned the Hive disruption effort. Can you talk more about the LockBit takedown and its significance, and also what's interesting to know about Volt Typhoon?
John Carlin
I think one of the key distinguishing factors with LockBit—and this was one of the most prevalent groups, maybe the most prevalent, at the time it was taken down—is that the government not only disrupted and took down the group; they then essentially did a law enforcement version of what the bad guys do. So they went to the site that's used to extort victims—the site where normally the criminal group would take what was stolen and post it to embarrass the victim—and here, the government seized that site and posted on it saying that they were going to expose the leader of the group. They didn't ultimately do that. I think that was, in some ways, a prank. But then they did also release—
Stephanie Pell
Someone had fun, it sounds like.
John Carlin
Yeah, it was a good day for an FBI agent. And what they did do is they took all the information they had about how the group operated and who they were victimizing, and they made it public. So they took that which lives in the shadows of the criminal group and exposed it. I think that has really damaged what you might think of as the bad guy brand, the criminal brand. It's really damaged the brand so that the crooks can't trust each other. I know it's a strange concept, but the truth is that the success of some of these criminal groups comes because they do rely on trust. And so, the more you can disrupt that trust, the more difficult it is for them to act at scale.
So, ah, yes, Volt Typhoon. To me, it is not actually new that there would be sophisticated Chinese actors inside critical infrastructure systems, some of the systems that Jim was describing, where there aren't adequate regulatory enforcement actions. They call it living off the land: you're essentially impersonating someone on the system, you're not using new tools to compromise the system, and you're lying in wait. And the reason you're lying in wait is because you want, in the event of tension between countries, to have the ability to disrupt your adversary's critical infrastructure—to disrupt civilian infrastructure for military ends. So I don't think it's new that we've seen foreign adversaries trying to do that.
What was new was the degree to which the government talked about it publicly, with Director Wray starting off in congressional testimony. I was out at the Munich Security Conference, which is a major conference with NATO allies and others, and you had U.S. official after U.S. official—Director Wray from the FBI, Secretary Mayorkas from the Department of Homeland Security, Anne Neuberger from the White House, and Lisa Monaco, the Deputy Attorney General at the Justice Department—all emphasizing what they had seen with Volt Typhoon. And I think that was deliberate: to try to send a message, not just to the U.S. public, about the urgency of the need to take action, whether that's regulatory action to secure systems or setting up rules and norms; and also, when it comes to whether or not you can trust Chinese infrastructure or tools for your own systems, to spread the message about what they were seeing and do it in a public way.
And that, in a sense, is its own way of imposing costs on those who are trying to use cyber means for criminal purposes or, in this case, for a type of attack on civilian infrastructure that I think the G7, anyway, and the G20 signed up to as a norm that you shouldn't do. You shouldn't be doing things like trying to deny access to water in the event of a conflict through cyber means. So we'll see if there's more to come on that front. And I hope the message gets through loud and clear that this is a serious threat, and we need to take action to prevent our systems from being compromised in a way that could cause danger to civilian health or safety.
Stephanie Pell
So looking toward the future, what would you say are the most important issues on the horizon? Jim, do you want to start?
Jim Dempsey
There's a lot in play right now: the sensitive data rulemaking, a separate inquiry by the Commerce Department on connected cars, the potential of this CMS rule for hospitals. The interesting question for me is, where are some of the other infrastructures that deserve attention? And what will the administration do to flesh out its strategy there?
A second issue on the horizon, although it's the long horizon, and one that I've been working on a lot, is the question of liability for the developers of software. The other major departure in the Biden cyber strategy was its call for imposing liability on software developers for their faulty products. How to make that happen? Right now, the developers disavow, in their terms of service or licensing terms, liability for any damages. Overcoming that would be a huge leap, but how do we get deeper to the core of the problem, this buggy, flawed software we're all so heavily dependent upon? That's a tough one. The administration is seeking to create a dialogue there. They know it's a multiyear effort.
And then, third, I would say action at the state level. An increasing number of states now have laws requiring reasonable data security by custodians of personal data. Which states will be active, other than California—because we know California will be active—remains to be seen, but with so many laws now on the books, a state AG could easily take up this issue.
Stephanie Pell
John, what are your thoughts?
John Carlin
We're in an area, for those who practice in it, where the technology continues to change very, very rapidly. Just look at where we've been in a relatively short period of time: almost everything we value as a society has moved from books and papers, from analog, to bits and bytes, to digital, and then was connected through a protocol that was never designed with security in mind. And since then, we've continued to build on that straw house of a foundation, which is just fundamentally not capable of the level of security needed for the functions we're putting on it. And we're putting more and more of what we depend on into that form. Now it's not just information, right? It's physical things. It's everything from the cars on our roads to the planes in our skies, to the drones in our skies that are digitally connected using that same protocol.
So we're in a race, in some ways, for our regulations, our laws, our practices, our way of thinking about risk to catch up with the innovation of the adversaries—the terrorists, the criminals, the nation states—who want to take advantage of that move in ways that cause harm. And right now, in some respects, it's only a good climate for lawyers. But I don't think, in my lifetime, I can recall as much activity taking place to try to address the threat, as Jim put it, by different regulatory authorities—whether it's the laboratory of the 50 states, the different approaches by regulators in government, or new legislation coming through inside the United States.
So I think, if you look at the years to come, we need to see: is that going to settle down? Is there going to be some universality to the terms when we're trying to define what it is to be secure, what it is to have an incident, what it is to have a serious incident, what it is to report? And can we break the curve of the current exponential growth in criminal actors? I think we can, and I'm so glad that you're covering this issue today. But unlike with other national security threats, for this one in particular, part of the solution is definitely going to be having lawyers figure out a regime that is easier for non-lawyers to live with.
Stephanie Pell
On that note, we'll have to leave it there for today. Thank you both so much for joining me.
John Carlin
Thank you.
Jim Dempsey
Thanks Stephanie.
Stephanie Pell
The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters. Please rate and review us wherever you get your podcasts.
Look out for our other podcasts, including Rational Security, Chatter, Allies, and The Aftermath, our latest Lawfare Presents podcast series on the government's response to January 6th. Check out our written work at lawfaremedia.org.
The podcast is edited by Jen Patja Howell, and your audio engineer this episode was Noam Osband of Goat Rodeo. Our music is performed by Sophia Yan. As always, thank you for listening.