Building a Better U.S. Approach to TikTok—and Beyond

Justin Sherman
Wednesday, December 23, 2020, 10:01 AM

Three principles for tackling the security risks of foreign-made software.

TikTok on a mobile phone (Solen Feyissa, https://flic.kr/p/2jjP6YF; CC BY-SA 2.0, https://creativecommons.org/licenses/by-sa/2.0/).

One of the defining technology decisions of the Trump administration was its August 2020 “ban” on TikTok—an executive order to which legal challenges are still playing out in the courts. The incoming Biden-Harris administration, however, has indicated its intention to pivot away from Trump’s approach on several key technology policies, from the expected appointment of a national cyber director to the reinvigoration of U.S. diplomacy to build tech coalitions abroad. President Biden will need to make policy decisions about software made by companies incorporated in foreign countries, and about the extent to which such software might pose national security risks. There may be a future “TikTok policy,” in other words, that isn’t at all about—or at least isn’t just about—TikTok.

In April 2020, Republican Rep. Jim Banks introduced legislation in the House of Representatives that sought to require developers of foreign software to provide warnings before consumers downloaded the products in question. It’s highly likely that similar proposals will be introduced in Congress in the next few years. On the executive branch side, the Biden administration has many decisions ahead on mobile app supply chain security, including whether to keep in place Trump’s executive order on TikTok. These questions are also linked to foreign policy: President Biden will need to decide how to handle India’s bans of Chinese software applications, as India will be a key bilateral tech relationship for the United States. And the U.S. government will also have to make choices in the near future about cloud-based artificial intelligence (AI) applications served from other countries—that is, where an organization’s AI tools are run on third-party cloud servers.

In this context, what might a better U.S. policy on the security risks of foreign-made software look like? The Trump administration’s TikTok executive order was more of a tactical move against a single tech firm than a fully developed policy. The new administration will now have the opportunity to set out a more fully realized, comprehensive vision for how to tackle this issue.

This analysis offers three important considerations for the U.S. executive branch, drawing on lessons from the Trump administration’s TikTok ban. First, any policy needs to explicitly define the problem and what it sets out to achieve; simply asserting “national security issues” is not enough. Second, any policy needs to clearly articulate the alleged risks at play, because foreign software could be entangled with many economic and security issues depending on the specific case. And third, any policy needs to clearly articulate how a threat actor’s cost-benefit calculus bears on the likelihood of each of those risks. This is far from a comprehensive list. But failure to address these three considerations in policy design and implementation will only undermine the policy’s ultimate effectiveness.

Defining the Problem

First, any policy on foreign software security needs to be explicitly clear about scope—that is, what problem the government is trying to solve. Failure to properly scope policies on this front risks confusing the public, worrying industry and obscuring the alleged risks the government is trying to communicate. This undermines the government’s objectives on all three fronts, which is why scoping foreign software policies clearly and explicitly—in executive orders, policy memos and communication with the public—is critical.

Trump’s approach to TikTok and WeChat provides a lesson in what not to do. Arguably, the TikTok executive order was not even a policy: It was more a tactical-level move against a single tech firm than a broader specification of the problem set and development of solutions. Trump had discussed banning TikTok in July 2020 as retaliation for the Chinese government’s handling of the coronavirus—so, putting aside that this undermined the alleged national security motives behind the executive order, the order issued on TikTok wasn’t completely out of the blue. That said, the order on WeChat that accompanied the so-called TikTok ban was surprising, and its signing only created public confusion. Until then, much of the congressional conversation on Chinese mobile apps had focused on TikTok, and the Trump administration had given no warning that WeChat would be the subject of its actions too. What’s more, even after the executive orders were signed in August, most of the Trump administration’s messaging focused just on TikTok, ignoring WeChat. The administration also wrote the WeChat executive order with troublingly, perhaps sloppily, broad language that appeared to scope the ban as covering all of Tencent Holdings—which owns WeChat and many other software applications—alarming the gaming and other software industries, though the administration subsequently stated the ban was aimed only at WeChat.

Additionally, the Trump administration’s decisions on U.S.-China tech often blurred together trade and national security issues. The Trump administration repeatedly suggested that TikTok’s business presence in mainland China inherently made the app a cybersecurity threat, without elaborating on why the executive orders focused solely on TikTok and WeChat rather than other software applications from China too. Perhaps the bans were intended as a “warning shot” at Beijing about potential collection of U.S. citizen data—but it’s worth asking whether that warning shot even worked, given the legal invalidations of the TikTok ban and the blowback even within the United States. Again, the overarching policy behind these tactical decisions was undeveloped. It was unclear if TikTok and WeChat were one-off decisions or the beginning of a series of similar actions.

Going forward, any executive branch policy on foreign software needs to explicitly specify the scope of the cybersecurity concerns at issue. In other words, the executive needs to clearly identify the problem the U.S. government is trying to solve. This will be especially important as the incoming Biden administration contends with cybersecurity risks emanating not just from China but also from Russia, Iran and many other countries. If the White House is concerned about targeted foreign espionage through software systems, for example, those concerns might very well apply to cybersecurity software developed by a firm incorporated in Russia—which would counsel a U.S. approach not limited to addressing popular consumer apps made by Chinese firms. If the United States is concerned about censorship conducted by foreign-owned platforms, then actions by governments like Iran’s would certainly come into the picture. If the problem is a foreign government potentially collecting massive amounts of U.S. citizen data through software, then part of the policy conversation needs to focus on data brokers, too—the large, unregulated companies in the United States that themselves buy up and sell reams of information on U.S. persons to anyone who’s buying.

Software is constantly moving and often communicating with computer systems across national borders. Any focus on a particular company or country should come with a clear explanation, even if the reasoning seems relatively intuitive, as to why that company or country poses a distinctly different or elevated risk compared to other sources of technology.

Clearly Delineate Between Different Alleged Security Risks

The Trump administration’s TikTok ban also failed to clearly articulate and distinguish between its alleged national security concerns. Depending on one’s perspective, concerns might be raised about TikTok collecting data on U.S. government employees, TikTok collecting data on U.S. persons not employed by the government, TikTok censoring information in China at Beijing’s behest, TikTok censoring information beyond China at Beijing’s behest, or disinformation on the TikTok platform. Interpreting the Trump administration’s exact concerns was difficult, because White House officials were not clear and explicit about which risks most concerned them. Instead, risks were blurred together, with allegations of Beijing-compelled censorship thrown around alongside claims that Beijing was using the platform to conduct espionage against U.S. persons.

If there was evidence that these practices were already occurring, the administration did not present it. If the administration’s argument was merely that such actions could occur, the administration still did not lay out its exact logic. There is a real risk that the Chinese government is ordering, coercing or otherwise compelling technology companies incorporated within its borders to engage in malicious cyber behavior on its behalf worldwide, whether for the purpose of censorship or cyber operations. Beijing quite visibly already exerts that kind of pressure on technology firms in China to repress the internet domestically. Yet to convince the public, industry, allies, partners, and even those within other parts of government and the national security apparatus that a particular piece or source of foreign software is a national security risk, the executive branch cannot overlook the importance of clear messaging. That starts with clearly articulating, and not conflating, the different risks at play.

The spectrum of potential national security risks posed by foreign software is large and depends on what the software does. A mobile app platform with videos and comments, for instance, might collect intimate data on U.S. users while also making decisions about content moderation—so in that case, it’s possible the U.S. government could have concerns about mass data collection, censorship and information manipulation all at once. Or, to take another example, cybersecurity software that runs on enterprise systems and scans internal company databases and files might pose an array of risks related to corporate espionage and nation-state espionage—but this could have nothing to do with concerns about disinformation and content manipulation.

“Software” is a general term, and the types and degrees of cybersecurity risk posed by different pieces of software can vary greatly. Just as smartphones are not the same as computing hardware in self-driving cars, a weather app is not the same as a virtualization platform used in an industrial plant. Software could be integrated with an array of hardware components but not directly connect back to the makers of those components: Think of how Apple, not the manufacturers of subcomponents for Apple devices, issues updates for its products. Software could also directly connect back to its maker in potentially untrusted ways, as with Huawei issuing software updates to 5G equipment. It could constantly collect information, as with the TikTok app itself—and it could learn from the information it collects, as TikTok does with machine learning and as many smartphone voice-control systems do with data on user speech. This varied risk landscape means policymakers must be clear, explicit and specific about the different alleged security risks posed by foreign software.
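To make this concrete, the distinctions can be expressed as a simple taxonomy. The sketch below is purely illustrative: a hypothetical risk-profile structure written for this piece, not any existing government or industry framework. It shows how two products that both carry the “foreign software” label can decompose into very different sets of policy concerns:

    from dataclasses import dataclass

    @dataclass
    class SoftwareRiskProfile:
        """Hypothetical, illustrative decomposition of foreign-software risk."""
        collects_user_data: bool        # e.g., a video app logging viewing habits
        phones_home_to_vendor: bool     # e.g., vendor-controlled update channels
        moderates_content: bool         # relevant to censorship/disinformation
        runs_with_high_privilege: bool  # e.g., enterprise security software

    def policy_concerns(profile: SoftwareRiskProfile) -> list[str]:
        """Map one profile onto the distinct policy concerns it implicates."""
        concerns = []
        if profile.collects_user_data:
            concerns.append("bulk or targeted data collection")
        if profile.phones_home_to_vendor:
            concerns.append("untrusted update or exfiltration channel")
        if profile.moderates_content:
            concerns.append("censorship and information manipulation")
        if profile.runs_with_high_privilege:
            concerns.append("corporate and nation-state espionage")
        return concerns

    # A consumer video platform and an enterprise security scanner share a
    # label ("foreign software") but implicate different combinations of risk.
    video_app = SoftwareRiskProfile(True, True, True, False)
    enterprise_scanner = SoftwareRiskProfile(True, True, False, True)
    print(policy_concerns(video_app))
    print(policy_concerns(enterprise_scanner))

The categories and mappings here are assumptions made for illustration; the point is only that a policy naming “foreign software” as the problem has not yet said which of these rows it means.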

Give Cost-Benefit Context on Security Risks

Finally, the U.S. government should make clear to the public the costs and benefits that a foreign actor might weigh in using a given piece of software to spy. Just because a foreign government might hypothetically collect data via something like a mobile app—whether by directly tapping into specific devices or by turning to the app’s corporate owner for data hand-overs—doesn’t mean that the app is necessarily an optimal vector for espionage. It might not yield useful data beyond what the government already has, or it might be too costly relative to other active data collection vectors. Part of the U.S. government’s public messaging on cyber risk management should therefore address why that particular vector of data collection would be more attractive than some other vector, or what supplementary data it would provide. In other words, what is the supposed value-add for the foreign government? This could also include consideration of controls offered by the software’s country of origin—for example, transparency rules, mandatory reporting for publicly traded companies, or laws that require cooperation with law enforcement or intelligence services—much like the list of trust criteria under development as part of Lawfare’s Trusted Hardware and Software Working Group.

In the case of the Trump administration’s TikTok executive order, for example, there was much discussion by Trump officials about how Beijing could potentially use the app for espionage. But administration officials spoke little about why the Chinese intelligence services would elect to use that vector over others, or what about TikTok made its data a hypothetical value-add from an intelligence perspective.

If the risk concern is about targeted espionage against specific high-value targets, then the cost-benefit conversation needs to be about what data that foreign software provides, and how easily it provides that benefit, relative to other methods of intelligence collection. If the risk concern is about bulk data collection on all the software’s users, then the cost-benefit conversation needs to be about why that data is different from information that is openly available, was stolen via previous data breaches, or is purchasable from a U.S. data broker. That should include discussing what value that data adds to what has already been collected: Is the risk that the foreign government will develop microtargeted profiles on individuals, supplement existing data, or enable better data analytics on preexisting information?
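The shape of that reasoning can be sketched with a toy calculation. The numbers and categories below are invented for illustration; this is not an intelligence assessment, just a minimal model in which a collection vector’s appeal is its marginal intelligence value net of acquisition cost and exposure risk:

    # Toy model of an adversary's cost-benefit calculus across collection
    # vectors. All values are hypothetical, on a 0-to-1 scale.
    vectors = {
        # vector: (unique value beyond data already held, cost, exposure risk)
        "consumer app data": (0.2, 0.1, 0.3),
        "purchased data-broker data": (0.3, 0.2, 0.1),
        "previously breached data": (0.1, 0.0, 0.0),
        "targeted network intrusion": (0.9, 0.8, 0.7),
    }

    def net_value(unique_value: float, cost: float, exposure: float) -> float:
        """Marginal intelligence value minus acquisition cost and exposure risk."""
        return unique_value - cost - exposure

    for name, (value, cost, exposure) in vectors.items():
        print(f"{name}: net value {net_value(value, cost, exposure):+.2f}")

On these made-up inputs, a vector whose data heavily overlaps with what the adversary already holds can come out net-negative, which is exactly the question the public messaging described above should answer: what does this vector add, and at what cost?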

The point again is not that TikTok’s data couldn’t add value, even if it overlapped with what Chinese intelligence services have already collected. Rather, the Trump administration did not clearly articulate Beijing’s supposed cost-benefit calculus.

Whatever the specific security concern, managing the risks of foreign espionage and data collection through software applications is in part a matter of assessing the potential payoff for the adversary: not just the severity of the potential event, or the actor’s capabilities, but why that actor might pursue this option at all. Policy messaging about these questions speaks to the government’s broader risk calculus and whether the U.S. government is targeting the most urgent areas of concern. For instance, if the only concern about a piece of foreign software is that it collects data on U.S. persons, but it then turns out that data was already publicly available online or heavily overlaps with a foreign intelligence service’s previous data theft, would limiting that foreign software’s spread really mitigate the problems at hand? The answer might be yes, but these points need to be articulated to the public.

Conclusion

A key part of designing federal policies on software supply chain security is recognizing the globally interconnected and interdependent nature of software development today. Developers working in one country to make software for a firm incorporated in a second may sell their products in a third country and collect data sent to servers in a fourth. Software applications run in one geographic area, whether a Zoom call or Gmail, may talk to many servers located throughout the world—and the relatively open flow of data across borders has enabled the growth of many different industries, from mobile app gaming to a growing number of open-source machine-learning tools online.

If the U.S. government wants to draw attention to security risks of particular pieces or kinds of foreign software, or software coming from particular foreign sources, then it needs to be specific about why that software is being targeted. Those considerations go beyond the factors identified here. The WeChat executive order, for instance, wasn’t just unclear in specifying the national security concerns ostensibly motivating the Trump administration; it also failed to discuss what a ban on WeChat in the United States would mean for the app’s many users. Hopefully, greater attention paid to these crucial details will help better inform software security policies in the future.


Justin Sherman is a contributing editor at Lawfare. He is also the founder and CEO of Global Cyber Strategies, a Washington, DC-based research and advisory firm; a senior fellow at Duke University’s Sanford School of Public Policy, where he runs its research project on data brokerage; and a nonresident fellow at the Atlantic Council.
