Global Technology Products, U.S. Security Policy, and Spectrums of Risk
Some policymakers are declaring non-U.S. tech companies, products, and services a risk to U.S. security.
While TikTok is at the center of current media coverage of U.S.-China tech security policy, including its potential deal with the Committee on Foreign Investment in the United States (CFIUS), it hardly stands alone in the realm of tech-driven security questions for the U.S.—and those questions go beyond China. Different parts of the U.S. government, from executive branch agencies to congressional offices, are advancing or proposing different policies and actions to identify and address risks associated with non-U.S. technology companies, products, and services, from previous bans on Huawei equipment usage to current Commerce Department rulemaking to review information and communications technology transactions for security risks.
While the companies, products, and services of concern do not emanate exclusively from China, media coverage has heavily emphasized risks associated with technology originating from China. The mixing of legitimate security concerns with general mistrust of Chinese products, and anti-China sentiments broadly, has generated concerning proposals and obscured attention to actual risks. Policies, policy proposals, and proposed actions meant to mitigate these risks are evolving quickly—which makes it essential to step back and consider the United States’s longer-term vision for identifying and mitigating possible security risks associated with non-U.S. tech companies, products, and services.
There are at least two questions policymakers should ask when considering how to best design nuanced, sustainable, and effective U.S. policy on non-U.S. technology companies, products, and services. Which policy approaches lend themselves to a spectrum of risk identification and mitigation? And what kind of process lies behind these policy approaches?
The U.S. government is going to have to tackle digital security risk questions over and over again in the coming decades. While there is arguably some balance to be struck between responding promptly to urgent risks and deliberating overarching strategy, barreling ahead without considering some of these questions will put the U.S. in a worse position to identify digital security risks in a nuanced fashion, tailor policy responses accordingly, build and resource the appropriate processes, and communicate decisions clearly and transparently to allies, partners, the private sector, and the public.
Which Policy Approaches Lend Themselves to a Spectrum of Risk Identification and Mitigation?
Digital technologies can pose a variety of risks to U.S. security interests, depending on the technology product or service in question (from telecom equipment to mobile apps), its presence in or connection to the United States (from physical infrastructure to online ad entanglements), and the country of incorporation of its owner, among many other factors. Point being: Not every tech company, product, and service poses the exact same set of risks. The risk scenarios themselves might vary, such as the risk of a backdoor installation versus the risk of internet traffic hijacking. And the likelihood and severity of those risk scenarios can vary too.
Because of this inherent variation, policy approaches that categorically deem every technology company, product, and service from a country a “risk” often erase these distinctions. Policymakers need to consider whether their proposed policy approaches allow for the identification of a variety of possible security risks—and the development of a range of possible risk mitigation responses.
Current policy responses to concerns about non-U.S. technology often lose sight of this consideration. In response to Huawei’s 5G market share, TikTok’s presence in the U.S., and the activities of other non-U.S. technology companies, products, and services, some U.S. policymakers have proposed total bans on the company or product in question. During the Trump administration, many senior officials consistently advocated for a complete ban on 5G technology equipment from Huawei. After months of investigating Chinese state-owned telecom China Telecom, the Federal Communications Commission in 2021 reached a similar conclusion: that it needed to completely bar China Telecom from providing telecommunications services in the United States. Of course, many voices in the D.C. debate about TikTok propose a similar action, arguing that a complete prohibition on TikTok use in the U.S. is the only way to appropriately mitigate the security risks.
It is possible that a complete ban on a specific product or service, following a thorough and nuanced security review, is the best action to address a particular security risk. For example, in Huawei’s case, many countries have imposed prohibitions on using Huawei 5G equipment, including the U.S., the U.K., Sweden, and Australia. Some defend these bans on the grounds that the Chinese government will inevitably leverage Huawei’s 5G infrastructural presence abroad to spy—and that measures to limit Huawei’s market share or constrain its infrastructural presence (for example, blocking use around military bases) will not eliminate or sufficiently mitigate that risk. The only acceptable response, this argument goes, is therefore to ban the use of Huawei 5G equipment in the country.
However, arguing that a complete ban is the best response in a particular case is different from developing a policy framework that leads to bans in every case where a risk is identified. The former is one action targeted at one case (whether one agrees with it or not). The latter is a policy framework—meaning that every time that risk is identified, it would theoretically trigger a ban. Policymakers have frequently leapt from the former to the latter when discussing or proposing policies around the risks posed by non-U.S. tech companies, products, and services, especially from China. The ANTI-SOCIAL CCP Act, introduced by two Republicans and one Democrat last December, does this in its current form. While it calls on the president to invoke the International Emergency Economic Powers Act (IEEPA) to ban transactions with TikTok and ByteDance, essentially redoing President Trump’s executive order, it also establishes a list of criteria that could trigger the same action in the future against other non-U.S. social media companies. Among other criteria, a company is a “deemed company” under the bill if it simply is “domiciled in, headquartered in, has its principal place of business in, or is organized under the laws of a country of concern.” To define this term, the act points to the definition of “foreign adversary” in the Secure and Trusted Communications Networks Act of 2019 and explicitly lists China, Russia, Iran, North Korea, Cuba, and Venezuela. In other words, the bill effectively asserts that if a social media company of a certain size operates in a “country of concern,” it poses an undue national security risk that should trigger the president invoking IEEPA in accordance with Section 2(a), resulting in a complete ban on transactions. Hence, a bill that is nominally about TikTok becomes a bill with wide-ranging implications that could reach well beyond a TikTok ban.
This policy approach does not allow for identifying the spectrum of risks and crafting the spectrum of possible responses discussed above. The bill does mention different scenarios—the risks that a foreign government could influence a company to share data on U.S. citizens or manipulate its content moderation practices. But if just one of these risks is present, the same outcome results. The distinction between risk scenarios on paper does not translate into distinct actions or even a spectrum of possible distinct actions in practice.
Policymakers must push for this granularity because there simply are too many technology companies, products, and services in the world, and too many for which it’s possible to imagine a hypothetical scenario in which they pose some kind of risk to U.S. security interests. At some level, every single mobile app in the world raises cybersecurity policy questions, for example. This goes for apps in the U.S. and apps outside it, many of which exploit the United States’s incredibly weak privacy regime to collect, aggregate, and sell or share data on Americans. The same goes for every single company on the planet that builds or manages telecommunications infrastructure, which could hypothetically pose some risk. There consequently need to be frameworks in place to identify these different types and levels of risk. This is not what-about-ism, pretending that all risks and technological activities are the same, but just the opposite—recognizing that the risks are different and that policymakers and decision-makers should understand them. It is also essential because different risks may require different policy responses—and overlooking, misidentifying, or conflating risks could lead to ineffective or even harmful actions.
Banning an app for concerns over content manipulation, for instance, is different from banning an app to address data security concerns; even if the action taken is the same, the action’s ability to address the two risks may be different. And identifying different types and levels of risk is essential because some risks are more urgent to address than others. Risks vary in likelihood and severity, and organizations like CFIUS, “Team Telecom” (officially, the Committee for the Assessment of Foreign Participation in the United States Telecommunications Services Sector), and the Commerce Department’s Bureau of Industry and Security do not have unlimited time and resources. Policymakers promoting a blanket approach that reduces to “every technology company in China is a national security risk” are not helping these organizations identify the most urgent places for action—and in fact could exacerbate national security creep by encouraging security reviews and actions that target wide swathes of overseas investments and non-U.S. technology companies, products, and services.
Further, policymakers should consider employing a risk spectrum because policies on non-U.S. technology companies, products, and services come with trade-offs. Bans on apps used by millions of Americans, for example, may implicate speech and First Amendment concerns in ways that bans on niche products used in critical energy infrastructure may not. If policymakers have many action options available—such as requiring independent third-party certifications of a technology product, establishing a routine government technical review of a product, banning the federal government from contracting with the product vendor, introducing financial incentives for state and local governments to avoid using the product, or imposing a complete market ban after a long, thorough review process—they have room to factor in speech, market competition, and other questions as they select a response.
This is not to say, necessarily, that analysts conducting national security reviews should begin factoring in political and broader policy questions about technology use; often, their task is merely to assess risks and then provide decision-makers with that assessment and, potentially, recommendations. But for those policymakers who do make decisions, having a spectrum of available responses provides important and necessary flexibility to consider other issues that matter in U.S. technology policymaking, like speech, competition, privacy, information access, and connectivity.
What Kind of Process Lies Behind These Policy Approaches?
Many policymakers are so fixated on the outcome of their particular proposals around non-U.S. tech companies, platforms, and services—Huawei should be banned; TikTok cannot stay in the U.S. market; every tech company from China is operating under the state’s thumb—that they are skipping past the question of process. That is, what kind of process underlies these proposals to identify security risks and pair those particular, identified risks with U.S. government responses? And if the proposed policy is implemented, what is the process going forward to identify and, as necessary, mitigate risks? An examination of several prominent efforts to restrict non-U.S.-operated tech companies, products, and services in the U.S. makes clear that many of them do not establish a mechanism to comprehensively vet risks on a continuous basis. Focusing on specific outcomes without regard to process will not prepare the U.S. for a better long-term approach to such tech companies, platforms, and services, especially when so many tech companies, products, and services exist—and when so many possible risks could be identified in whack-a-mole fashion.
Trump’s executive orders to ban TikTok and WeChat in the United States, signed in August 2020 and revoked by President Biden in June 2021, did not establish any kind of process for identifying security risks in non-U.S. tech companies, platforms, and services; determining the likelihood and severity of those risks; and pairing specific risks with specific, possible mitigation actions. Instead, they were an approach akin to whack-a-mole: They collectively named two companies, declared them national security risks, and invoked IEEPA to ban them. Putting aside various issues with the orders—including the fact that Trump said in July 2020 that banning TikTok would be a way to get back at Beijing for the coronavirus outbreak—they did not seek to establish a special review process vis-à-vis TikTok. They did not call on existing interagency committees and agencies to conduct a targeted security review. The orders took two immediate actions (invoking IEEPA to ban TikTok and WeChat), and that was it. Even if the supposed security motivations of a few White House officials were to be believed, the executive orders did not answer the question of how to identify and mitigate similar security risks going forward. This captures the problem of focusing on specific outcomes and completely disregarding the process.
In several ways, the ANTI-SOCIAL CCP Act proposing a complete ban on TikTok improves on the Trump administration’s executive order. The bill spells out different risks more clearly, such as the risks of foreign government-compelled data access and content manipulation. It also tries to look beyond whack-a-mole to establish a mini security risk framework for assessing non-U.S. social media platforms. This makes it more of a policy approach—in contrast to Trump’s TikTok executive order, which was not a policy per se because it took action against a single company without a decision framework in place for why it was doing so and what kinds of scenarios would trigger a similar action in the future.
On the flip side, however, the bill does not go so far as to lay out a highly detailed set of criteria for evaluating digital security risk, as with the Commerce Department’s proposed criteria for information and communications technology and services (ICTS) security reviews called for in Executive Order 13873 (which Trump signed in May 2019). The Commerce Department’s notice of proposed rulemaking in November 2021 laid out a much more comprehensive and granular set of considerations for ICTS transactions, including:
- ownership, control, or management by persons that support a foreign adversary's military, intelligence, or proliferation activities;
- use of the connected software application to conduct surveillance that enables espionage, including through a foreign adversary's access to sensitive or confidential government or business information, or sensitive personal data;
- ownership, control, or management of connected software applications by persons subject to coercion or cooption by a foreign adversary;
- ownership, control, or management of connected software applications by persons involved in malicious cyber activities;
- a lack of thorough and reliable third-party auditing of connected software applications;
- the scope and sensitivity of the data collected;
- the number and sensitivity of the users of the connected software application; and
- the extent to which identified risks have been or can be addressed by independently verifiable measures.
The bill is not as detailed or nuanced in describing the different sets of risks that could be at play.
These differences speak to broader questions about process. While plenty of congressional tech action is important and much needed, congressional hearings, bill writing, and oversight actions on technology opportunities and risks are also headline driven; they are more responsive to particular issues raised in the news, by researchers or whistleblowers or advocates, than they are proactive about continuously seeking out new issues on which to legislate or otherwise act. Congressional scrutiny of companies (broadly) also tends to be very political, even if real issues are identified. This calls into question its suitability to continuously and comprehensively identify and mitigate security risks. Certainly, in some cases, a headline-driven congressional approach ends up calibrating roughly to the likelihood and severity of the risks. In the case of Huawei 5G equipment, for example, members of Congress drew attention to potential security risks. Simultaneously, there were in fact substantial espionage and cyber operations risks associated with using Huawei equipment in 5G infrastructure. But this alignment will not necessarily happen every time.
Compared to Congress, some executive branch organizations have a more established process for identifying security risks on a continuous basis. With CFIUS, for example, parties that believe they are engaging in a “covered transaction” must decide whether to file a joint voluntary “notice” with the committee. CFIUS is chaired by the secretary of the treasury and has members from the Departments of Justice, Homeland Security, Commerce, Defense, State, Energy, and more engaged in its review and deliberation process. The Commerce Department’s rulemaking on securing the ICTS supply chain, per Executive Order 13873, clarifies that the department can initiate a review on its own, based on a referral from another agency, or based on a referral from a private party, such as a company. Some defense and intelligence agencies have similar processes for identifying risks with non-U.S. tech companies, products, and services, as when the Defense Department banned the use of software from Russian cybersecurity firm Kaspersky, a ban that took effect in September 2019.
Of course, many observers would say these processes are imperfect, inefficient, or outdated. In January 2020, for instance, the Business Software Alliance trade group commented on the Commerce Department’s ICTS supply chain rulemaking with concerns that:
Under this proposal, the Secretary of Commerce (Secretary) would have unbounded discretion to review commercial ICT transactions, applying highly subjective criteria in an ad hoc and opaque process that lacks meaningful safeguards for companies. It would be impossible for companies to create responsive compliance programs or to conduct business with a predictable and reliable understanding of the risks.
Recently, a former CFIUS lawyer wrote for Semafor that the growing number and scope of CFIUS’s legal reviews were causing the U.S. to lose foreign capital and innovative businesses. “There is a small but useful role for CFIUS,” he said, but “the review process takes a minimum of a month, but more often many months” and leaves room for those involved to speculate too wildly about potential security risks. Many parts of the defense apparatus, for their part, are far behind the technological cutting edge in understanding the risks to the U.S. government associated with modern data collection, storage, analysis, sharing, and targeting.
Policymakers cannot bypass these process questions when designing new actions or policies concerning non-U.S. tech companies, products, and services and alleged security risks. If the U.S. is going to adequately identify security risks and mitigate them—tackling real issues, keeping in mind the spectrum of risks and responses, and not overreaching or shooting the U.S. in the foot in the process—policymakers need to have processes behind risk reviews. These processes should include a clear set of risk criteria to evaluate, such as an app’s direct ties to another country’s foreign intelligence service, a private company’s ownership by a state-owned enterprise, or a software product’s data collection, storage, and sharing activities. The processes should also entail building a coordinated, transparent, and overseen set of steps for identifying a company, product, or service of possible concern, conducting a risk assessment, and then deciding what action to pursue.
These questions of process remain an open but often bypassed issue.
Looking Forward
It is almost always possible that there could be a security risk associated with a non-U.S. tech company, product, or service, but possible does not equal probable. If risk is a matter of likelihood and severity, policymakers need to develop or promote more granular ways of identifying spectrums of risk and offering spectrums of responses. Labeling every company from a country a potential cybersecurity risk, for instance, does not help those conducting security reviews to identify the most urgent areas for action. Proposing a total ban in every case in which there is a “risk” loses that spectrum of risk type, risk likelihood, and risk severity and does not give decision-makers the necessary flexibility to consider speech, privacy, connectivity, market concentration, competition, and other factors important vis-à-vis technology.
Policymakers must also consider whether their proposals are compatible. Right now, for example, several competing actions around TikTok are incompatible with one another. CFIUS is still negotiating with TikTok, reportedly nearing a final agreement to mitigate the committee’s security concerns while allowing TikTok to stay in the U.S. market. Simultaneously, some members of Congress are calling for a complete ban on TikTok in the U.S. by telling the president to invoke IEEPA to ban TikTok and ByteDance. This means there is disagreement not only between the executive branch and Congress on how to identify and address perceived risks but also within Congress about what needs to be done. The unanswered questions here include the mix of possible legal authorities at play (as when Trump attempted to ban TikTok), what members of Congress will do if a CFIUS-TikTok agreement is finalized, and what position, if any, the Biden White House will take on a proposed CFIUS agreement with TikTok, one that is bound to draw Republican criticism. All-or-nothing approaches to the risks make these questions of compatibility even more urgent.
Finally, policymakers cannot overlook process. There are certainly many challenges associated with executive branch security reviews. Transparency, for example, is one area where executive branch security review processes have room to evolve and improve; the Federal Communications Commission has been pushing for more of it recently in response to concerns about review opacity. At the same time, congressional actions against tech companies, products, and services tend to be far more politically driven than executive branch security review processes. Several executive branch agencies and committees also have more established processes—and dedicated staff and budget—to continuously identify situations of concern and conduct risk assessments on them, rather than focusing, as Congress generally does, on companies in the headlines. Policymakers who genuinely wish for the U.S. to have a better long-term approach to identifying and mitigating security risks in this space cannot just focus on specific actions and outcomes; they must consider process.
The coming months of government actions or policy developments around TikTok will undoubtedly keep this discussion at the top of the U.S. technology policy debate. Yet, the questions are much broader than one company, and the U.S. still has a long way to go in answering them.