What’s in a New Bill to “Warn” Americans Downloading Foreign Apps?
TikTok panic continues. Rep. Jim Banks introduced a bill to require app marketplaces and developers to “warn” Americans before they download apps linked to certain countries. What’s in the bill?
On April 21, Rep. Jim Banks introduced a bill in Congress, the Online Consumer Protection Act of 2020. Its purpose? “To require software marketplace operators and developers of covered foreign software to provide to consumers a warning prior to the download of such software.”
The press release accompanying the bill noted, “Some phone apps are fun and useful, others are counterintelligence threats.” “Parents and consumers,” Banks added, “have a right to a warning that by downloading some apps like Russia’s FaceApp or China’s TikTok, their data may be used against the United States by an adversarial or enemy regime.”
This proposal doesn’t come entirely out of the blue. For months, American policymakers have been raising concerns about TikTok, a video-sharing app popular in the U.S. and owned by Beijing-based ByteDance, and other foreign-made smartphone apps (like the Russian app FaceApp). Banks himself has been quite vocal on this front; for instance, he wrote an op-ed for Fox News last November that argued for further government investigation of TikTok’s potential national security risks, including through the Committee on Foreign Investment in the United States (CFIUS). “I would be disturbed, but not shocked, to learn that TikTok shares secrets with the Chinese government,” he wrote. “Huawei, ZTE and their ilk have already proved that the Chinese Communist Party is a master of the Trojan horse.”
But this proposal marks a significant departure from previous legislative efforts targeting TikTok and other foreign-made apps. While the most notable bill of the collection—a March 2020 bill introduced by Sen. Josh Hawley—would just prohibit use of TikTok on government employee devices, Banks’s proposal goes so far as to mandate that app marketplaces warn American citizens looking to download apps made in certain countries. So far, the bill doesn’t appear to have attracted much backing.
Even speculative bills like Banks’s, however, merit serious analysis. As the U.S. government builds out nascent policy around mobile apps and U.S. data security, such bills offer a window into lawmakers’ thinking. They can also help walk the public through the risks of certain mobile apps and highlight potential avenues for response.
Unpacking this particular bill’s text makes clear that it is uninterested in technical nuance and instead rests its risk assessment on a single question: What authorities might a given country have to force apps to hand over data or to spy on users on the state’s behalf?
What the Bill Says
The warning that the bill proposes would have to be displayed to prospective users by “software marketplace operators” (for example, the Apple App Store) and “developer[s] of covered foreign software” (when there is a separate download on the developer’s site, like with a computer program). This warning must be displayed separately from other language or disclaimers that may already be provided, like a privacy policy or terms of service. Marketplaces and app developers would have to prompt users—before they download the application—to either cancel the download or click to acknowledge the warning and proceed (much like the “I agree” or “cancel” options often found on privacy policies or terms of service agreements).
The bill specifies the warning’s appearance as follows:
“Warning: [Name of Covered Foreign Software] is developed by [Name of Developer of Covered Foreign Software], which [is controlled by a company that] [is organized under the laws of]/[conducts its principal operations in]/[is organized under the laws of and conducts its principal operations in] [Name of Covered Country]. Please either [insert description of how to acknowledge the warning and proceed with the download] if you wish to proceed with the download or [insert description of how to cancel the download] if you wish to cancel the download.”
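To make the mechanics concrete, here is a minimal sketch, in Python, of how a marketplace might assemble that statutory text. The function name, parameters and button labels are my own invention for illustration; the bill specifies none of them.

```python
# A minimal sketch of assembling the bill's warning template. The function
# name, parameters, and button labels are hypothetical, not from the bill.

NEXUS_PHRASES = {
    "laws": "is organized under the laws of",
    "operations": "conducts its principal operations in",
    "both": "is organized under the laws of and conducts its principal operations in",
}

def build_warning(software: str, developer: str, nexus: str, country: str,
                  via_controlled_company: bool = False) -> str:
    """Fill in the statutory warning template for a covered download prompt."""
    phrase = NEXUS_PHRASES[nexus]
    if via_controlled_company:
        # The bracketed "[is controlled by a company that]" clause applies when
        # the developer is merely controlled by an entity with the nexus.
        phrase = f"is controlled by a company that {phrase}"
    return (
        f"Warning: {software} is developed by {developer}, which {phrase} "
        f"{country}. Please either tap 'Acknowledge' if you wish to proceed "
        f"with the download or tap 'Cancel' if you wish to cancel the download."
    )

print(build_warning("ExampleApp", "Example Developer Ltd.", "laws", "Covered Country"))
```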
The bill defines “covered foreign software” as software made by those firms “organized under the laws” of a covered country, “whose principal operations are conducted in a covered country,” or those that are “directly or indirectly” “controlled” by an entity covered in the first two categories (presumably, for example, if a software developer in Sweden is owned by a company in a covered country).
The bill lists the covered countries as China, Russia, North Korea, Iran, Syria and Sudan. (Banks’s press release on the bill says that Venezuela is a covered country, but it isn’t listed in the bill at the time of writing.) In addition to the six listed nations, covered countries also include those the secretary of state has determined repeatedly provide support for international terrorism. Other countries can be added if the attorney general or the Federal Trade Commission (FTC) deems them to be “sources of dangerous software,” subject to secretary of state approval. Countries can be removed by the joint action of the attorney general and the FTC, with the secretary of state deciding in the event of a dispute.
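Taken together, these definitions amount to a simple classification rule. The sketch below is my own encoding of the statutory country list and the three nexus prongs; the class, field and function names are hypothetical and appear nowhere in the bill.

```python
# An illustrative encoding of the bill's covered-software test: the six
# listed countries plus the three nexus prongs (incorporation, principal
# operations, control). All names here are hypothetical, not statutory.

from dataclasses import dataclass
from typing import Optional

# The six countries listed in the bill. Others could be added via the
# secretary of state's terrorism determinations or by the attorney
# general/FTC, subject to secretary of state approval.
COVERED_COUNTRIES = {"China", "Russia", "North Korea", "Iran", "Syria", "Sudan"}

@dataclass
class Developer:
    country_of_incorporation: str
    country_of_principal_operations: str
    # Direct or indirect controlling entity, if any.
    controlling_entity: Optional["Developer"] = None

def is_covered_foreign_software(dev: Developer) -> bool:
    """Apply the bill's three prongs to a software developer."""
    if dev.country_of_incorporation in COVERED_COUNTRIES:
        return True
    if dev.country_of_principal_operations in COVERED_COUNTRIES:
        return True
    # Walk the ownership chain: a Swedish developer owned by a company in a
    # covered country is still covered under the "controlled by" prong.
    if dev.controlling_entity is not None:
        return is_covered_foreign_software(dev.controlling_entity)
    return False

swedish_dev = Developer("Sweden", "Sweden",
                        controlling_entity=Developer("China", "China"))
assert is_covered_foreign_software(swedish_dev)
```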
That China and Russia made the list is likely unsurprising to many. To date, congressional concerns about foreign-made apps (and, indeed, the current administration’s concerns) have centered on apps developed in or potentially influenced by governments in China and Russia. But some of the other listed countries have had much less of a presence, or practically none at all, in previous U.S. policymaker discussions about mobile apps and data security.
The bill defines “software marketplace operators” as one might intuit—platforms through which internet users download software products, such as the App Store for iOS or the Google Play Store for Android. But beyond a generic definition of this group, the bill leaves out critically important information about which software marketplace operators would be covered and in what cases. It says nothing about countries of incorporation or operation for those software marketplace operators.
Here the bill reveals the unending questions posed by efforts to target mobile apps. These kinds of challenges play out more broadly, too, in any attempt to regulate technology in a world where global digital supply chains are intertwined in complicated ways.
One could assume that the bill is designed to at least place restrictions on U.S.-based software marketplace operators, but what about foreign-incorporated ones operating in the United States? Would, say, an EU-based software marketplace selling software from Russia and internet-accessible in the U.S. have to put up warnings for Americans? What if the country in which that marketplace is incorporated disagreed with the national security threat label? And what about U.S.-incorporated firms that also operate overseas? Would they be in violation of the bill if they warned U.S. users about a covered app but didn’t warn their users located in Canada or Japan or India? Do warnings have to be global? Here, the bill raises the question of whether it would effectively export U.S. classifications of national security risk beyond American borders. For any government attempting to regulate mobile apps for data security purposes, this is a key determinant: Just how broad would the reach of U.S. legislation be?
These questions become even more important when looking at the practical enforcement mechanisms available when covered firms violate the bill’s requirements. Per the text of the bill, software marketplace operators and covered foreign software developers that don’t comply are subject to a range of penalties. Those in “violation” of the warning requirement are to be investigated by the FTC for violating the Federal Trade Commission Act’s provisions on unfair or deceptive acts or practices.
There are also criminal penalties—a covered software marketplace operator or foreign software developer that “knowingly violates” the warning requirements “shall be fined $50,000 for each violation.” (It’s worth noting that here the bill uses the term “knowingly[,]” but when discussing violations triggering FTC enforcement, it doesn’t use that word.) Officers of covered firms that violate the requirements “with the intent to conceal the country in which software is developed, shall be fined under title 18, United States Code, imprisoned not more than 2 years, or both.”
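Read together, the enforcement tiers stack like a small decision rule. The sketch below is my own paraphrase of that structure for illustration, not statutory text; the function and parameter names are hypothetical.

```python
# A loose sketch of the bill's enforcement tiers as a decision rule.
# This is a paraphrase for illustration, not statutory language.

def enforcement_exposure(knowing_violation: bool,
                         officer_intended_to_conceal_country: bool) -> list:
    """Map the facts of a violation to the bill's stacked consequences."""
    consequences = [
        "FTC investigation under the FTC Act's unfair or deceptive "
        "acts or practices provisions"
    ]
    if knowing_violation:
        # The $50,000 fine attaches only to *knowing* violations.
        consequences.append("$50,000 fine per violation")
    if officer_intended_to_conceal_country:
        # Officers face personal criminal exposure for concealment.
        consequences.append("officer fined under title 18, imprisoned "
                            "up to 2 years, or both")
    return consequences

print(enforcement_exposure(knowing_violation=True,
                           officer_intended_to_conceal_country=False))
```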
Finally, the bill contains language establishing liability. If a software marketplace operator doesn’t warn users about the developer’s geographic location because the developer didn’t say its application was covered under the bill, both the software developer and the software marketplace operator are liable—“the developer (as well as the software marketplace operator) shall be considered to have committed the violation.”
Within eight years of the bill’s passage, the FTC (in consultation with the attorney general) would have to submit a report to Congress on implementation and enforcement of the download warnings. The law would sunset its own provisions 10 years after its enactment.
How the Bill Fits Into the Risks
Part of the challenge of regulating apps like TikTok is that they pose a host of intermingled risks. As I have written previously, with TikTok, there are five distinct general risks: collection of data on federal government employees or contractors; collection of data on citizens not in those two categories; censorship of information within China; censorship of information worldwide, beyond China; and disinformation. The first two pertain to national security, but the last three, while perhaps connected, are not as directly related.
This bill appears most concerned with the data collection risks of foreign software, given both the press release accompanying the bill and Banks’s past focus on data security concerns around Chinese mobile apps. It is possible that censorship concerns could factor into the thinking behind the bill as well—the warning label could also serve to let a user know that an application (particularly any platform for speech) was developed in a country that has strong online censorship laws.
Assessing the severity of a data collection risk entails complicated analysis of legal, political and technical questions.
For example, on the legal front, policymakers may want to consider the laws a particular country has on the books to make a company incorporated within its borders turn over data on its users, or to lend a platform’s global reach as a tool for intelligence collection. On the political front, policymakers may want to consider that a lack of predefined legal authorities may not preclude a government from marching into a company’s building and forcing a data handover anyway. On the technical front, policymakers may want to consider such factors as what kind of data an app collects and how and where that data is stored.
These can be tough assessments. There isn’t always a lot of public information about other countries’ use of political and legal authorities to access data for intelligence reasons. The mere existence of laws on the books, while a potential indicator of risk, doesn’t mean those authorities are always or equally employed in all cases. But where there is information about the use of those authorities to access data for espionage and other state security purposes, it’s likely that those details are classified and sit with elements of the U.S. intelligence community. It’s plausible the intelligence community could provide this relevant information to a committee charged with conducting these kinds of reviews, as the executive branch currently does with CFIUS and the soon-to-be-created executive committee for screening foreign telecoms for national security risks. This potential lack of clear public information, though, means that whether a member of the public thinks a particular country is a data collection risk often amounts to a sort of Rorschach test for whether one views that country as a cybersecurity and national security threat writ large.
And technologically speaking, many apps collect far more information than the average user might suspect. Especially in a country like the U.S., which lacks a strong federal privacy law, companies may not be disclosing, or may not have to disclose, everything they collect. And they certainly may not be disclosing how they store data (for instance, whether data is encrypted and whether encryption keys are available to state authorities). This makes answers to questions of data collection and storage a bit murkier.
Banks’s press release clearly articulates that the risk assessment turns on whether countries have the ability to access data from apps. Nowhere in the bill is there any hint of grounding a labeling decision in technical considerations about the type of data collected by a given app or how that data is stored.
For instance, regarding China, the press release accompanying the bill states that “under the Chinese National Intelligence Law of 2017, the Chinese Communist Party has access to all data stored within its national boundaries.” The risk assessment here is a determination (in the bill’s case, one left up to the FTC and the attorney general for countries not already listed) that either puts a whole country in the bucket of risk or leaves it out entirely. Decisions are made on a country-by-country rather than an app-by-app basis. Why might Banks have taken this approach? For one, he’s likely trying to score political points by being tough on China. But bucketing risk by country is also a far less labor-intensive process of classification than individually examining the technical side of every app of concern.
Take a step back and consider the implications of this framework of risk assessment. The type of analysis proposed by Banks’s bill concludes that if a state has authorities to access data and systems, whether on paper or in practice, then literally every piece of software developed in that country is untrustworthy. There are two principal issues with this view. First, it is a black-or-white paradigm that overlooks potential gray areas in government access to data, even if some cases do present real national security risks. Second, and more problematic, it again ignores technical considerations—for instance, could encrypted handling of user data make a difference in a “trustworthiness” determination? There is no room in this scenario for a software developer in a country of concern to employ technical mitigation measures to address U.S. national security concerns and avoid the “warning” label.
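To make that contrast concrete, here is a minimal sketch of two hypothetical risk functions: the first mirrors the bill’s country-only test, and the second imagines a per-app test (found nowhere in the bill) that would leave room for exactly the technical mitigations the bill ignores. All names and fields are my own assumptions.

```python
# Two hypothetical risk models, for illustration only. The first mirrors the
# bill's country-by-country bucketing; the second is an imagined per-app test
# that the bill does not contain.

from dataclasses import dataclass

COVERED = {"China", "Russia", "North Korea", "Iran", "Syria", "Sudan"}

def bill_style_risk(developer_country: str) -> bool:
    # The bill's paradigm: every app from a covered country is flagged,
    # regardless of what the app collects or how it stores data.
    return developer_country in COVERED

@dataclass
class AppProfile:
    developer_country: str
    collects_sensitive_data: bool   # e.g., location, contacts
    data_reachable_by_state: bool   # stored where authorities can compel access
    end_to_end_encrypted: bool      # keys unavailable even to the developer

def per_app_risk(app: AppProfile) -> bool:
    # A per-app test would leave room for technical mitigations: a developer
    # in a covered country could avoid the flag by collecting less data or by
    # encrypting it beyond its own (and its government's) reach.
    if app.developer_country not in COVERED:
        return False
    if not app.collects_sensitive_data:
        return False
    return app.data_reachable_by_state and not app.end_to_end_encrypted

app = AppProfile("China", collects_sensitive_data=True,
                 data_reachable_by_state=False, end_to_end_encrypted=True)
print(bill_style_risk(app.developer_country), per_app_risk(app))  # True False
```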
It’s also not immediately clear why Banks proposes adding a country to the list if it is found to have “repeatedly provided support for international terrorism.” If a country has provided support for international terrorism, what does that have to do with the cybersecurity risks of an app developed by a company incorporated within its borders? Perhaps the support-for-international-terrorism assessment is intended as a sort of proxy for a country’s rule of law. Perhaps, maybe more likely, there is a concern that states that sponsor terrorism could hand over app-collected data to terror groups. But the state sponsorship of terrorism language still seems a bit disconnected from the cybersecurity risks at hand. That’s especially true when you consider that, under this bill, the FTC and the attorney general would have the authority to designate countries as covered on software security grounds even if this terror-related provision didn’t exist. So, if there were a concern about a state using its authorities to access app data and then handing it over to terrorists (certainly a clear national security risk and something the U.S. should act on), that could be a reason for the FTC and attorney general to designate the country as covered. In other words, the state sponsorship of terrorism language feels a bit redundant. Perhaps, then, that language is intended to make the bill another lever for the executive branch to apply pressure on a foreign country—again, it’s not totally clear.
And then there are the aforementioned questions that remain unanswered in the bill’s text, like whether an app marketplace incorporated in, say, Germany would have to issue these warnings to users in the United States, and whether a U.S. marketplace would have to provide these warnings just to American users or more broadly to their global user base. These questions deserve further clarification.
Another odd aspect of the bill: The prescribed warning doesn’t mention national security or cybersecurity risks. It would tell potential users only that an app was developed in a particular country. Considering that most Americans rarely read privacy policies, and that most are unlikely to know much about cybersecurity or national intelligence laws in other countries, will that kind of warning do anything to discourage people from downloading these allegedly risky software products? Informing consumers of the facts is a good thing, but it’s unclear whether a warning delivered in this way is an effective solution. If this bill were passed and a warning issued, I think it’d be worth articulating to consumers the reasons why they’re seeing the warning: for example, saying something like “the U.S. government has deemed this country to be a source of digital national security risks, as its government has unchecked authorities to retrieve this app’s collected data for intelligence purposes.”
The Hill accurately points out that “similar bills aimed at apps from foreign adversaries have seen little success.” Hawley’s proposal to ban TikTok on government devices, for example, has not been passed—though it’s worth noting that a recent Roll Call survey of 143 congressional staffers found that 129 of them don’t use the app (so maybe Hawley’s attacks on TikTok are having their desired effect). The Hill piece also points out that scrutiny of Chinese apps in particular has come especially from China hawks. But U.S. policy on these questions of mobile apps and data security is nascent. Looking at proposals of this kind—unpacking everything from the underlying assumptions, to the way a risk assessment is made, to the tools for addressing those risks—is an important exercise in thinking through different risks and building out different approaches to data and national security concerns.
It also helps in playing out the potential ramifications and collateral effects of data security policies around mobile apps, which in many cases deserve much more thought. For instance, the U.S. must recognize that if it’s trying to get other countries on board with such rules, some will raise eyebrows—fairly or unfairly—at any U.S. argument about a state’s digital snooping via apps, given that leaks of classified information have demonstrated U.S. data-access practices for intelligence purposes. The U.S. may therefore need clearer evidence of malfeasance on another country’s part before labeling it a data-collection bad actor.
The U.S., to give another example, should also avoid replicating digitally protectionist policies enacted and enforced in countries like China. This would overly engage the state in regulating digital technologies and risk justifying digital protectionism abroad. It must therefore carefully consider the implications of further influencing foreign access to the U.S. market for national security reasons without clearly articulated processes and criteria.
In the end, this bill largely disappoints. It forgoes any analysis of technical nuances. Instead, it makes risk assessments on a broad, whole-of-country basis: Once a country is deemed a source of risky software, all software made in that country is effectively labeled untrustworthy and no technical measures a covered developer takes can change that.
Putting up download warnings on an app store might not be the kind of “decoupling” many people think of, a word that more likely conjures images of strict foreign investment reviews and technical blocking of cross-border data flows. Nor is this proposal focused solely on the United States and China, as most decoupling conversations are. But in defining sources of dangerous software on a country-by-country basis, this flavor of risk assignment—even if ultimately justified in particular cases—certainly falls into the bucket of broad-strokes remedies proposed in the name of decoupling.