A New Tool for Tech Companies: International Law

Ashley Deeks
Thursday, May 30, 2019, 11:49 AM

These days, many people see technology companies as indifferent to law, or at least interested in remaining under-regulated. When Mark Zuckerberg called on Congress to regulate how social media companies should handle challenges such as harmful content and data privacy, the request was unusual enough to make headlines. This real or perceived disinterest in legal regulation has troubled a host of people, including those worried about protecting privacy and freedom of expression.

Then-Secretary of Defense Ash Carter visits Facebook headquarters in 2015. (Source: Department of Defense)

Published by The Lawfare Institute
in Cooperation With
Brookings


But there may be another story to be told here too—at least the start of one. In the past two years, a number of companies have invoked international law justifications to decline to make their products available to states that, in their view, will use those products to violate international law. Put another way, a number of corporate actors have made decisions that effectively enforce international law against states, or at least make it harder for those states to undertake acts that violate international law. Because people don’t tend to think of corporations as actors that monitor and regulate international law compliance, these corporate examples are worth analyzing.

Take the example of Google and Project Maven. Project Maven is a Department of Defense program that uses artificial intelligence (AI) to sort and analyze video imagery (such as that from drone feeds). Google worked with the Defense Department on the program, but in the summer of 2018, some 4,000 Google employees signed a petition objecting to the project. Although the employees’ letter did not specifically argue that the U.S. military was violating international law, that concern is implicit. The petition asserted that “[b]uilding this technology to assist the US Government in military surveillance—and potentially lethal outcomes—is not acceptable.” Then-Google Chairman Eric Schmidt linked that concern to the legality of such killings when he stated, “[T]here’s a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly, if you will.”

In the wake of the Maven dispute, Google adopted a set of principles committing not to pursue certain types of AI applications. That list includes “technologies that gather or use information for surveillance violating internationally accepted norms” and “technologies whose purpose contravenes widely accepted principles of international law and human rights.” While reasonable people disagree about whether the U.S. use of targeted killings violates international law, Google’s practice reflects new attention by a U.S. company to international legal norms and to whether their state customers are complying with those norms.

Microsoft is also talking the language of human rights in explaining why it has declined to sell facial recognition software (FRS) to governments. President and chief legal officer Brad Smith told the press that the company has “turned down business when we thought there was too much risk of discrimination, when we thought there was a risk to the human rights of individuals.” Microsoft recently made news for declining to sell FRS to a California law enforcement agency, and Smith said that the company also turned down a deal to install FRS cameras in the capital city of a country that Freedom House had designated as “not free” because it worried that the country would use the tool to suppress freedom of assembly.

Here’s another example: At a lecture I attended a few years ago, a Facebook policy official described how Facebook deals with law enforcement requests from countries around the world. The official stated that, before turning data over, Facebook assesses whether sharing information with the state that has made the request for content would be consistent with the International Covenant on Civil and Political Rights. That apparently includes an analysis of whether the state provides basic due process rights to defendants. More generally, Facebook has said that when it regulates speech on its platform it “look[s] for guidance in documents like Article 19 of the International Covenant on Civil and Political Rights (ICCPR), which sets standards for when it’s appropriate to place restrictions on freedom of expression.” (It’s worth noting that Article 19 is in some ways less protective than the First Amendment, so relying on the ICCPR may be a way for Facebook to legitimize decisions that some Facebook employees or users see as insufficiently protective of speech.)

There’s another, less clear-cut example that also involves Facebook. In August 2018, as the Myanmar military was engaged in extensive violence against the Rohingya, Facebook removed the accounts of the Myanmar army chief and other military officials because they were spreading “hate and misinformation.” As a practical matter, the ban made it much harder for the military to communicate with the public. Here, the company sought to prevent state actors engaged in rights violations from using its product, though it did so only after learning that United Nations investigators had accused the army of carrying out mass killings and gang rapes with “genocidal intent” and had identified Facebook as facilitating the violence.

Consider, too, a more obscure example involving the exposure of Chinese government hackers. Though not a company, a group of private actors called Intrusion Truth decided to publicly identify Chinese government hackers who were working for the Ministry of State Security. Their reason for doing so? These hackers were violating the U.S.-China memorandum of understanding prohibiting economic espionage. There are other indications that cybersecurity firms might be more inclined to disclose information about the state cyber operations they discover where the state actor is violating international law.

For a long time, corporations have played a role in states’ efforts to enforce international law against other states. When the United States imposes sanctions on corporations in Iran, those sanctions have bite because they preclude U.S. companies from doing business with the Iranian companies. Efforts to force South Africa to abandon apartheid relied in part on corporate divestment. And of course companies try to enforce international law when they themselves are the victims (as when they bring cases against a state under a bilateral investment treaty). But it is less common to encounter cases in which companies make commercial decisions that function to protect others who have suffered or who may suffer an international law violation (e.g., the Rohingya, corporate targets of Chinese espionage).

This development has clear parallels to corporate social responsibility (CSR) efforts. One key idea behind CSR is that corporations should voluntarily respect human rights and should not, for instance, tolerate human rights abuses in their supply chains. Like more traditional CSR efforts, the tech companies’ invocation of human rights norms has two effects: It limits certain corporate opportunities that might facilitate rights abuses, and it also may reduce the opportunity for states to engage in international law violations, as when a corporation commits not to work with or hire actors in a state that are known to have engaged in arbitrary killings.

There are a few questions worth asking about this trend—if it is a trend at all.

(1) What is motivating the corporations? Corporations are not necessarily enforcing international law against states because they are true believers in that body of law. True, in some cases, the corporations may affirmatively support the underlying international norms (such as opposition to genocide, freedom of expression or international humanitarian law) as a matter of their corporate values. In other cases, though, corporations undoubtedly are pursuing their self-interests in a way that happens to align with an underlying international legal norm. Specifically, they may see efforts to enforce international law as helping—or at least avoiding harm to—their reputations and, thus, bolstering their bottom lines.

There are at least three other reasons why companies may be invoking international law as the source of guiding norms. First, they may see international law as a reflection of a “consensus” approach to what the rules should be, particularly in the absence of domestic regulation or enforcement. Second, for planning purposes the companies may like the certainty of having some guardrails, absent clear domestic rules. Third, they may be invoking international law to signal to legislators that there are sufficient existing and legitimate rules and that there is therefore no need for further government regulation. In most settings, though, the international laws that the companies are invoking don’t actually bind them. This means that they can use international law as a tool to set and justify their policies without facing sanctions if they choose to disregard that law.

(2) How are corporations interpreting international law? We might find corporate decisions to disempower state “bad actors” intuitively appealing. But this approach places international law interpretation squarely in corporate hands, and companies might interpret the rules in a way that states disagree with. One possible example of this is the Boycott, Divestment and Sanctions movement, pursuant to which some companies are boycotting Israel on the grounds that it is engaged in international law violations, a position that is in tension with U.S. foreign policy. There may also be parallels between corporate enforcement of international law and plaintiffs’ efforts in Alien Tort Statute (ATS) cases. In the latter context, the United States has argued that ATS cases can create foreign policy challenges because certain lawsuits create diplomatic friction at a time when the U.S. government is trying to work cooperatively with a state whose officials are being sued. Corporate decisions to refuse to do business with certain governments could encounter this same issue, though tech company actions to date do not seem to conflict with U.S. policy goals.

(3) Why now? Companies now have the capacity to do things that typically only states could do, such as detect foreign spying and cyber operations. Companies also are the ones making the tools that foreign governments use to engage in rights violations. In some cases, tech and cyber companies may have even more data than governments when it comes to assessing other states’ compliance with international law. As long as companies dominate the production of these national security tools, they are positioned to make choices that disempower international law violators. If more of the production moves inside the government, this leverage will disappear. This suggests that the states most likely to remain vulnerable to this “enforcement” are those that are less technologically sophisticated and that need to purchase their national security tools from companies.

(4) More international law or less? If some companies had their way, there would be even more international law to enforce in the future. Microsoft has called for a Digital Geneva Convention. It also joined forces with the French government to develop the “Paris Call,” a declaration urging states to reaffirm the applicability of existing international law in cyberspace and to cooperate to suppress cyberattacks and election interference. Microsoft’s spokesperson said that the company welcomes “actions that help build greater consensus with regard to cybersecurity, particularly around the need for binding, international norms of nation-state behavior in cyberspace.” As Lawfare readers know, the world is a long way from establishing new global cyber norms, but the interest by some companies in the substance and legitimizing power of international law might suggest an appetite to continue to develop these norms informally, through corporate action against norm violators.

(5) Does this suggest new strategies for nongovernmental organizations (NGOs)? If corporations really are becoming more attentive to and savvy about using international law to bolster their reputations, that makes them a ripe target for human rights groups and other NGOs, which constantly seek new tools by which to enforce human rights norms. In other words, if international law is a new tool for tech companies, tech companies are a new tool for human rights groups and other international law advocates. Some groups are already aware of this: The Global Network Initiative has developed principles urging companies to respect freedom of expression and privacy when faced with pressure by states to take steps in tension with those norms. A number of tech companies have signed on, including Google, Facebook, Microsoft and Nokia.

There is not enough evidence to claim that an international law wave is sweeping through tech companies. Indeed, tech companies are just as well positioned to facilitate state abuses of international law as they are to enforce it. (Think of the NSO Group, an Israeli company that reportedly created malware that enabled states to spy on WhatsApp users, including human rights advocates and journalists such as Jamal Khashoggi. And a shareholder push to force Amazon to decline to sell facial recognition software to governments unless the board concludes that the technology doesn’t facilitate human rights violations just failed.) But as long as states and their officials remain avid tech consumers, these companies have the potential to shape state behavior in ways that more closely track with the states’ international law obligations, and they will do so when it is in their financial interest. For better or worse, the companies also have the potential to shape the popular understanding of what these obligations are. Either way, tech companies’ attention to international law is growing.


Ashley Deeks is the Class of 1948 Professor of Scholarly Research in Law at the University of Virginia Law School and a Faculty Senior Fellow at the Miller Center. She serves on the State Department’s Advisory Committee on International Law. In 2021-22 she worked as the Deputy Legal Advisor at the National Security Council. She graduated from the University of Chicago Law School and clerked on the Third Circuit.