Two New Democratic Coalitions on 5G and AI Technologies
The coalitions aim to focus on norm development, market supply and other issues around 5G telecommunications technology and artificial intelligence.
In May and June, two groups of democracies forged separate coalitions seeking to increase security and promulgate shared values in cyberspace. The coalitions aim to focus on norm development, market supply and other issues around 5G telecommunications technology and artificial intelligence.
These two initiatives are additions to the already crowded space of global digital norms formulation. Several multilateral, multi-stakeholder and subject-specific norms formulation processes are already at play. These include multilateral efforts such as the Group of Governmental Experts (GGE) and Open Ended Working Group (OEWG) processes underway at the United Nations, multi-stakeholder agreements such as the Paris Call for Trust and Security in Cyberspace and a number of ethical AI frameworks being put forward by the private sector and civil society. So it’s fair to ask what these two new coalitions will bring to the table. But despite being in their formative stages, both coalitions are worth watching for their ability to shape norms and for their strategic positioning—specifically, the exclusion of China and Russia.
The first coalition—dubbed the “D10” by the British government—is an alliance of ten democracies led by the United Kingdom. As The Times (UK) reported, the aim is to create alternative suppliers of 5G equipment and other technologies in order to avoid relying on the Chinese telecommunications giant Huawei. Meanwhile, the Global Partnership on Artificial Intelligence (GPAI), announced on June 16, will focus on charting rules of the road for governing artificial intelligence technologies. Armed with a secretariat hosted by the Organisation for Economic Co-operation and Development (OECD) in Paris and two “centres of expertise” in Montreal and Paris, the consortium seeks to “support the responsible and human-centric development and use of AI.”
Both coalitions’ exclusion of China and Russia is notable, as is their ambition of increasing digital cooperation among democracies. Further, the “D10” 5G alliance is focused on a single technological use case—5G deployment in wireless networks—while the GPAI’s focus on AI means that it will have to engage with multiple use cases, such as facial recognition, autonomous weapon systems and algorithmic content moderation. How effective are these coalitions going to be?
The D10 includes the G7 countries—Canada, France, Germany, Italy, Japan, the United Kingdom and the United States—along with Australia, India and South Korea. As one of us (Sherman) has noted, the members of the D10 have a wide range of responses to Huawei 5G technology—from those that have joined the U.S. in opposing the Chinese telecom (e.g., Australia) to those that have not implemented the ban on Huawei proposed by the U.S. (e.g., South Korea). These member nations also have differing trade relationships with China, none of which are as acrimonious as the U.S.-China dynamic. Notably excluded from the group are China and Russia—per the D10’s aspirations to be a democratic coalition. But this 5G club also does not include many other democracies that might theoretically be interested in such a venture (e.g., Israel or Norway).
The explicit purpose of the D10 is to explore alternatives to 5G technology developed by Huawei. Worries about Huawei’s market dominance—and global concerns about Chinese espionage through Huawei—are the formative component of the 5G club of democracies.
As of now, it’s not clear what actions the coalition might take on 5G beyond the broad objective of exploring alternatives to Huawei 5G technology. One can imagine proposals within the coalition ranging from industrial policy formulation efforts to more active engagement in standard-setting bodies to contest Huawei 5G proposals.
Unlike the D10, the GPAI’s official motives for formation cannot be pinned to a single concrete objective. Laying down global rules of the road for governing AI has been on the OECD’s agenda since 2016-17, and the organization has helped drive international discussion on artificial intelligence issues since then. In May 2019, this work culminated in the OECD Principles on Artificial Intelligence—the first set of multilateral AI principles signed onto by governments, including non-OECD members such as Argentina and Brazil. The GPAI appears to be the next step in the OECD’s AI efforts as it attempts to champion AI grounded in “human rights, inclusion, diversity, and economic growth.”
Like the D10, the GPAI membership comprises all G7 member states. But it also includes India, the Republic of Korea, Singapore, Slovenia and the European Union. Like the D10, the GPAI excludes Russia and China—which are both active players in the research, development and deployment of AI. Reporting indicates that the U.S. was initially skeptical of the group, fearing that it would only replicate existing initiatives and lead to further bureaucracy, but ultimately dropped its opposition in a bid to counter authoritarian uses of AI by China.
With the exception of India and Slovenia, all GPAI members have also signed onto the “Osaka Track” on the digital economy, which advocates for a global data governance framework of “data free flow with trust”—that is, the free flow of data across borders with certain trust safeguards in place. The Osaka Track was launched by Japanese Prime Minister Shinzo Abe at the 2019 G20 summit in Osaka to foster discussions on international rule-making for e-commerce at the World Trade Organization (WTO). It is worth watching closely how this divergence of opinion on cross-border data flows plays out within the GPAI. Questions of data availability, portability and access are central to devising cooperative frameworks on responsible and innovative AI, and some common ground will need to be reached if the coalition is to bear fruit.
The GPAI seeks to foster a multi-stakeholder approach bringing together leading experts from industry, civil society, governments and academia to collaborate across four working group themes: 1) Responsible AI; 2) Data Governance; 3) The Future of Work; and 4) Innovation & Commercialization. Crucially, it will also investigate how AI can be leveraged to respond to the global COVID-19 pandemic. The GPAI will work through what it calls “technical work” and “international AI policy leadership,” with administrative and research support from the centres of expertise.
At this stage, it appears that the GPAI is designed to serve as a forum for discussion and coordination on technical and policy research that furthers the shared democratic vision of the member states. There is not yet an apparent mission statement on short-, medium- or long-term goals that this initiative seeks to achieve; no clear plan of action or work plan; and no clear sense of how the GPAI will add to other collaborative efforts in the AI space, such as the similarly named but presently unconnected Partnership on Artificial Intelligence (PAI). PAI has a very similar high-level research agenda but no government membership. Instead, it has more than 100 members from civil society and industry across more than 13 countries. Baidu left PAI back in June, leaving that coalition with no member firms from China.
Multilateral organizations have often been dismissed as mere talking shops that are unable to attain any sustainable goals due to a lack of diversity, commitment, capacity or alignment of objectives among members. This critique has been levied both at large organizations like the United Nations and smaller regional organizations such as the Association of Southeast Asian Nations (ASEAN). But some multilateral coalitions have proved successful.
For instance, to use a non-tech example, the coalition forged through the Lysoen Declaration efforts—which started off as an undertaking between Canada and Norway but was subsequently extended to include 11 other like-minded countries and several NGOs—aimed to create a cooperative framework for enhancing human security. This process resulted in hugely significant global policy outcomes such as the banning of landmines, the formation of the Kimberley Process to stem the flow of conflict (“blood”) diamonds and the establishment of the International Criminal Court. Unlike the rigid and hierarchical U.S.-controlled coalition that came up with purported reasons for invading Iraq (the multilateral effort some may think of when they hear “coalition”), the Lysoen Declaration thrived on group interaction, issue-specific leadership and a bottom-up multi-stakeholder process.
As the GPAI and D10 processes continue, the Lysoen Process and other successful multilateral efforts offer important lessons.
For one, limited membership and a clear alignment of objectives may render these organizations more agile and incisive. Like the Lysoen Process, the GPAI’s broad mandate and multi-stakeholder approach allows it to foster dialogue and research on a wide range of issues and thereby cultivate and diffuse norms for responsible AI across multilateral fora and jurisdictions. In other words, bringing in a larger number of stakeholders increases the potential for effective action at scale, even if it raises collective action challenges. The key to success lies in clearly defining goals and objectives, rather than simply issuing statements that do not drive action.
It is also worth noting that, in order to influence global norm-setting on AI, the GPAI may need to consider expanding its membership at some point once the founding members have agreed on key principles and mechanisms of engagement. Given the significant economic, technological and demographic clout that both China and Russia exercise, the GPAI might need to at least consider modes of engagement with them and other autocracies. This is not to say that their involvement should be on the same terms as other countries’, but that some engagement may yield positive results for the current members. If meaningful engagement is deemed unfeasible, then the GPAI will need to work out ways in which its goals can be accomplished without it.
The D10, on the other hand, has the clearly defined objective of thwarting Huawei—but the concern here is whether all member states have the economic capacity and political desire to do so. One might argue that the D10 initiative has better potential now than it did six months ago, given that its members are increasingly taking a harder line on Huawei 5G equipment. The United Kingdom has banned Huawei from its 5G telecom network, revising a January decision that had allowed Huawei to play a limited role in 5G development in the U.K. Operators such as BT and Vodafone have until 2027 to remove existing Huawei equipment from their U.K. networks. Meanwhile, Singapore has chosen Nokia and Ericsson over Huawei to build its 5G networks. And following recent clashes between India and China in the disputed Galwan Valley, India is also reportedly trying to identify and curtail investment approvals for Chinese technology companies with direct or indirect links to the Chinese government or military—a U-turn from its 2019 decision to allow Huawei to participate in Indian 5G trials.
Banning Huawei 5G equipment may be a useful, short-term strategic goal, but to be truly effective, the members of the D10 will need to work together to effectively shape outcomes at global standard-setting fora and offset China’s standard-setting clout. This may be a more effective solution over the long term than unilateral and reactive action by individual countries. Rather than unilaterally banning Huawei 5G technology, it is more productive to incentivize domestic 5G development through effective industrial policy formulation and robust international cooperation from the start.
Ultimately, the success of both these initiatives will depend both on how clearly the organizations articulate their objectives and on what they do to reconcile internal differences as they pursue those objectives. They will certainly never serve as an adequate replacement for the robust and inclusive multilateral and multi-stakeholder processes currently shepherded by the likes of the United Nations. However, given the continuing deadlock in those larger negotiations, both the D10 and the GPAI could help members attain short-term strategic goals—such as limiting the proliferation of Chinese AI and 5G technologies—while also laying the groundwork for the diffusion of global norms and standards. As authoritarians push to undermine global standards processes and advance technological policies, practices and norms harmful to free speech and human rights, it is more important than ever for democracies to cooperate on these digital issues, ensure their own tech sectors are in line with democratic values and use their combined force to propel change at a global scale.