
The Policy Dimension of Leading in AI

Elsa Kania
Thursday, October 19, 2017, 10:30 AM



The artificial intelligence (AI) revolution is creating new challenges for law, policy, and governance at domestic and international levels. Although advances in AI could drive productivity growth and spark a new industrial revolution, current trends in robotics and automation will likely cause unprecedented economic dislocation, particularly by replacing low-skilled jobs. Projected to boost global GDP by 14% by 2030, AI is on track to become a critical accelerant of global economic growth. Yet the greatest economic benefits from AI will likely go to China and the United States, with more modest gains in growth and productivity for developing countries. Such an uneven distribution of AI’s benefits could exacerbate inequality, resulting in higher concentrations of wealth within and among nations. In addition, algorithmic bias tends to reflect and amplify human and systemic biases, compounding societal inequities. Concurrently, international competition to leverage military applications of AI has provoked concerns of an “AI arms race.” Looking forward, these trends could disrupt domestic and international politics alike.

Despite dedicated attempts to ensure “AI for good,” the AI revolution could thus intensify existing disparities of power. Russian President Vladimir Putin recently declared, “Whoever becomes the leader in [AI] will become the ruler of the world.” As I’ve written before, advances in AI could transform, or even revolutionize, military capabilities. For this reason, China has articulated its ambition to “lead the world” in AI by 2030, intending to leverage AI as a “new engine for economic growth” and a guarantor of national defense. Such ambitions threaten the U.S. private sector’s dominance in AI, as China has started to outperform the U.S. across a number of metrics, including the number of publications and patents. (Beyond mere quantitative superiority, Chinese research teams dominated the last ImageNet competition, an AI contest for image recognition, and, in the inaugural WebVision challenge, the successor to ImageNet, the Chinese AI start-up Malong Technologies bested over 100 competitors.) The continued competition among superpowers, and perhaps more importantly among the world’s top technology companies, to advance in AI could spur innovation for positive purposes, like enhancing human well-being, or could cause unanticipated negative consequences.

Going forward, national policy choices and international engagement will inevitably influence the trajectory of AI’s development. The potential for AI policy, along with the underlying question of whether AI should be regulated at all, has started to provoke debate, even as it is sometimes dismissed as premature. At the domestic level, national governments looking to enhance their competitiveness in the AI revolution might pursue policies targeting strategic investments in cutting-edge research and development (R&D), the education and recruitment of leading talent in the field, and the open-sourcing and availability of data and platforms for AI development. To mitigate the adverse effects of AI, governments might focus on issues like workforce adjustment, or even universal basic income, along with measures to ensure the safety of AI systems and to correct for their potential biases.

At the international level, nation-states, the private sector, and civil society will all have roles to play in establishing norms and devising legal and ethical frameworks governing the use of AI-enabled and autonomous systems. For instance, the UN Group of Governmental Experts (GGE), working under the Convention on Certain Conventional Weapons, has explored the risks of lethal autonomous weapons systems (LAWS). This summer, the AI for Good Global Summit also highlighted the potential for international cooperation to ensure that AI helps address “humanity’s grand challenges,” such as advancing the UN’s sustainable development goals.

Despite a promising start, the U.S. government has achieved only limited progress in these dimensions of AI leadership. The Obama administration notably published three reports on the future of AI in late 2016, addressing issues including economic impacts, workforce challenges, and R&D. These reports highlighted critical policy priorities: the importance of greater funding for R&D, the imperative of expanding the AI workforce, and the overarching objective of capturing the economic opportunities that the AI revolution offers while mitigating the adverse impacts on the U.S. workforce. However, it is unclear whether these policy issues will remain a priority under the present administration. To date, the Office of Science and Technology Policy (OSTP), which played a leading role in these efforts, has remained largely unstaffed, perhaps depriving the administration of critical expertise to carry forward a national strategy in AI. In the meantime, these same recommendations may have inspired Chinese policymakers.

At present, U.S. engagement with these issues remains fairly nascent, and it is unclear whether the current administration will build upon those reports from late 2016. The latest proposed budget would cut AI research at the National Science Foundation by 10%, to a mere $175 million, even as the continued decline in U.S. government funding for basic research has provoked concerns of an “innovation deficit.” Despite predictions that automation could cause levels of unemployment exceeding those of the Great Depression, U.S. Treasury Secretary Steve Mnuchin has stated that AI workforce issues are “not even on our radar screen.” Yet technology has already become (subscription required) a more powerful driver of job losses than trade and globalization. As trends toward greater automation exacerbate these dynamics, the U.S. education system continues to underperform in STEM and is not adequately preparing today’s students for future job opportunities.

Beyond questions of competitiveness, the U.S. also has yet to create a clear framework of laws and policies to address AI safety and bias. These issues will become more acute as self-driving cars hit the roads and algorithms become ever more pervasive. It also remains unclear whether the U.S. government will remain diplomatically engaged with the international dimensions of AI, such as through the GGE on LAWS, or will seek to lead the creation of norms and frameworks for the use of AI in warfare.

Concurrently, beyond its quest for technological leadership in AI, the Chinese government is actively seeking to create a comprehensive framework for national AI policy and strategy. In July 2017, it released the New Generation AI Development Plan, which formulates an ambitious and relatively comprehensive agenda for advances in AI. Although elements of the plan remain vague and aspirational, its release reflects the high-level focus on these issues and the prioritization of continued progress. China’s National Development and Reform Commission has approved the establishment of a National Engineering Laboratory of Deep Learning Technology, led by Baidu. The Ministry of Science and Technology will create (link in Chinese) an AI Plan Promotion Office to oversee and advance the plan’s implementation, likely including a multibillion-dollar funding effort for next-generation research and development. The Ministry of Industry and Information Technology (MIIT) also seeks (link in Chinese) to take a leading role in China’s AI policy, such as by developing industry-specific AI action plans that are supplemented by policies at the provincial, and even municipal, levels. In addition, China’s Ministry of Education will take responsibility for creating new educational programming to build a pipeline of AI talent and to retrain workers displaced by AI, in order to mitigate issues of social instability that might arise.

Although these efforts are only in their early stages, the plan highlights a number of priorities for China’s future AI policy. Of note, the plan addresses issues of AI safety, calling for the establishment of an AI safety supervision and evaluation system focused on issues of both employment and social ethics. This system will include a robust regulatory mechanism that allows for oversight, such as through standards and testing methods, to ensure the safety and performance of AI products and systems. The plan also calls for China to develop laws, regulations, and ethical norms that promote the development of AI, as well as for greater research on these legal, ethical, and social issues. Certainly, these dimensions of AI policy are, in the abstract, positive and important objectives for the Chinese government to pursue. However, the ethical considerations guiding the development of AI in China will be informed by the interests and imperatives of the Party-State, and will thus likely diverge from how these issues are approached in the U.S. The same plan also calls for the use of AI to enhance “social management,” such as by automating censorship and surveillance. It is clear that big data has already become a tool for social control. The emergence of AI-enabled mechanisms of social control in China could reinforce the regime’s power and might even proliferate to other authoritarian systems, potentially undermining human rights worldwide in the process.

Looking to the future, it is clear that becoming a world “leader” in AI will require not only technological innovation but also sound policy choices. Since the release of its new plan, China has been actively formulating a national strategy to leverage AI, while planning to participate actively in the future “global governance” of AI. The U.S. must work towards implementing its own national strategy to ensure economic competitiveness and national security, with a focus on human capital, strategic investments, and greater public-private partnership. At a time when the perceived U.S. retreat from international engagement has created an opening for China to fill, Chinese leadership plans to deepen international cooperation on AI laws, regulations, and global norms. At the international level, the U.S. should also look to exercise leadership on AI governance, including the risks that AI poses to military and strategic stability. If the U.S. seeks to remain a global leader in AI, these challenges will merit serious and continued consideration, as well as rapid action.


Elsa B. Kania is an adjunct senior fellow with the Technology and National Security Program at the Center for a New American Security. Her research focuses on U.S.-China relations, China’s military strategy, defense innovation and emerging technologies. She has been invited to testify before the House Permanent Select Committee on Intelligence, the U.S.-China Economic and Security Review Commission, and the National Commission on Service. Her views are her own.
