
Principles of AI Governance and Ethics Should Apply to All Technologies

Herb Lin
Friday, April 12, 2019, 11:59 AM

Despite Google’s recent dissolution of its artificial intelligence (AI) ethics board, IT vendors (including Google) are increasingly defining principles to guide the development of AI applications and solutions. And it’s worth taking a look at what these principles actually say. Appended to the end of this post are the principles from Google and Microsoft, thoughts from Salesforce.org (closely aligned with Salesforce), and AI principles from three groups not aligned with specific companies.

Viewed from a high level of abstraction, three major points stand out for me:

  • As articulated, the principles are unobjectionable to any reasonable person. Indeed, they are positive principles that are valuable and important.
  • They are broadly framed and highly subjective in their interpretation, a point that should focus attention on precisely who will be making those interpretations in any given instance in which the principles could apply. The senior management of a company? The developers and coders of particular applications? The customers? Elected representatives? Career civil servants? The United Nations? A representative sample of the population? One could make an argument—or counterargument—that any of these actors should be in a position to interpret the principles.
  • Perhaps most importantly, none of the principles is particularly related to artificial intelligence. This can be shown by simply replacing “autonomous” or “AI” (when used as an adjective) with “technology-based,” and replacing “AI” (when used as a noun) with “technology.”

I conclude from this high-level examination of these principles that they are really a subset—indeed a fully contained subset—of ethical principles and values that should always be applied across all technology development and applications efforts, not just those related to AI. In the future, I’d like to see technology companies—of all types, not just those using AI—make explicit commitments to the broader set of principles for technology governance.

Of course, questions would remain about the subjectivity of interpretation and the locus of decision-making. But even lip service to principles of technology governance is better than the alternative—which is disavowal of them through silence.

AI Governance Principles From Various Companies and Organizations

AI principles from Microsoft:

Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values.

  • Fairness: AI systems should treat all people fairly
  • Inclusiveness: AI systems should empower everyone and engage people
  • Reliability & Safety: AI systems should perform reliably and safely
  • Transparency: AI systems should be understandable
  • Privacy & Security: AI systems should be secure and respect privacy
  • Accountability: AI systems should have algorithmic accountability

AI principles from Google:

We will assess AI applications in view of the following objectives. We believe that AI should:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested for safety.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles.

Salesforce (and salesforce.org):

AI holds great promise — but only if we build it and use it in a way that’s beneficial for all. I believe there are 5 main principles that can help us achieve beneficial AI:

  • Being of benefit
  • Human value alignment
  • Open debate between science and policy
  • Cooperation, trust and transparency in systems and among the AI community
  • Safety and Responsibility

European Commission:

AI should respect all applicable laws and regulations, as well as a series of key requirements; specific assessment lists aim to help verify the application of each of these requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Asilomar AI Principles:

Artificial intelligence has already provided beneficial tools that are used every day by people around the world. Its continued development, guided by the following principles, will offer amazing opportunities to help and empower people in the decades and centuries ahead.

Ethics and Values

  • Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
  • Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
  • Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
  • Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
  • Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
  • Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
  • Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
  • Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
  • Shared Benefit: AI technologies should benefit and empower as many people as possible.
  • Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
  • Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
  • Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
  • AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

Attendees at the New Work Summit, hosted by the New York Times, worked in groups to compile a list of recommendations for building and deploying ethical artificial intelligence:

  • Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.
  • Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.
  • Privacy: Users should be able to easily opt out of data collection.
  • Diversity: A.I. technology should be developed by inherently diverse teams.
  • Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
  • Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.
  • Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.
  • Collective governance: Companies should work together to self-regulate the industry.
  • Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.
  • “Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.

Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
