The EU’s White Paper on AI: A Thoughtful and Balanced Way Forward

Mark MacCarthy, Kenneth Propp
Thursday, March 5, 2020, 8:00 AM

The proposed framework represents a sensible and thoughtful basis to guide the EU’s consideration of legislation to help direct the development of AI applications.

The European Commission Building in Brussels (By: Guilhem Vellut, https://flic.kr/p/dsCqny; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)


On Feb. 19, the European Commission released a White Paper on Artificial Intelligence outlining its wide-ranging plan to develop artificial intelligence (AI) in Europe. The commission also released a companion European data strategy, aiming to make more data sets available to business and government to promote AI development, along with a report on the safety of AI systems proposing some reforms of the EU’s product liability regime.

Initial press reports about the white paper focused on how the commission had stepped back from a proposal in its initial draft for a three- to five-year moratorium on facial recognition technology. But the proposed framework is much more than that: It represents a sensible and thoughtful basis to guide the EU’s consideration of legislation to help direct the development of AI applications, and an important contribution to similar debates going on around the world.

The key takeaways are that the EU plans to:

  • Pursue a uniform approach to AI across the EU in order to avoid divergent member state requirements forming barriers to its single market.
  • Take a risk-based, sector-specific approach to regulating AI.
  • Identify in advance high-risk sectors and applications—including facial recognition software.
  • Impose new regulatory requirements and prior assessments to ensure that high-risk AI systems conform to requirements for safety, fairness and data protection before they are released onto the market.
  • Use access to the huge European market as a lever to spread the EU’s approach to AI regulation across the globe.

The commission’s urging of new regulations and ex ante reviews is a response to widespread concerns that many automated decision systems have slipped into use without being properly vetted. Examples are well known: facial recognition bias detected after the fact by independent researchers and confirmed by the U.S. National Institute of Standards and Technology; a Dutch welfare-fraud detection system found by a court, after implementation, to violate human rights; bias in recidivism scoring systems widely used in the United States uncovered by an investigative journalist; similar bias in a proposed Spanish recidivism score; and racial bias in a health care delivery algorithm discovered by an independent researcher after the algorithm had been in use for years.

The commission intends its report to assert a distinctive European position on AI regulation. It sets out, as the companion data strategy proclaims, a “vision [that] stems from European values and fundamental rights and the conviction that the human being is, and should remain, at the center.” In other words, the EU approaches AI with privacy and other human rights front of mind, in contrast with the less comprehensive approach taken by the U.S.—not to mention China’s lesser scruples about human rights.

The commission also seeks to stake a claim to global leadership in developing AI based on industrial and government data. It sees “new opportunities for Europe, which has a strong position in digitized industry and business-to-business applications, but a relatively weak position in consumer platforms.”

While Americans may caricature the commission’s position as just the latest example of its “rules first” approach to technology regulation, in fact it favors selective, sector-specific regulation over the comprehensive, top-down method pursued in the EU’s 2016 General Data Protection Regulation (GDPR), for example. To examine the commission’s approach, we will focus on several key features of the new regulatory framework suggested in the white paper: the paper’s definition of AI, its proposed method for assessing high risk, its proposed requirements for high-risk AI applications, its requirement for prior conformity assessment, its suggestions for changes to liability regimes, and its proposal for enforcement.

Definition of AI

The white paper’s definition of artificial intelligence reflects a standard textbook notion of AI as a set of computational techniques designed to allow a computer system to interact with its environment in order to achieve a given objective. This expansive definition would seem to cover systems that use newer techniques such as machine learning as well as systems that rely on older statistical methods such as linear regression. But the commission’s rhetoric and discussion suggest that it is principally focused on the threats posed by the use of techniques like machine learning.

However, to respond effectively to real risks, the new requirements and prior testing regime should apply broadly. Existing recidivism scoring systems, for instance, create a substantial risk of discrimination against African Americans, but they use traditional methods of statistical analysis such as regression. It makes sense to apply the new requirements and prior testing regime to these systems, even though they do not use new machine learning techniques.

The best way forward for the commission would be to apply the new requirements and prior review to all high-risk automated decision-making systems. A suitably broad notion of automated decision-making would clearly encompass systems built on machine learning as well as those built on regression analysis. Several model definitions already exist, including those in the Canadian directive on automated decision-making and in the proposed U.S. Algorithmic Accountability Act of 2019.

Assessing High Risk

The white paper distinguishes high-risk AI applications from all other AI applications, with the aim of applying the new regime of regulation and conformity assessment only to the high-risk applications. According to the paper, high-risk AI applications are those used in a sector where “significant risks can be expected.” The legislation establishing the new regulatory framework should “specifically and exhaustively” list the high-risk sectors, which might initially include “healthcare; transport; energy and parts of the public sector.” The list should be “periodically reviewed and amended where necessary.”

In addition to being used in a high-risk sector, a high-risk AI application also must be “used in such a manner that significant risks are likely to arise.” These high-risk uses might include “uses of AI applications that produce legal or similarly significant effects for the rights of an individual or a company; that pose risk of injury, death or significant material or immaterial damage; that produce effects that cannot reasonably be avoided by individuals or legal entities.”

Beyond these sector-based high-risk applications, the commission foresees “exceptional instances [in which] … the use of AI applications for certain purposes is to be considered as high-risk as such[.]” Three examples of applications that could be considered “high-risk as such” are “the use of AI applications for recruitment processes as well as in situations impacting workers’ rights, … specific applications affecting consumer rights” and facial recognition technology.

This risk framework has already been criticized for lack of nuance. For instance, William Crumpler of the Center for Strategic and International Studies noted that this division of applications into high risk and all others means the new regulatory requirements either apply in full or don’t apply at all. He has helpfully suggested that a more refined risk assessment would create modest new requirements for moderate-risk systems.

European observers also have proposed more nuanced risk assessments. The opinion of the German Data Ethics Commission, which was provided to the European Commission for its consideration, called for a risk-based system that would create licensing procedures for AI applications “which are associated with regular or significant potential for harm,” enhanced oversight and transparency obligations for applications “associated with serious potential for harm[,]” and “a complete or partial ban … on applications with an untenable potential for harm.”

Requirements for High-Risk AI Applications

The white paper does not seek to establish additional rights for individuals affected by the use of AI applications, saying that “consumers expect the same level of safety and respect of their rights whether or not a product or a system relies on AI.” But it nevertheless proposes certain new measures to remedy “some specific features of AI (e.g. opacity) [that] can make the application and enforcement of this legislation more difficult.” The new requirements relate to training data, record-keeping, information to be provided about the AI application, robustness and accuracy, human oversight, and specific requirements for facial recognition.

The training data requirements would aim at “providing reasonable assurances that the subsequent use of the products or services that the AI system enables” would be safe, nondiscriminatory and protective of privacy. Developers or deployers of AI systems would need to demonstrate, for instance, that their proposed AI systems “are trained on data sets that are sufficiently broad and cover all relevant scenarios needed to avoid dangerous situations.” In addition, the data sets must be “sufficiently representative, especially to ensure that all relevant dimensions of gender, ethnicity and other possible grounds of prohibited discrimination are appropriately reflected.”

The white paper recommends no new specific requirements to ensure that proposed AI systems adequately protect personal data, instead suggesting that these issues can be addressed under the GDPR and its companion law enforcement data privacy directive. Record-keeping rules would be imposed “in relation to the programming of the algorithm, the data used to train high-risk AI systems, and, in certain cases, the keeping of the data themselves.”

Developers and deployers of high-risk AI systems must provide information concerning the “AI system’s capabilities and limitations, in particular the purpose for which the systems are intended, the conditions under which they can be expected to function as intended and the expected level of accuracy in achieving the specified purpose.” In addition, “citizens should be clearly informed when they are interacting with an AI system and not a human being.”

The commission also recommends requirements ensuring that the AI systems are “robust and accurate, or at least correctly reflect their level of accuracy[, and] … that outcomes are reproducible … and can adequately deal with errors or inconsistencies during all life cycle phases.” And it calls for “human oversight” of high-risk AI systems, recognizing that “the appropriate type and degree of human oversight may vary from one case to another.”

The paper recognizes that the use of biometric information for facial recognition is already subject to the legal framework provided by the GDPR and the law enforcement data protection directive. But since “the fundamental rights implications of using remote biometric identification (facial recognition) AI systems can vary considerably depending on the purpose, context and scope of the use,” the commission separately “will launch a broad European debate on the specific circumstances, if any, which might justify such use, and on common safeguards.”

These requirements fall on both the developers and the deployers of AI systems, depending on who is “best placed to address any potential risk.” They are “applicable to all relevant economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not.” In other words, the EU will extend its regulatory reach to AI products utilized in the EU, irrespective of their origin outside the union. Companies may find it easier to standardize their AI products in order to comply with more rigorous EU rules than to vary them on the basis of other, more liberal approaches being taken elsewhere in the world.

Prior Conformity Assessments

The white paper calls for “an objective, prior conformity assessment … to verify and ensure that certain of the … requirements applicable to high-risk applications … are complied with.” These ex ante reviews would be mandatory for all developers and deployers of high-risk AI systems, “regardless of their place of establishment.” They might need to be “repeated” in the case of AI systems that “evolve and learn from experience.” Moreover, if the AI system does not satisfy “the requirements relating to the data used to train it,” the remedy might be “re-training the system in the EU.”

Liability

The white paper points out that the use of AI technologies may complicate the application of product safety laws when harms occur. Difficulty in identifying whether the AI technology caused the harm, in whole or in part, “in turn may make it difficult for persons having suffered harm to obtain compensation under the current EU and national liability regime.” The commission suggests, as a general rule, assigning liability to the actor best placed to have addressed the risk of harm, while noting that some of those actors may be located outside the EU. More specifically, it proposes that strict liability be imposed where a product contains defective software or other digital features. It also invites comment on the possibility of reversing the burden of proof, so that a plaintiff would not be responsible for proving the chain of causation when a product relying on AI malfunctions.

Enforcement

Although the white paper suggests that the AI requirements will be set out in EU law, it envisions delegating enforcement responsibilities to member states, with an advisory and coordinating role for a “European governance structure.” Such decentralization is a relatively typical arrangement in the EU; privacy enforcement, for example, works similarly. The white paper would allow member states to put enforcement in the hands of existing regulators or to create new national agencies. In particular, conformity assessments would be “entrusted to notified bodies designated by Member States.” The EU-level governance structure could be located within an existing European Commission directorate or lodged in an entirely new one.

The white paper seeks to maintain the jurisdictional status quo, stating, “The governance structure relating to AI and the possible conformity assessments at issue here would leave the powers and responsibilities under existing EU law of the relevant competent authorities in specific sectors or on specific issues (finance, pharmaceuticals, aviation, medical devices, consumer protection, data protection, etc.) unaffected.” This suggests that the commission is encouraging sector-specific enforcement of AI regulations along the lines of the proposed U.S. Guidance for Regulation of Artificial Intelligence Applications, which the U.S. Office of Management and Budget and the White House Office of Science and Technology Policy issued to guide sectoral regulatory agencies in their approach to regulation of AI products and services.

The EU-level governance structure would provide “a regular exchange of information and best practice, identifying emerging trends, advising on standardisation activity as well as on certification. It should also play a key role in facilitating the implementation of the legal framework, such as through issuing guidance, opinions and expertise.”

The Way Forward

The commission has begun a public consultation on its white paper, which will be open for comments until May 19. Key enforcement questions must be resolved as part of the upcoming European debate—such as who decides if a new algorithm is high risk and thus subject to heightened scrutiny. Concerns raised by industry (see this letter from the Information Technology Industry Council) relating to source code and data storage in Europe will also need to be resolved in this process.

The outlines of a forthcoming commission proposal for AI legislation seem clear enough: new requirements and prior review for high-risk algorithms. The details need to be worked out and crucial policy decisions made, especially in areas such as the scope of the new rules and the way of assessing high-risk algorithms. The EU legislative process is deliberate and slow: The debate over GDPR lasted five years. But it also proceeds inexorably. The ideas spelled out in the commission’s February package of documents are likely to shape the final product.


Mark MacCarthy is a nonresident senior fellow at the Brookings Institution, a senior fellow at the Institute for Technology Law and Policy at Georgetown Law and an adjunct professor in Georgetown’s Communication, Culture & Technology Program.
Kenneth Propp is a senior fellow at the Europe Center of the Atlantic Council, a senior fellow at the Cross-Border Data Forum, and an adjunct professor of European Law at Georgetown Law. From 2011 to 2015 he served as legal counselor at the U.S. Mission to the European Union in Brussels, Belgium.
