
Algorithmic Surveillance Takes the Stage at the Paris Olympics

Amarins Laanstra-Corn, Tia Sewell
Friday, August 9, 2024, 12:30 PM
Exceptional circumstances require exceptional means, say French officials. Others claim the smart camera experiment raises privacy concerns.
Paris Olympics. July 2024. (Credit: William Casas)


It’s France’s first summer Olympic Games in a century, and the pressure is on. This year, the spotlight has fallen not only on the athletes but also on a distinctly 21st-century evolution in securing the Olympic Games: France’s experimental deployment of advanced artificial intelligence (AI) surveillance systems, effective until March 31, 2025. Under France’s Olympics and Paralympics Games Law, this pilot initiative permits the deployment of algorithmic video surveillance during high-risk sporting, recreational, and cultural events to bolster security against terrorism and significant safety threats.

The experiment is carefully structured to align with European and French privacy laws, notably eschewing the use of biometric identification, facial recognition, or any automated data matching with other personal data systems. Utilizing algorithmic processing, the systems are designed to detect and flag a set of specific scenarios for human review, including abandoned bags, the presence or use of weapons, unusual crowd movements, and fires.

While this move by the French government aims to ensure the safety of the Games’ athletes and attendees, it has sharpened the debate over the balance between security and privacy—and now, many spectators are questioning the precedent France will set as the first European country to legalize a large-scale algorithmic surveillance system.

An Exceptionally Challenging Security Landscape

Just hours before the opening ceremony on July 26, parts of France’s high-speed rail network were paralyzed by a “massive attack” of “coordinated” arson incidents, affecting 800,000 travelers. The attacks were set against the backdrop of an exceptionally challenging security landscape shrouding this year’s Games. Indeed, when Tony Estanguet, the president of the Paris Olympics Organizing Committee, claimed back in June 2023 that ongoing security measures would turn the French capital into the “safest place in the world,” the world looked very different than it does today. Hosting the Olympics has always presented formidable security challenges, but for France, a nation where terrorist attacks live on in recent memory, recent global events have heightened an already difficult security question for this year’s Games.

Since the Oct. 7 Hamas-led attack on Israel, protests and civil unrest have gripped the global stage as concerns of rising antisemitism grow and the death toll in Gaza steadily climbs. Beyond the Israel-Palestine conflict, state-sanctioned Russian hackers have been sowing seeds of disinformation, promoting fake stories of violence and terrorist attacks at the Olympics. While the threat of Russian interference was expected following the decision to ban Russian athletes from competing at this year’s Olympics, the extent of the problem has remained largely uncertain given AI’s increasing capacity to generate disinformation.

By early July, French authorities had already reported that they had thwarted at least two terrorist attack attempts on the Olympics. Since May, they have arrested an 18-year-old Chechen man on suspicion of organizing a suicide mission at Saint-Etienne’s soccer stadium on behalf of the Islamic State, an alleged neo-Nazi sympathizer accused of planning an attack against the Olympic torch relay, and a man wielding a knife and threatening to kill a taxicab driver while expressing support for Hamas. What’s more, Islamic State extremists have reportedly circulated manuals explaining how to modify commercial drones to carry explosive devices, with the intent of conducting attacks during the Games.

The security challenge was exacerbated by the Paris Olympic Committee’s monumental decision to host most of the Games in the heart of Paris. This decision departs from the precedent of host countries constructing Olympic event venues outside city centers. Most notable was the decision to stage the opening ceremony—an event that drew crowds in the hundreds of thousands—along the open-air banks of the Seine. Despite its early claims that the ceremony would be “open to all,” the French government began scaling back its grandiose plans in the months before the Games, halving proposed attendance from 600,000 to 300,000.

Given the multitude of security challenges facing the Games, France has deployed an array of safeguarding strategies. Officials have imposed “what amounts to house arrest” on 155 people living in Paris deemed to be potential threats, conducted background checks on about one million people involved in Olympics operations, and set up the largest military encampment in Paris since World War II. The government is also relying on over 45,000 police and gendarmes stationed around Paris and the surrounding suburbs, armed with high-tech surveillance capabilities and weaponry. For the opening ceremony, France imposed a no-fly zone spanning 93 miles, shutting down Parisian airspace for six hours.

Layered atop these already extraordinary security measures is the French government’s decision to integrate novel AI technology into its security plan, taking the unprecedented step of legalizing algorithmic video surveillance. The 2023 Olympic and Paralympic Games Law, which includes a series of measures to ensure the smooth running of the 2024 Games in terms of security, care, anti-doping, and transport, notably expanded the state’s surveillance powers by authorizing the use of AI-powered cameras “on an experimental basis” to support law enforcement efforts at the Games. The legislation is not without contention—garnering criticism and raising privacy concerns from political groups and civil society organizations alike—but in the words of French Minister of the Interior Gérald Darmanin, “Exceptional circumstances require exceptional means.”

Rules and Red Lines

These “exceptional means” were codified by Article 10 of the Olympic and Paralympic Games Law, which set out the parameters for France’s use of algorithmic surveillance. The article includes a range of safeguards bringing the deployment of augmented cameras into compliance with recommendations set by France’s National Commission for Information Technology and Civil Liberties (CNIL)—an independent public body responsible for the protection of personal data that has overseen both the development and implementation of the experiment.

Article 10 allows for the use of intelligent video surveillance until March 31, 2025, to enhance security at large-scale sporting, recreational, and cultural events that are particularly vulnerable to terrorism or serious safety threats. The surveillance cameras, including those installed on aircraft, will collect images in event venues, surrounding areas, public transport, and access routes. These systems will use algorithmic processing to detect predetermined events in real time and alert relevant security services such as the national police, gendarmerie, fire and rescue services, municipal police, and internal security of public transport operators.

The experiment’s measures must adhere to the EU’s General Data Protection Regulation (GDPR) and French data protection laws, and Article 10 explicitly prohibits the surveillance systems from using biometric identification, facial recognition, or automated data matching with other personal data systems. It further provides that the systems are designed solely to flag “predetermined events” and will remain under human supervision to prevent any form of automated decision-making or prosecution based on the data.

Regular monitoring and reporting are mandated, with data controllers required to maintain logs and provide weekly updates to state officials. As the primary entity overseeing the experiment, CNIL is responsible for supervising its continued compliance with data protection regulations. Further, by Dec. 31, 2024, the government is required to present a publicly available evaluation report to Parliament detailing the implementation and effectiveness of the experiment.

Before deployment, the specified surveillance systems require a decree that “sets out the essential characteristics of the processing” after consultation with CNIL. This decree, which was issued on Aug. 28, 2023, provides official implementation guidance for the experiment, including the types of events to be detected, the security services involved, and training requirements for operators. Article 10 further required that the decree be accompanied by a data protection impact analysis, sent to CNIL, to outline the expected benefits and potential risks of the processing, as well as measures to mitigate these risks.

Per the decree, algorithmic processing may be used to detect eight predetermined events during the experiment (a schematic sketch of how detections of these events might be routed for review follows the list):

  1. presence of abandoned items;
  2. presence or use of weapons;
  3. failure of a person or vehicle to follow the designated direction of traffic;
  4. the crossing or presence of a person or vehicle in prohibited or sensitive areas;
  5. presence of a person on the ground after a fall;
  6. unusual crowd movement;
  7. excessive crowd density;
  8. outbreaks of fire.
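
To make the decree’s structure concrete, below is a minimal, hypothetical sketch of how a detection-to-alert pipeline restricted to these eight events and gated by human review might be organized. Wintics’s actual Cityvision software is not public, so every name, threshold, and interface here is an illustrative assumption rather than a description of the deployed system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum, auto

# The eight event categories authorized by the Aug. 28, 2023, decree.
class PredeterminedEvent(Enum):
    ABANDONED_ITEM = auto()
    WEAPON_PRESENCE = auto()
    WRONG_WAY_TRAFFIC = auto()
    PROHIBITED_AREA_ENTRY = auto()
    PERSON_ON_GROUND = auto()
    UNUSUAL_CROWD_MOVEMENT = auto()
    EXCESSIVE_CROWD_DENSITY = auto()
    FIRE_OUTBREAK = auto()

@dataclass
class Alert:
    event: PredeterminedEvent
    camera_id: str      # which feed triggered the detection
    confidence: float   # model score between 0.0 and 1.0
    timestamp: datetime

def send_to_operator_queue(alert: Alert) -> None:
    # Stand-in for dispatch to the services named in the law: national
    # police, gendarmerie, fire and rescue, municipal police, or
    # transport security. A human operator reviews every alert.
    print(f"[FOR REVIEW] {alert.event.name} on camera {alert.camera_id} "
          f"(score={alert.confidence:.2f}) at {alert.timestamp.isoformat()}")

def route_detection(event: PredeterminedEvent, camera_id: str,
                    confidence: float, threshold: float = 0.8) -> Alert | None:
    """Flag a detection for human review; never act automatically.

    Article 10 requires human supervision and bars automated
    decision-making, so the pipeline's only output is an alert queued
    for an operator. The 0.8 threshold is an invented placeholder,
    not a documented parameter of the French system.
    """
    if confidence < threshold:
        return None  # below threshold: nothing is flagged
    alert = Alert(event, camera_id, confidence, datetime.now(timezone.utc))
    send_to_operator_queue(alert)
    return alert

# Example: a crowd-density detection on a hypothetical camera feed.
route_detection(PredeterminedEvent.EXCESSIVE_CROWD_DENSITY, "seine-bank-07", 0.91)
```

The design point the sketch tries to capture is that the system’s only output is an event-level alert for a human operator: no identity, no biometric template, no automated consequence.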

The decree notes that training will involve a sample of images collected under conditions similar to those of the experiment. It stresses that these images will be processed to correct identified biases or errors, with full adherence to data protection requirements, including possible pseudonymization or blurring of data if technical quality can be maintained. Access to the processing during the design phase is restricted to “duly designated and authorized agents of the Ministry of the Interior… by reason of their attributions and within the limit of the need to know.”
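
The decree’s allowance for blurring is a concrete, implementable step. As a rough illustration, here is a short sketch—using OpenCV and its bundled Haar-cascade face detector, both of which are assumptions of this illustration rather than anything the decree prescribes—of how faces might be blurred in an image before it enters a training set:

```python
import cv2  # OpenCV: pip install opencv-python

# Haar-cascade face detector shipped with OpenCV -- an illustrative
# choice; the decree does not prescribe any particular technique.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(input_path: str, output_path: str) -> int:
    """Blur detected faces in a training image; return the count blurred."""
    img = cv2.imread(input_path)
    if img is None:
        raise FileNotFoundError(input_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Heavy Gaussian blur on the face region only, leaving the rest
        # of the scene (crowds, objects, geometry) usable for training.
        img[y:y + h, x:x + w] = cv2.GaussianBlur(img[y:y + h, x:x + w], (51, 51), 0)
    cv2.imwrite(output_path, img)
    return len(faces)
```

One practical caveat: faces the detector fails to find stay unblurred, which is why blurring is generally treated as one mitigation layered with access controls rather than as full anonymization.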

The decree reiterates that recorded images “are kept for a strictly necessary period which may not, in any case, exceed twelve months” and states that they “may not, during this period, be used for purposes other than those provided for” in the experiment. 
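
The 12-month retention ceiling is the kind of rule that reduces to a mechanical sweep. A minimal sketch, assuming a simple in-memory store in place of whatever database the real system uses, might look like this:

```python
from datetime import datetime, timedelta, timezone

# Article 10's ceiling: recordings may never be kept longer than twelve
# months (approximated here as 365 days).
MAX_RETENTION = timedelta(days=365)

def purge_expired(recordings: dict[str, datetime],
                  now: datetime | None = None) -> list[str]:
    """Delete recordings past the retention ceiling; return their IDs.

    `recordings` maps a recording ID to its capture time (UTC). A real
    deployment would sweep a database on a schedule; a dict stands in here.
    """
    now = now or datetime.now(timezone.utc)
    expired = [rid for rid, captured in recordings.items()
               if now - captured > MAX_RETENTION]
    for rid in expired:
        del recordings[rid]  # irreversible deletion, per the law's limit
    return expired
```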

When the Olympics and Paralympics draft law was first introduced back in December 2022, the CNIL acknowledged that the use of systems allowing for the automatic analysis of images in real time “raises new and substantial issues in terms of privacy” and further, that the “deployment, even experimental, of these devices constitutes a turning point that will contribute to defining the general role that will be attributed to these technologies, and more generally to artificial intelligence.” With this, it stressed that the proposed experiment’s framework included adequate protections and red lines to ensure the fair and legal use of algorithmic surveillance tools in public spaces, complying with “the recommendations made by the CNIL in its position on augmented cameras of July 2022.”

France’s highest constitutional court also found that the provisions of Article 10 were constitutional and did not infringe upon civil liberties, despite some contention in Parliament. After the bill’s adoption in April, several parliamentary groups referred the text in its entirety to the Constitutional Council, arguing that the bill “infringes upon several principles of constitutional value,” including the right to privacy. The council rejected the referral’s objections, emphasizing the law’s intent to prevent breaches of public order and highlighting the stringent safeguards in place. However, it stressed that any authorization of algorithmic video surveillance must be terminated immediately if the conditions justifying its issuance are no longer met. Additionally, it required that the public be informed in advance about the use of algorithmic processing on collected images, unless circumstances prohibit it or doing so would conflict with the objectives pursued. And further, the council stipulated, the constitutional conformity of the experiment will be re-examined upon its completion.

The Constitutional Council’s decision on algorithmic video surveillance had been much anticipated, both in France and abroad, given the controversy that had shrouded the augmented camera experiment throughout the entirety of the parliamentary process. In March 2023, a group of 38 civil society organizations had released an open letter calling upon the National Assembly to reject the article concerned, arguing that the measures pose “unacceptable risks to fundamental rights” and, further, constitute “a step towards the normalisation of exceptional surveillance powers.” After the National Assembly passed the bill, Amnesty International’s Advocacy Advisor on AI Regulation warned that the decision “risks permanently transforming France into a dystopian surveillance state, and allowing large-scale violations of human rights elsewhere in the bloc.”

Defenders, by contrast, pointed to the 28 guarantees provided for by the text—including CNIL’s continuous oversight—and highlighted that beyond the extensive protections built into Article 10, its objective use cases were of critical importance to public safety. “No one saw the truck arrive on the Promenade des Anglais, no one—despite the video surveillance,” stated law commission chairman Sacha Houlié, referencing the Nice attack of 2016. “If we had access to algorithmic processing then, we would have recognized the abnormal behavior, which would have prevented that attack.”

Trials and Tribulations

In preparation for the Games, French police officers began officially testing the new technology in March. Specifically, the French Ministry of the Interior announced that it would be using cameras powered by software called Cityvision, created by Wintics—one of four companies the ministry selected to facilitate the experiment. With support and oversight from CNIL, Wintics, Videtics, Chapsvision, and Orange Business will share responsibilities for “the provision of an algorithmic solution,” “installation and dismantling of the solution,” “training and support for field stakeholders,” and ongoing support for implementation across public transport systems and different regions of France. In full, their contracts total a maximum of €8 million.

In addition to some technical obstacles (the detection of abandoned objects has reportedly not met expectations), the rollout has encountered numerous legal hurdles. La Quadrature du Net—a French association for the defense of digital freedoms that has consistently opposed Article 10—filed a complaint with the CNIL in May challenging SNCF’s use of Wintics’ algorithmic video surveillance software. La Quadrature du Net argues that the continuous recording and analysis of individuals’ behavior, even without facial recognition, still constitutes biometric data processing, violating legal frameworks both in the EU and in France.

At its core, this legal challenge highlights much of the criticism that has been leveled at France’s experiment since it was first introduced. While it is clear that certain forms of widespread and continuous image processing—facial recognition, emotion analysis, and the biometric categorization of people, to name a few—would explicitly violate the GDPR and the EU AI Act, it is less clear whether France’s use of AI-powered surveillance to flag a set of predetermined events necessarily involves the processing of biometric data. Some privacy advocates say that it does, while others warn that, at a minimum, it opens the door to future abuses.

The GDPR, which safeguards individual privacy rights, has often been cited as the gold standard in privacy regulation. The GDPR imposes strict requirements on processing personal and biometric data, defined as “personal data resulting from specific technical processing relating to the physical, physiological or behavioral characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data.”

In the open letter by civil society organizations published while the Olympics and Paralympics bill was still under consideration back in March 2023, the groups argued that the French experiment would violate the GDPR because it would require gathering physiological characteristics and behaviors from images. The letter expressed direct concern with the collection of data encompassing “body positions, gait, movements, gestures, or appearance,” all of which, the groups allege, are traits from which an individual is personally identifiable:

If the purpose of algorithm-driven cameras is to detect specific suspicious events in public spaces, they will necessarily capture and analyse physiological features and behaviours of individuals present in these spaces, such as their body positions, gait, movements, gestures, or appearance.

Beyond existing legislation, the letter also asserted that “this measure risks colliding with the future AI Act.” In conjunction with the GDPR, the since-published EU AI Act, which entered into force on Aug. 1, stresses similar issues related to the collection of biometric data and privacy. The act follows a risk-based approach to regulating the emerging technology, subjecting AI systems to differing levels of scrutiny based on their risk. One tier is “unacceptable risk,” which encompasses systems considered a threat to people; these are banned outright. They include, for example, systems that use social scoring, biometric categorization of people, and real-time remote biometric identification.

The second tier of risk categorization for AI systems is “high risk.” These are systems that can negatively affect safety or fundamental rights, including AI used in the management and operation of critical infrastructure, in access to and enjoyment of essential private and public services and benefits, or in law enforcement. AI systems such as the one being deployed at the Paris Olympics appear to belong to this category.

The legislation notably allows for a set of limited exceptions for law enforcement in urgent situations concerning public security—exceptions lobbied for by France. They include exemptions to the ban on “real-time” remote biometric identification systems in select cases (such as preventing substantial and imminent threats to safety) and permit the use of retroactive biometric identification systems, “where identification occurs after a significant delay,” to prosecute serious crimes only after court approval. France’s surveillance experiment could arguably fall into the act’s carve-out for using such systems in the case of a “genuine and foreseeable threat of a terrorist attack.” But given Article 10’s safeguard prohibiting any and all processing of biometric data, the question may be altogether irrelevant to France’s Olympic experiment.

The act also includes an exemption for all AI systems that are used exclusively for military, defense, and national security purposes. The act makes a distinction between “law enforcement or public security purposes” and national security ones, suggesting that the system deployed under France’s Olympics experiment falls within the scope of the AI Act. Nonetheless, because “national security” is not clearly defined under EU law, member states may apply different interpretations to this notion, which could raise further uncertainty about the act’s authority over counterterrorism applications of AI.

That being said, the act stipulates that before deploying a high-risk AI system in public services, entities must assess its impact on fundamental rights. The regulation also mandates increased transparency in the development and use of such systems. Additionally, high-risk AI systems and certain public entities using them must be registered in the EU database for high-risk AI systems. It should be noted that the majority of the provisions outlined in the EU AI Act will not be enforced until Aug. 2, 2026, while the ban on unacceptable-risk systems will be enforceable beginning in February 2025.

France’s experiment does contain measures geared toward oversight and transparency. CNIL mandates that all organizations involved in high-risk processing of personal data demonstrate their compliance with the GDPR by completing and submitting a Data Protection Impact Assessment. The interior ministry did this with a framework impact analysis in August 2023, though that analysis does not appear to be publicly available. The decree outlining implementation guidance further specifies that data controllers will be required to send “an impact analysis on the protection of personal data of the specific characteristics of each of the processing operations implemented.” And the French government is required to produce a publicly available evaluation report on the implementation of the experiment by the end of the year, which will assess the technical performance of the systems, the results with regard to operational objectives, and the societal impacts of the experiment. For its part, CNIL released a Q&A on the use of augmented cameras during the Olympics, which includes information on how individuals may exercise their “right to access, delete, rectify, or limit the use of images and alerts generated by the system.”

Critics have nonetheless decried the lack of transparency in the development of France’s systems, claiming that it has obfuscated serious privacy and legal questions surrounding the system’s function and use, thus introducing uncertainty about the AI’s bias, ethics, and accuracy. They worry about the French citizens being used as private industry’s “guinea pigs.” They fear that France’s experiment is the first step on a slippery slope of function creep, using the Olympics as “a pretext for accelerating a policy of generalized surveillance.” They contend that the interior ministry has previously concealed illegal misuse of algorithmic video surveillance by French police—including facial recognition software. And they stress that the Olympics and Paralympics Games Law could set a dangerous precedent in the EU, tilting the bloc “in a more surveillance-drenched, repressive direction.”

While Article 10 of the Olympic and Paralympic law may highlight the new lengths to which states are willing to venture in the novel age of artificial intelligence, there has long been international discord over the national security-privacy tradeoff. This debate has vexed both privacy watchdogs and national security actors for more than a decade. And in today’s world, where AI systems have grown more powerful, more accessible, and cheaper, the balance of this tradeoff has become exponentially more complex. The security-privacy debate is likely to continue to intensify, especially as more information on the effectiveness and ethical implications of France’s experiment comes to light. One thing is certain: Many are keenly watching this year’s Olympics, and not just for the sports. And while the Games are set to come to a close on Aug. 11, the world could be feeling the impact of Paris 2024 for years to come.


Amarins Laanstra-Corn focuses on the societal implications of emerging technologies at the Emerson Collective. She studied international security and political science at Stanford University.
Tia Sewell is a former associate editor of Lawfare. She studied international relations and economics at Stanford University and is now a master’s student in international security at Sciences Po in Paris.
