It’s been a busy few weeks in France. Days of strikes and violent protests spurred by a pension reform have left the country’s cities strewn with uncollected trash piles, smashed storefront windows, and firebombed cars. Most in the media and general public have little on their minds beyond the now-infamous “réforme des retraites,” which seeks to raise the national retirement age from 62 to 64 and slightly restructure pension distribution — measures deemed “painful but necessary” by French president Emmanuel Macron and “abominable and unjust” by the opposition. Fortunately, between near-daily no-confidence votes and shouting matches about the excesses of their ‘tyrannical’ president, members of the country’s National Assembly (the lower chamber of parliament) have managed to squeeze in some time to discuss other legislation, including a broad package of laws related to the administration and security of the 2024 Paris Olympics, passed on March 24 by a 59-14 preliminary vote.
The National Assembly’s decision to greenlight the bill followed months of debate about one section in particular — Article 7 — which permits the use of AI-assisted video surveillance technology by law enforcement during and up to six months after the Games. The article has been subject to an intense international campaign demanding its rejection; Amnesty International warned that its provisions would turn France into a “dystopian surveillance state,” and an open letter spearheaded by the Netherlands-based European Center for Not-for-Profit Law denounced the “structural discrimination” and “over-criminalization of racial, ethnic and religious minorities” that, it claimed, inevitably arise when using AI-powered algorithms to fight crime. In the National Assembly, lawmakers proposed over 770 amendments to Article 7 alone, and discussion over these lasted for a couple of days until everyone abruptly lost interest. Squabbling about pension reform resumed, and only 73 of the almost 600 members of the lower chamber showed up to the preliminary vote on the bill. As Félix Tréguer, a member of the French privacy watchdog association La Quadrature du Net, put it: “The largest acceleration of algorithmic governance [in France]…was approved in almost complete indifference and against the backdrop of social upheaval.”
But the story of Article 7 is not just about an eclectic moment in French politics; it is about what happens when the concerns of privacy watchdogs, tech-incompetent politicians, and a disaffected public converge on novel technologies that enable powerful uses and abuses. Let’s dive in.
In its current form, the bill (“Projet de loi relatif aux jeux Olympiques et Paralympiques de 2024”) comprises 19 articles, most of which are short paragraphs related to fine-print details about the organization and administration of the Games — where the main health center will be located, for instance, or how many wheelchair-accessible taxis will be on-site. Article 7, which is significantly longer than the others, deals with the following provision:
As an experimental measure in place until December 31, 2024, and for the sole purpose of ensuring the security of sporting, recreational, or cultural events that, due to their scale or circumstances, are particularly exposed to the risk of acts of terrorism or serious threats to people’s safety, images collected through the video surveillance systems authorized under Article L. 252-1 of the Internal Security Code and cameras installed on aircraft authorized under Chapter II of Title IV of the same code in the locations hosting these events and their surroundings, as well as in public transport vehicles and premises and on the roads serving them, may be subject to algorithmic processing. This processing is aimed solely at detecting, in real time, predetermined events that may present or reveal risks and reporting them for necessary measures to be implemented by the national police and gendarmerie services, fire and rescue services, municipal police services, and the internal security services of the SNCF [France’s state-owned railway company] and the Autonomous Parisian Transport Authority within the scope of their respective missions. (Section I, Article 7 — emphasis is mine)
Other provisions in the article specify what this “algorithmic processing” will not entail:
The article does not specify what these ‘predetermined events’ are, but in a speech before the National Assembly, Interior Minister Gérald Darmanin mentioned a few use cases for the AI-assisted surveillance technology:
It is unclear whether the above is an exhaustive list. Darmanin argued that the surveillance provisions in Article 7 represent “exceptional measures for an exceptional situation,” saying that the Paris Olympics are “the most important and perilous security challenge in French history.” Other members of the mainly center-right presidential majority echoed this claim, with some invoking the specter of past terrorist attacks. “No one saw the truck pulling up to the Promenade des Anglais [in Nice] despite the CCTV cameras. But if we’d had intelligent surveillance, we would have recognized the abnormal activity, and this would have allowed us to stop the [2016 terrorist attack in Nice],” said Sacha Houlié, an MP from Macron’s Renaissance party. Maxime Minot, a member of the liberal-conservative party Les Républicains, agreed and noted that “the SNCF has already implemented certain surveillance algorithms of this type.”
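What Article 7 contemplates, in practice, is software that watches existing video feeds, scores each frame against a list of predetermined events, and pushes an alert to human operators when a threshold is crossed. The sketch below is a purely illustrative rendering of that control flow, not the actual system, whose models, event lists, and thresholds are not public; the per-frame crowd-density estimator is a hypothetical stand-in for a trained vision model.

```python
# Illustrative sketch only: Article 7 does not publish models, event lists, or
# thresholds, so the estimator below is a hypothetical stand-in for a trained
# vision model. The point is the control flow the law describes: score frames,
# flag a predetermined event, hand the alert to a human operator.

from dataclasses import dataclass
from typing import Callable, Iterable, Iterator


@dataclass
class Alert:
    frame_index: int   # which frame of the feed triggered the alert
    event: str         # the predetermined event detected, e.g. "crowd_surge"
    score: float       # detector output that crossed the threshold


def monitor_stream(
    frames: Iterable[object],
    estimate_density: Callable[[object], float],  # hypothetical per-frame model
    threshold: float = 4.0,                       # assumed people-per-m2 cutoff
) -> Iterator[Alert]:
    """Yield an alert whenever estimated crowd density exceeds the threshold."""
    for i, frame in enumerate(frames):
        density = estimate_density(frame)
        if density > threshold:
            yield Alert(frame_index=i, event="crowd_surge", score=density)


if __name__ == "__main__":
    # Stand-in "feed": random numbers in place of real frames and a real model.
    import random

    for alert in monitor_stream(range(100), lambda _frame: random.uniform(0.0, 6.0)):
        print(f"frame {alert.frame_index}: {alert.event} (score {alert.score:.1f})")
```

The consequential design choices sit outside a snippet like this: what counts as a ‘predetermined event,’ how the thresholds are set, and what the police and transit operators named in the article actually do with the alerts.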
Doubtlessly looming in the background of the push for Article 7 was the havoc wreaked at last May’s Champions League final between Liverpool and Real Madrid, when huge crowds outside the Stade de France, where the match was set to take place, descended into chaos and violence. Pepper spray and tear gas filled the air, free-flying punches sent hundreds to hospitals, and anecdotal reports of ‘mass sexual assaults’ swirled in the press for weeks to come. A 150-page report released this February by the Union of European Football Associations (UEFA) lambasted the “responsibility of French authorities in the endangerment of thousands of British and Spanish supporters, almost all of them peaceful, who were threatened with being crushed on several occasions, attacked by gangs of delinquents, and sprayed with gas by police,” noting that the fiasco was widely seen as an international embarrassment for France and an indictment of “the negligence of [its] public authorities in matters of security.”
Stade de France | Image credit: Darthvadrouw
Will the AI-powered surveillance provisions in Article 7 suffice to prevent a Champions League-style debacle or terrorist attack at the Paris Olympics? Those opposed to the article say ‘no’ and argue, in more or less coherent ways, that the privacy risks posed by the use of such technology far outweigh the security benefits. On the more coherent end of the spectrum are the members of French privacy watchdog group La Quadrature du Net (QDN), which came to prominence in 2008 when it opposed the so-called ‘HADOPI’ online copyright law. Since then, QDN has opposed a number of bio-surveillance measures proposed by the French government, including those implemented in the wake of the pandemic — the mandatory ‘health pass’ (passe sanitaire) showing vaccination status, for example, or the use of facial recognition technology to detect compliance with mask-wearing regulations. QDN claims that the arguments used in favor of the AI-assisted surveillance sanctioned by Article 7 are premised on three lies:
While it is unclear that algorithmic video surveillance is, as QDN claims, “one of the most dangerous technologies ever deployed,” it is undeniable that the politicians charged with regulating such technology are, in most cases, dangerously incompetent and ill-suited to their role. The debate on Article 7 was rife with false statements and misplaced alarmism, both from the text’s supporters and its detractors. Those opposed to algorithmic transparency spoke often, for instance, about the importance of safeguarding ‘the code.’ “If there’s anything that could be useful to the terrorists, it would be to have the code and the means to crack it […] giving the codes of these algorithms, making them public, is to give the necessary tools to all who would like to hack the algorithms and hijack them,” said Guillaume Vuilletet, an MP from Renaissance. But this language misses the point. Calls for transparency rarely hinge on releasing the exact code used to implement a program. Instead, when people advocate for AI transparency, they are usually advocating for making available information about the training sets, algorithms used (many of which are open source), and parameterization of these algorithms.
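To make the distinction concrete, here is a minimal sketch, in Python for readability, of the kind of disclosure transparency advocates typically have in mind. Every field and value is hypothetical and is not drawn from Article 7 or any real system; the point is that training data provenance, model family, evaluation results, and operating parameters can all be published without handing anyone the deployed code.

```python
# A minimal sketch of a transparency disclosure. All names and values are
# hypothetical; the point is that this information can be published without
# releasing the deployed source code itself.

from dataclasses import dataclass, field


@dataclass
class TransparencyDisclosure:
    purpose: str                      # what the system is permitted to detect
    model_family: str                 # architecture used, often open source
    training_data: str                # provenance and scope of the training set
    evaluation: dict = field(default_factory=dict)             # error rates, per-group results
    operating_parameters: dict = field(default_factory=dict)   # thresholds, retention periods


disclosure = TransparencyDisclosure(
    purpose="real-time detection of predetermined crowd-safety events",
    model_family="open-source object-detection architecture (hypothetical)",
    training_data="datasets used, how footage was collected, legal basis",
    evaluation={"false_positive_rate": "reported per event type and demographic group"},
    operating_parameters={"alert_threshold": "published", "footage_retention_days": 7},
)

print(disclosure)
```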
Similar hysteria dominated the statements of politicians opposed to Article 7. Jean-Félix Acquaviva, an MP from the catch-all party LIOT, submitted an amendment to the article written entirely by ChatGPT. From the amendment:
“The implementation of algorithmic processing […] should be regulated in order to preserve the rights and fundamental liberties of the people involved, notably by guaranteeing that human decisions guide each step of the processing.”
According to Acquaviva, this insipid output from ChatGPT showed “the risks that this new type of technology can entail.” Presumably by this he meant to suggest that the risks posed by a large language model like ChatGPT are closely related to those posed by AI-powered video surveillance. If there’s something to be said for this suggestion — which there well may be — we shouldn’t hope to hear it from Acquaviva; he spent his podium time pontificating (a bit like a drunk uncle at a family reunion) about how the ChatGPT amendment showed that “artificial intelligence is capable of the best and the worst.” In the end, the speech made only one thing clear: AI still has a long way to go before it can match the average politician’s ability to generate and deliver meaningless platitudes.
In any event, Article 7 has now passed a preliminary vote in both the French Senate and National Assembly, and will almost certainly receive final approval in the coming weeks. The public is, in general, too occupied with the anti-pension reform protests and too indifferent to the use of its personal data to care. A takeaway from this — most people seem to have little problem with the prospect of an AI-powered surveillance state, as long as it leaves them alone in their day-to-day life. “They already have my online data, my passport, my CCTV footage, everything,” a friend said to me recently, when I asked for his thoughts on Article 7. “Whatever. Privacy doesn’t exist anymore, anyway.”
-Sanjana Friedman