MEP Letter to the Commission on Artificial Intelligence and Biometric Surveillance

15 April 2021

To the European Commission
Ursula von der Leyen, President
Margrethe Vestager, Executive Vice-President “A Europe Fit for the Digital Age”
Věra Jourová, Vice-President “Values and Transparency”
Thierry Breton, Commissioner “Internal Market”
Helena Dalli, Commissioner “Equality”
Didier Reynders, Commissioner “Justice”

We are happy to see that the draft AI legislation as leaked on Tuesday [1] addresses the urgent issue of mass surveillance. People who constantly feel watched and under surveillance cannot freely and courageously stand up for their rights and for a just society. Surveillance, distrust and fear risk gradually transforming our society into one of uncritical consumers who believe they have “nothing to hide” and who – in a vain attempt to achieve total security – are prepared to give up their liberties. That is not a society worth living in! For these reasons, the processing of personal data for indiscriminate surveillance, profiling which threatens personal integrity, the targeted exploitation of vulnerabilities, addictive designs and dark patterns, and methods of influencing political elections that are incompatible with the principle of democracy should all be banned.

The inclusion of ‘Prohibited Artificial Intelligence Practices’ (Article 4(1)) in principle establishes a powerful basis for refusing to permit discriminatory and harmful applications of AI. However, while the draft proposal prohibits the use of AI systems for indiscriminate surveillance, it does so only where a system affects “all natural persons” – and no system can possibly affect all 8 billion people in the world. This needs to be re-worded to cover all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system.

We strongly protest the proposed second paragraph of this Article 4, which would exempt public authorities and even private actors acting on their behalf “in order to safeguard public security”. Public security is precisely the ground on which mass surveillance is justified; it is where mass surveillance is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.

This second paragraph could even be read as deviating from other secondary legislation which the Court of Justice has so far interpreted as banning mass surveillance. The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace them. There is no such clarity in the leaked draft.

Articles 42 and 43 then aim to regulate biometric mass surveillance in public spaces, for example to identify citizens or analyse their behaviour and sensitive characteristics (e.g. gender, sexuality, ethnicity, health) without their consent. Biometric mass surveillance technology in publicly accessible spaces is widely criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed. The proposed Articles 42 and 43, however, not only fail to ban biometric mass surveillance; they could even be interpreted as creating a new legal basis that would actively enable biometric mass surveillance where it is unlawful today (e.g. under Article 9 GDPR).

We urge you to make sure that existing protections are upheld and a clear ban on biometric mass surveillance in public spaces is proposed. This is what a majority of citizens want.

Likewise, the automated recognition of people’s sensitive characteristics, such as gender, sexuality, race/ethnicity, health status and disability, is not acceptable and needs to be excluded. Such practices reduce the complexity of human existence to a series of clumsy, binary check-boxes and risk perpetuating many forms of discrimination. Furthermore, such inferences often form the basis of both discriminatory predictive policing and the wide-scale, indiscriminate monitoring and tracking of populations using their biometric characteristics. This can lead to harms including violations of the rights to privacy and data protection, the suppression of free speech, greater difficulty in exposing corruption, and a chilling effect on everyone’s autonomy, dignity and self-expression – which can seriously harm LGBTQI+ communities, people of colour and other discriminated-against groups in particular. The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics.

Dear Vice-Presidents, dear Commissioners, if we want AI systems to be worthy of the public’s trust, we need to ensure that unethical technologies are banned. Please use this opportunity to defend our liberty and right to self-determination – for the sake of our future and that of our children.

Yours sincerely,
Patrick BREYER MEP
Alviina ALAMETSÄ MEP
Rasmus ANDRESEN MEP
Pernando BARRENA MEP
Nicola BEER MEP
Benoit BITEAU MEP
Malin BJÖRK MEP
Saskia BRICMONT MEP
Tudor CIUHODARU MEP
David CORMAND MEP
Rosa D’AMATO MEP
Gwendoline DELBOS-CORFIELD MEP
Cornelia ERNST MEP
Eleonora EVI MEP
Claudia GAMON MEP
Evelyne GEBHARDT MEP
Alexandra GEESE MEP
Andreas GLÜCK MEP
Markéta GREGOROVÁ MEP
Francisco GUERREIRO MEP
Svenja HAHN MEP
Marcel KOLAJA MEP
Mislav KOLAKUŠIĆ MEP
Kateřina KONEČNÁ MEP
Moritz KÖRNER MEP
Sergey LAGODINSKY MEP
Philippe LAMBERTS MEP
Idoia LETONA CASTRILLO MEP
Karen MELCHIOR MEP
Ville NIINISTÖ MEP
Jan-Christoph OETJEN MEP
Mikuláš PEKSA MEP
Manuela RIPA MEP
Michèle RIVASI MEP
Ivan Vilibor SINČIĆ MEP
Tineke STRIK MEP
Paul TANG MEP
Kim VAN SPARRENTAK MEP
Tom VANDENDRIESSCHE MEP
Tiemo WÖLKEN MEP


[1] Politico Pro, https://pro.politico.eu/news/european-commission-high-risk-ai-ban-tech