10 May 2021

MEP Letter to the Commission on Artificial Intelligence and Biometric Surveillance


Answer of 10 May 2021

Honourable Member,

I would like to thank you and the co-signatories for your letter expressing your concerns about our legislative proposal on artificial intelligence (AI).

Let me clarify at the outset that the leaked document you mention in your letter was not published by the Commission and does not reflect our final position on the use of AI systems.

I agree that the use of mass surveillance technologies risks having a chilling effect on our democratic principles. The unconditional use of AI systems for remote biometric identification in publicly accessible spaces is considered particularly intrusive for the rights and freedoms of the persons concerned. Such unconditional use may affect the private life of a large part of the population and their rights to personal data protection, evoke a feeling of constant surveillance and indirectly dissuade people from exercising the freedom of assembly and other fundamental rights. On the other hand, I believe that such systems may also have a beneficial impact that should not be overlooked. When used to help visually impaired persons, to find missing children or to act against a specific and imminent terrorist threat, these technologies may be of great help.

As you mention in your letter, we already have strong rules in place for data protection. Processing biometric data for remote identification systems is in principle prohibited under data protection rules and can only be allowed under very specific conditions. The Commission's intention with the legislative proposal on AI is not to replace or weaken existing regulation, but to complement this solid framework of rules with a view to providing further transparency and legal certainty for the protection of European citizens.

The objective of the legal framework proposed on 21 April 2021 is to ensure the effective protection of fundamental rights in the EU.

The use of real-time remote biometric identification in public places for law enforcement purposes should therefore in principle be prohibited, with a few narrowly defined exceptions where the use is strictly necessary to achieve a sufficiently substantial public interest. These exceptions include situations involving the targeted search for specific potential victims of crime, including missing children; a response to the threat of a terrorist attack; or the detection and identification of perpetrators of serious crimes exhaustively enumerated in the EU legislation on the European Arrest Warrant. In addition, the use of those systems should be based on clear indications as regards the aforementioned situations and should be subject to limits in time and space, as well as to an express and specific authorisation by a judicial authority or an independent administrative authority of a Member State. Finally, such use presupposes rules in the national law of a Member State that decides to deploy real-time remote biometric identification systems for some or all of the situations circumscribed narrowly in the legislative proposal on AI.

The Commission also proposes under the new rules to consider all AI systems intended to be used for remote biometric identification of persons as high-risk and subject to ex ante third-party conformity assessments as well as to '4-eyes' human control. Among the mandatory requirements applicable to high-risk AI systems, high-quality data sets and testing will help to make sure such systems are accurate and that there are no discriminatory impacts on the affected population.

Finally, all emotion recognition and biometric categorisation systems will be subject to specific transparency obligations. In addition, they will be considered high-risk applications if they fall under the use cases identified as such in Annex III of the proposal, for example, when they are used in the areas of employment, education, law enforcement, migration and border control.

Our Union is based on a set of core values and principles that we have to defend at all times. I remain fully committed to working together with the Parliament to defend European citizens' liberties and fundamental rights in the digital age as well.

Yours faithfully,
Ursula von der Leyen


15 April 2021

To the European Commission
Ursula von der Leyen, President
Margrethe Vestager, Executive Vice-President “A Europe Fit for the Digital Age”
Věra Jourová, Vice-President “Values and Transparency”
Thierry Breton, Commissioner “Internal Market”
Helena Dalli, Commissioner “Equality”
Didier Reynders, Commissioner “Justice”

We are happy to see that the draft AI legislation as leaked on Tuesday [1] addresses the urgent issue of mass surveillance. People who constantly feel watched and under surveillance cannot freely and courageously stand up for their rights and for a just society. Surveillance, distrust and fear risk gradually transforming our society into one of uncritical consumers who believe they have "nothing to hide" and – in a vain attempt to achieve total security – are prepared to give up their liberties. That is not a society worth living in! For these reasons the processing of personal data for indiscriminate surveillance, profiling which threatens personal integrity, the targeted exploitation of vulnerabilities, addictive designs and dark patterns, and methods of influencing political elections that are incompatible with the principle of democracy should be banned.

The inclusion of ‘Prohibited Artificial Intelligence Practices’ (Article 4(1)) in principle establishes a powerful basis for refusing to permit discriminatory and harmful applications of AI. While the draft proposal prohibits the use of AI systems for indiscriminate surveillance, no such system can possibly affect “all natural persons” in the world (8 billion people). This needs to be re-worded to cover all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system.

We strongly protest the proposed second paragraph of this Article 4, which would exempt public authorities and even private actors acting on their behalf "in order to safeguard public security". Public security is precisely what mass surveillance is being justified with; it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.

This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance. The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.

Articles 42 and 43 then aim at regulating biometric mass surveillance in public spaces, for example to identify citizens or analyse their behaviour and sensitive characteristics (e.g. gender, sexuality, ethnicity, health) without their consent. Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed. The proposed Articles 42 and 43, however, not only fail to ban biometric mass surveillance. They could even be interpreted to create a new legal basis and thus actively enable biometric mass surveillance where it is today unlawful (e.g. under Article 9 GDPR).

We urge you to make sure that existing protections are upheld and a clear ban on biometric mass surveillance in public spaces is proposed. This is what a majority of citizens want.

Likewise, the automated recognition of people's sensitive characteristics, such as gender, sexuality, race/ethnicity, health status and disability, is not acceptable and needs to be excluded. Such practices reduce the complexity of human existence into a series of clumsy, binary check-boxes, and risk perpetuating many forms of discrimination. Furthermore, such inferences often form the basis of both discriminatory predictive policing and the widescale and indiscriminate monitoring and tracking of populations using their biometric characteristics. This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and having a chilling effect on everyone's autonomy, dignity and self-expression – which can in particular seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups. The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics.

Dear Vice-Presidents, dear Commissioners, if we want AI systems to be worthy of the public’s trust, we need to ensure that unethical technologies are banned. Please use this opportunity to defend our liberty and right to self-determination – for the sake of our future and that of our children.

Yours sincerely,
Patrick BREYER MEP
Alviina ALAMETSÄ MEP
Rasmus ANDRESEN MEP
Pernando BARRENA MEP
Nicola BEER MEP
Benoit BITEAU MEP
Malin BJÖRK MEP
Saskia BRICMONT MEP
Tudor CIUHODARU MEP
David CORMAND MEP
Rosa D'AMATO MEP
Gwendoline DELBOS-CORFIELD MEP
Cornelia ERNST MEP
Eleonora EVI MEP
Claudia GAMON MEP
Evelyne GEBHARDT MEP
Alexandra GEESE MEP
Andreas GLÜCK MEP
Markéta GREGOROVÁ MEP
Francisco GUERREIRO MEP
Svenja HAHN MEP
Marcel KOLAJA MEP
Mislav KOLAKUŠIĆ MEP
Kateřina KONEČNÁ MEP
Moritz KÖRNER MEP
Sergey LAGODINSKY MEP
Philippe LAMBERTS MEP
Idoia LETONA CASTRILLO MEP
Karen MELCHIOR MEP
Ville NIINISTÖ MEP
Jan-Christoph OETJEN MEP
Mikuláš PEKSA MEP
Manuela RIPA MEP
Michèle RIVASI MEP
Ivan Vilibor SINČIĆ MEP
Tineke STRIK MEP
Paul TANG MEP
Kim VAN SPARRENTAK MEP
Tom VANDENDRIESSCHE MEP
Tiemo WÖLKEN MEP


[1] Politico Pro, https://pro.politico.eu/news/european-commission-high-risk-ai-ban-tech