Pre-Market Requirements, Prior Authorisation and Lex Specialis: Novelties and Logic in the Facial Recognition-Related Provisions of the Draft AI Regulation

The draft Artificial Intelligence Regulation proposed by the European Commission on 21 April 2021 was eagerly anticipated. Its provisions on facial recognition were awaited to an even greater degree, given the heated debate going on in the background between those who support a general ban on this technology in public spaces and those who consider that it has “a lot to offer as a tool for enhancing public security” provided that rigorous red lines, safeguards and standards are introduced. NGOs (such as those supporting the “Reclaim Your Face” campaign) and political groups (such as the Greens) have been calling for a total ban on “biometric mass surveillance systems in public spaces”. Contrary to these calls, in their submissions to the public consultation on the White Paper, some countries (e.g. France, Finland, the Czech Republic and Denmark) claimed that the use of facial recognition in public spaces is justified for important public security reasons, provided that strict legal conditions and safeguards are met (see the Impact Assessment Study, at 18). The results of the public consultation on the White Paper on AI are mixed on the issue of the ban (see here, at 11), but an overwhelming majority of respondents are clearly calling for new rules in this field.

Whilst the idea of a complete ban has been rejected (as we will discuss later in this paper), leading to reactions from the European Data Protection Supervisor (EDPS) and NGOs, the Commission’s draft Regulation attempts to deliver on the idea of introducing new rules for what it calls “remote biometric identification” (RBI)[i] systems, which include not only facial recognition but also other systems for processing biometric data for identification purposes, such as gait or voice recognition.

The objective of this paper is to present the basic features of this proposed set of rules; to decipher the “novelties” among these when compared with existing rules related to the processing of biometric data, especially Article 9 of the General Data Protection Regulation (GDPR) and Article 10 of the Law Enforcement Directive (LED); and to explain the logic behind the new mechanisms and constraints that have been introduced. Part 1 of this paper includes a table that we have produced in order to enable an understanding of the facial-recognition-related provisions of the draft AI Regulation “at a glance”. Part 2 focuses on the rules proposed in the draft to regulate the use of RBI in publicly accessible spaces for the purpose of law enforcement.

The analysis below is based on certain highlights of a first high-level discussion on this topic organised on April 26, 2021 by the Chair on the Legal and Regulatory Implications of Artificial Intelligence (MIAI@Grenoble Alpes), with the cooperation of Microsoft. The workshop, which was held under the Chatham House Rule, included representatives of three different directorates-general of the European Commission (DG-Connect, DG-Just and DG-Home), the UK Surveillance Camera Commissioner, members of the EU Agency for Fundamental Rights (FRA) and Data Protection Authorities (CNIL), members of Europol and police departments in Europe, members of the European and the French Parliaments, representatives of civil society and business organisations, and several academics. A detailed report of this workshop and a list of attendees will be published in the coming days on AI-Regulation.Com, where we have already posted the materials distributed during this workshop, which may be very useful for readers of this blog.

I. Table: RBI Rules, At A Glance

In order to present the Commission’s proposal in a structured and more accessible way, we have produced the following table giving a visual overview of the basic RBI rules and mechanisms in the draft AI Regulation.

[Table: visual overview of the RBI rules and mechanisms in the draft AI Regulation]
The table is divided into two parts (indicated by the blue dotted line) to represent the distinction made by the draft Regulation between obligations for “providers” of RBI systems (i.e., any natural or legal person who develops an RBI system in order to place it on the market and make it available for use); and “users” (i.e., any person or authority that deploys or uses an RBI system which is already available on the market).

1) The Upper Section: Important Pre-Market Requirements for RBI Developers and Providers

When one focuses on the upper section, it is immediately apparent that the draft Regulation proposes some remarkable novelties in relation to the obligations and pre-market requirements for providers that develop RBI systems.

Firstly, these new obligations concern all RBI systems, not only “real-time” RBI systems[ii], whose use by law enforcement authorities (LEAs) is regulated in the lower section of the table. This is very important because it means that these pre-market obligations will also cover “post” RBI systems[iii], used for instance by LEAs to help identify a person who has committed a crime, using photos or video stills. Such identification/forensic methods are already in use, for instance in France, in accordance with Article R 40-26 (3) of the French Code of Criminal Procedure. In 2019, a person who was identified using such a system after committing a burglary in Lyon tried, unsuccessfully, to challenge the use and reliability of post-RBI systems (the Court followed the prosecutor, who explained that the facial recognition system was just one of several tools used by LEAs during the investigation). The Commission suggests that henceforth the development of post-RBI systems should be subject to exactly the same kind of strong pre-market requirements as those that concern “real-time” RBI.

Secondly, these RBI systems, in common with all other “high-risk AI systems” (that the Commission lists in Article 6 and Annex III of the Regulation), will be subject to a series of strict requirements and obligations (Articles 8-15) before they can be put on the market. These include:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system, to minimise risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation which provides all the necessary information about the system and its purpose so that authorities can assess whether it complies with requirements;
  • Clear and adequate information for the user;
  • Appropriate human oversight measures to minimise risk;
  • High level of robustness, security and accuracy.

Thirdly, RBI systems will be subject to stricter conformity assessment procedures than those of all other high-risk AI systems in order to ensure that they meet these requirements. Whereas with other high-risk AI systems, the conformity assessment could be conducted by the system provider based on an ex ante assessment and by means of internal checks, RBI will have to undergo an ex ante third-party conformity assessment, because of the particularly high risks that fundamental rights might be breached. The only exception to this would be if RBI providers fully comply with the harmonised standards that are to be adopted by the EU standardisation organisations in this field. If this were the case, RBI systems providers could replace the third-party conformity assessment with an ex ante internal conformity assessment (Article 43(1)). In addition to ex ante conformity assessments, there would also be an ex post system for market surveillance and supervision of RBI systems by competent national authorities designated by the Member States.

During the April 26 workshop, several very interesting issues were discussed by the participants in relation to the obligations of providers under the draft Regulation, the requirements set for RBI systems and the way the conformity assessment should be conducted. Due to space restrictions we cannot elaborate on these issues here, but they will be discussed in extenso in the Workshop’s Report, to be published shortly.

2) The Lower Section: Constraints for LEA Users of “Real-Time” RBI in Public Spaces

The lower section of the table focuses on the RBI-related provisions in the draft Regulation which concern the use of such RBI systems. Once an RBI system has obtained certification, it can be put on the market and be used by public or private actors in accordance with existing, binding EU law, in particular the GDPR and the LED. However, the draft Regulation intends to introduce new rules and constraints which concern a specific way in which RBI systems are used, namely employing “real-time” RBI in publicly accessible spaces for the purpose of law enforcement (terms defined in Article 3 and also reproduced in our materials accompanying this blog). The draft Regulation provides that using RBI in such a way is to be prohibited, unless it meets the criteria for three exceptions which appear in pink/coral in our table (and in Article 5(1)(d)). One of these exceptions allows for the use of RBI for the “detection, localisation, identification or prosecution of a perpetrator or suspect” who commits one of the 32 categories of criminal offences listed in the Framework Decision on the European Arrest Warrant (in our table, in grey, on the right), on the condition that such offences are punishable in the Member State concerned by a custodial sentence of at least three years.

When one compares these proposals with Article 10 of the LED, which already prohibits the processing of biometric data by LEAs unless it is “strictly necessary”, subject to “appropriate safeguards” and “authorised by Union or Member State law”, one may wonder whether they add anything new to the existing legal framework. The answer is clearly yes, and this for two main reasons.

Firstly, the draft Regulation intends to entirely prohibit certain ways in which RBI is used in publicly accessible spaces for the purpose of law enforcement, such as when the police use facial recognition to identify persons participating in a public protest or persons who have committed offences other than the 32 that appear in our table.

Secondly, and most importantly, the draft AI Regulation aims to introduce an authorisation procedure that does not yet exist in law. Article 5(3) provides that such real-time uses of RBI by LEAs in publicly accessible spaces shall require prior authorisation “granted by a judicial authority or by an independent administrative authority of the Member State” (most probably the relevant Data Protection Authority). LEAs that intend to use the Article 5(1)(d) exceptions will thus need to submit a “reasoned request” based on a Data Protection Impact Assessment (DPIA) which determines whether all the conditions and constraints of the new instrument and the existing data protection legislation, as well as national law, are met.

Having presented our table and the basic structure of the RBI-related provisions of the draft AI Regulation, let’s now look at some interesting issues concerning the use of RBI systems.

II. Use of RBI and Facial Recognition: “Nationalising” the “Ban” Debate – and Other Interesting Issues

Here are some of the highlights of the issues discussed and clarifications given during the April 26 workshop mentioned above.

1) What Happens When Facial Recognition is Used in Other ways? The Draft AI Regulation as Lex Specialis

The participants of the 26 April workshop agreed that the prohibition in Article 5(1)(d) of the draft AI Regulation does not cover a series of other ways in which RBI and facial recognition is used. In particular the draft does not intend to prohibit:

a) Real-time use of RBI in publicly accessible spaces by public authorities for purposes other than “law enforcement” (as defined in Article 3(41)). This means, for instance, that local governments are not prohibited under the draft Regulation from using such systems in order to control access to a venue for purposes other than law enforcement (for instance in order to facilitate and accelerate access by people).

b) Real-time use of RBI in publicly accessible spaces by private actors, such as private security companies (unless they are entrusted by the State to exercise public powers for law enforcement purposes). This means that retailers, transport companies or stadiums are not prohibited under the draft Regulation from using real-time RBI for any purpose, including scanning shoppers entering supermarkets to reduce shoplifting and abuse of staff, or stopping fans who have been banned from entering a stadium.

c) “Post” RBI, including when it is used by LEAs for the purpose of helping identify, using a photo or video-still, a person who has committed a crime.

d) Use of real-time RBI by all actors (including LEAs) in non-publicly accessible spaces, as defined in Article 3(39) and Recital 9.

e) Any use of facial recognition technologies that does not equate to “RBI” within the meaning of Article 3(36) and Recital 8. This covers, for instance, the use of facial recognition for authentication purposes in security access protocols, where the system determines a one-to-one match in order to confirm that a person is who they claim to be (for example, comparing a passport photo to a passenger, or a badge to a person trying to enter a building).

The fact that all these uses of facial recognition are neither prohibited by the draft AI Regulation nor subject to “authorisation” under Article 5(3) does not mean, however, that the Regulation does not cover them, or that they are not otherwise regulated by EU law (infra).

On the one hand, it must be emphasised, once again, that the draft Regulation contains a significant number of novelties vis-à-vis all RBI systems, whether real-time or ex-post, and whether used by public authorities or private actors, in publicly accessible spaces or not. These novelties concern all the pre-market requirements and demanding conformity assessment procedures explained in Part 1 above. These should ensure, for instance, that the RBI systems that are put on the market meet strict requirements in terms of accuracy, risk management, robustness, cybersecurity, etc. They should also help develop systems that do not contain bias based on ethnic, racial, gender, and other human characteristics. This means that in all the scenarios mentioned in (a), (b), (c) and (d) above, only systems that are in principle certified by third bodies to meet these requirements could be used.

On the other hand, it must be stressed that all the uses of RBI and facial recognition mentioned above are already regulated by existing law, namely the GDPR and, when relevant, the LED. Article 9 of the GDPR prohibits, in principle, the processing of biometric data and provides for certain strict exceptions, subject to a series of conditions and safeguards, including the principles of necessity and proportionality. Data protection authorities (DPAs) and courts around Europe have already declared some uses of facial recognition systems illegal because they cannot meet the GDPR requirements. A good example is the February 27, 2020 decision of a French Court, which considered that the “experimental” use of facial recognition in two high schools in the South of France, to grant or refuse access to students, met neither the “consent” requirements of Article 9(2)(a) of the GDPR nor the “less intrusive means” requirement of the “strict” proportionality test under the GDPR.

The participants of the 26 April workshop stressed that the prohibition regarding LEAs that appears in Article 5(1)(d) of the draft AI Regulation is intended to apply as lex specialis with respect to the rules on the processing of biometric data contained in Article 10 of the LED. The Commission therefore decided to focus on the most problematic and dangerous uses of RBI in terms of human rights, namely real-time use by public authorities, in publicly accessible spaces, for law enforcement purposes. While Article 10 of the LED already poses serious obstacles with regard to the processing of biometric data by LEAs (stating that it “shall be allowed only where strictly necessary” and subject to conditions and appropriate safeguards), the Commission sought to go further down this road by regulating such use and the processing of biometric data involved in an exhaustive manner, while also imposing the fundamental condition of prior authorisation by a Court or a DPA when LEAs intend to use real-time RBI. But in any case, for all conceivable uses of facial recognition by public or private actors, existing data protection law still applies.

2) To Ban or Not to Ban? That is NOT a Question for the EU, But for Member States

An issue that was discussed extensively during the 26 April workshop was why the Commission did not accept the invitation of some stakeholders to ban the use of RBI by LEAs in publicly accessible spaces altogether. Several responses were given by participants. These focused on the fact that law enforcement authorities in several EU States consider that the use of RBI could be a useful tool for enhancing public security in some circumstances, subject to appropriate red lines and safeguards. While EU Member States conferred competences on the EU in relation to data protection and respect for fundamental rights, they did not concede powers relating to the maintenance of national security and public order. National security (including the fight against terrorism) remains a competence of the Member States. Identity checks and police controls are also an exclusive competence of the Member States. The fight against crime and the preservation of public order are also mainly competences of the Member States, despite the emergence of EU criminal law, which boosts cooperation between LEAs and deals with certain practical problems at EU level.

Against this background, it was probably difficult for the Commission to consider the use of RBI systems by LEAs exclusively through the data protection prism and to ignore the fact that its proposals will directly affect how the police act on Member State territories, an issue that remains a prerogative of Member States.

It is our understanding that the Commission is therefore attempting, to a certain degree, to “nationalise” the debate around whether to ban the use of RBI by LEAs in publicly accessible spaces. Its draft proposals set strong pre-market requirements and conformity assessments for the development of RBI software. They also impose a general ban on the use of real-time RBI for law enforcement purposes, which can only be lifted if Member States adopt clear and precise rules of national law authorising such use under one or more of the three possible exceptions. If they do so, they will need to respect both the conditions that already exist under Article 10 of the LED and the additional requirements and safeguards introduced by Article 5 of the draft Regulation, including the need for express and specific authorisation by a judicial authority or by an independent administrative authority (most probably a DPA). As Recital 22 explains:

“Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation”.

Each State will therefore be able to engage in a debate on these issues and decide whether it wishes to enact some of the exceptions provided for in Article 5. Some EU Member States might refrain from doing so, in which case they will remain under the ban. Others might decide to enact “rules of national law” authorising some uses by LEAs, in which case they will only be able to use software that has been “certified” as meeting the draft Regulation’s requirements, and they will also have to respect the strict additional conditions set by the future AI Regulation (once, of course, it enters into force).

3) The Meaning of “National Law” in the Draft AI Regulation

The previous discussion brings us to another important issue that needs to be clarified. During the April 26 discussions, some participants stressed that the use of RBI by LEAs in publicly accessible spaces is already prohibited in principle by existing EU Law, so the “ban” in Article 5(1)(d) does not seem, as such, to constitute a big novelty – although the additional conditions and requirements brought in by the draft Regulation certainly do.

Indeed, Articles 8 and 10 of the LED in principle prohibit the processing of biometric data by LEAs unless such processing is “strictly necessary” and is “authorised by Union or Member State law”.

The draft Regulation clearly explains in Recital 23 that it “is not intended to provide the legal basis for the processing of personal data” under Articles 8 and 10 of the LED. This means that Member States that wish to use the exceptions provided by Article 5(1)(d) of the draft Regulation will not be able to rely on the “authorised by Union law” clause. They will only be able to use such exceptions if they adopt clear and “detailed rules of national law” (Recital 22).

This, in turn, raises the question of what the draft Regulation means when it refers to “rules of national law”. Does this necessarily mean legislative measures adopted by national parliaments? Or could it mean simple non-statutory, regulatory measures adopted by competent bodies (for instance the Prime Minister or the Ministers of the Interior, Home or Justice) in EU Member States?

It is striking that the draft Regulation does not explain this issue and does not define what is meant by “rules of national law”. This is clearly an oversight. However, as was stressed during the April 26 workshop, the LED fully applies and its Recital 33 (drafted in the same way as recital 41 of the GDPR) gives a clear answer to this question. According to Recital 33:

“Where this Directive refers to Member State law, a legal basis or a legislative measure, this does not necessarily require a legislative act adopted by a parliament, without prejudice to requirements pursuant to the constitutional order of the Member State concerned…”

Recital 33 of the LED also explains that such a legal basis in relation to a Member State “should be clear and precise and its application foreseeable for those subject to it, as required by the case-law of the Court of Justice and the European Court of Human Rights” and moves on to highlight a series of specific requirements. The draft AI Regulation introduces additional requirements that “national laws” should enact, which concern both the prior authorisation mechanism and the specific limitations (including temporal and geographical) that the “reasoned request” by LEAs should take into consideration in order to obtain such authorisation.

Conclusion

All participants in the April 26 workshop acknowledged that the process of adoption of the draft AI Regulation will be long. As a first step, the European Data Protection Board and the EDPS have been asked to provide a joint opinion on the Commission’s draft proposals within the next eight weeks, a period during which the draft is also open to public consultation. It will be interesting to see, in the next step, to what extent the Council of the EU and the European Parliament (several Committees of which are competing to take the lead on the AI legislative proposal) will be able to find common ground on the issues of RBI and facial recognition. As shown by the initial reactions to the draft proposal, these issues will be crucial in the discussions to come.

The debate has begun and it is essential to have a good understanding of what is proposed by the Commission…

The authors would like to thank Stephanie Beltran Gautron and Maeva El Bouchikhi for their help in drafting this paper.

 

[i] According to Article 3(36) of the draft Regulation, “remote biometric identification system” means an “AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified”.

[ii] According to Article 3(37) of the draft Regulation, ‘‘real-time’ remote biometric identification system’ means “a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention”.

[iii] According to Article 3(38) of the draft Regulation, ‘‘post’ remote biometric identification system’ means “a remote biometric identification system other than a ‘real-time’ remote biometric identification system”.