Why the proposed Artificial Intelligence Regulation does not deliver on the promise to protect individuals from harm

In April 2021 the European Commission published a draft proposal entitled ‘Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (hereinafter AI Regulation). The purpose of the proposed AI Regulation is to harmonise the rules governing artificial intelligence technology (hereinafter AI) in the European Union (hereinafter EU) in a manner that addresses ethical and human rights concerns (p. 1, para. 1.1). This blog post argues that the proposed AI Regulation does not sufficiently protect individuals from harms arising from the use of AI technology. One of the reasons for this is that policy makers did not engage with the limitations of international human rights treaties and the EU Charter regarding the protection of fundamental rights in the digital context. If policy makers want to achieve their objective of developing ‘an ecosystem of trust’ by adopting a legal framework on ‘trustworthy’ AI (p. 1, para. 1.1), then they need to amend the draft AI Regulation. Individuals will find it hard to place trust in the use of AI technology if the Regulation does not sufficiently safeguard their interests and fundamental rights. This contribution uses the prohibition of discrimination to illustrate these concerns. First, it shows that international human rights law inadequately protects human diversity. By overlooking this issue, policy makers failed to detect that the representation of individuals in AI mathematical models distorts their identities and undermines the protection of human diversity. Second, it demonstrates that defining discrimination by reference to adverse treatment of individuals on the basis of innate characteristics leaves individuals insufficiently protected in the digital context.

Context

The draft AI Regulation arguably does not deliver on its promise ‘to ensure a high level of protection’ for fundamental rights (p. 11, para. 3.5), even though it is designed to be consistent with the European Union Charter of Fundamental Rights (hereinafter EU Charter) and existing EU legislation (p. 4, para. 1.2). Admittedly, the draft AI Regulation states that ‘the proposal complements existing Union law on non-discrimination with specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems complemented with obligations for testing, risk management, documentation and human oversight throughout the AI systems’ lifecycle’ (p. 4, para. 1.2). But the drafters do not engage with the concern, now widely shared among scholars, that the prohibition of discrimination as formulated in international human rights treaties and in domestic law predating the development of AI does not sufficiently protect individuals against all relevant harms. The experts serving on the Council of Europe Ad hoc Committee on Artificial Intelligence raised the insufficiency of existing fundamental rights guarantees in a report released in 2020 (p. 21-22, para. 82). The concerns about gaps in the legal protection afforded by international and domestic human rights provisions are also relevant for the EU Charter, as it is modelled on the European Convention on Human Rights and other international human rights treaties. All EU Member States are party to United Nations human rights treaties which contain provisions on the prohibition of discrimination, including the Convention on the Rights of Persons with Disabilities (p. 6-7 European Parliament), the International Covenant on Civil and Political Rights, the International Covenant on Economic, Social and Cultural Rights, the Convention on the Elimination of All Forms of Discrimination against Women (hereinafter CEDAW), the International Convention on the Elimination of All Forms of Racial Discrimination and the Convention on the Rights of the Child (p. 24 European Union Agency for Fundamental Rights and Council of Europe).

Problem 1: insufficient protection of human diversity

There is longstanding awareness among human rights scholars and practitioners of the limitations of international human rights law. Oddný Mjöll Arnardóttir explains that in the international human rights law context ‘the development of the principle of equality has occurred in three phases or eras’ (p. 41). Each development responded to a shortcoming in the ‘traditional construction of equality’ (p. 47-48). Here I will discuss one example of an ongoing critique that the operation of international human rights law results in representational harm. Ekaterina Krivenko criticizes international human rights law for defining sex and gender by reference to a binary category of man/woman (p. 140). To illustrate, the CEDAW Committee defines the term sex as referring to the biological difference between men and women (p. 2, para. 5). It defines gender as referring to the characteristics which society ascribes to such biological differences (p. 2, para. 5). We can additionally see this in Article 21 of the EU Charter, which prohibits ‘any discrimination based on any ground such as sex’. This critique is relevant for the AI Regulation because the Regulation is designed to be consistent with the EU Charter. The concern stems from the fact that, according to medical science, it is impossible to define with scientific certainty the line which separates the categories of male and female (p. 140). Thus, it is artificial to separate people into men and women (p. 140). The treaties fail to recognise that individuals can perceive their gender identity in complex ways which are not linked to their physical embodiment (p. 142). Consequently, the treaties function in a manner that excludes individuals whose experiences of gender and sexual diversity do not fit the binary definition (p. 143), such as people with transgender and intersex identities (p. 143). Krivenko also criticizes the United Nations Committee which interprets the CEDAW for referring to intersex individuals as women (p. 76). To address the inadequacy of binary theories of gender, Surya Monro developed a ‘gender pluralist theory’ (p. 19). Her theory holds that sex and gender are on a spectrum (p. 19). This means that there are spaces for identities which are fluid, shifting and which elide categorisation (p. 19). Similarly, the use of the term sex can lead to paradoxical outcomes when describing how multigender individuals experience their identities. There is no single way of experiencing a multigender identity. To illustrate, some multigender individuals experience their identities as both female and male, while others view themselves as not having a gender at all. A literal interpretation of the term sex can result in individuals who regard themselves as having no gender identity being defined as either male or female. Thus, when the term sex is defined as referring to the biological difference between men and women, multigender individuals become classified in a manner which does not correspond to how they experience their embodiment. In order to reflect human diversity, international human rights law should define sex and gender as fluid.

Had EU policy makers taken this shortcoming into account when drafting the AI Regulation, they would have realised that it is impossible to represent individuals in a mathematical model in a manner which corresponds to how individuals define their identities. Currently, Article 1(b) of the draft AI Regulation merely lays down ‘specific requirements for high-risk AI systems and obligations for operators of such systems’ (p. 38) instead of banning such systems. When developers programme AI systems, they represent information using mathematical equations or vectors (p. 312). A distinct mathematical representation defines each object or category; one would need a distinct representation for each gender identity, for example. The use of vectors to express definitions of gender in AI mathematical models perpetuates binary gender definitions and variations on them. This stems from the fact that the AI system uses the distance in mathematical space between two vectors as a means to measure how different two categories or objects are (p. 1). The process of labelling various gender identities and expressing them as mathematical equations is incompatible with recognising that individuals have plural gender identities. This approach reproduces the practice of creating human difference through categories and a system of classification (p. 1). To give effect to the aforementioned gender pluralist theory, one would need to represent gender as infinity in mathematical space. Yet the representation of gender as infinity cannot be meaningfully implemented in AI systems. A related concern is that the representation of individuals using mathematical models necessarily changes their identities. Russ White argues that AI systems ‘flatten’ individuals because they do not treat them in their entirety. AI systems alter individual identities because the AI ascribes meaning to the data about a particular individual on the basis of data about other individuals. The AI places individuals into groups based on similarity in their data (p. 24) and uses group data to make predictions about an applicant (p. 107).
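To make the role of vectors and distance concrete, the following minimal sketch (my own hypothetical illustration, not drawn from the draft Regulation or any cited source; the labels and numbers are invented) shows how a system that encodes gender as fixed vectors can only compare identities by measuring the distance between those vectors:

```python
# A minimal sketch (my own hypothetical illustration, not drawn from the draft Regulation
# or any cited source) of how an AI system might encode gender labels as fixed vectors
# and compare them by measuring distance between those vectors.
import numpy as np

# Every category the system can "know" must be assigned a fixed vector in advance.
gender_vectors = {
    "female": np.array([1.0, 0.0, 0.0]),
    "male": np.array([0.0, 1.0, 0.0]),
    "non-binary": np.array([0.0, 0.0, 1.0]),  # each further identity needs yet another fixed vector
}

def category_distance(a: str, b: str) -> float:
    """Euclidean distance between two category vectors: the system's proxy for how different they are."""
    return float(np.linalg.norm(gender_vectors[a] - gender_vectors[b]))

print(category_distance("female", "non-binary"))  # a single number stands in for a complex identity
```

Any identity that the developer has not enumerated in advance simply has no vector, which is why a fluid or pluralist understanding of gender cannot be captured by this form of representation.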

The representation of individuals in AI mathematical models can undermine the goals of inclusion and the protection of human diversity. Anja Bechmann explains that individuals whose activity does not appear ‘average’ can ‘light up’ in AI mathematical models (p. 85). She criticizes the use of AI technology for subjecting individuals from underrepresented groups to the logics of normalisation (p. 88). The upshot is that individuals whose behaviour patterns, preferences or life experiences do not parallel those of the average individual, as constructed through the AI system, will not have access to certain opportunities. To illustrate, imagine that one defines the possession of communication skills as the ability to establish a large professional network through attending professional events. According to Psychology Today, introverted individuals lose energy in social gatherings and may experience a crowded cocktail party as torture. There is a spectrum regarding the degree to which individuals are introverted. The use of such a definition can result in highly introverted individuals appearing in the mathematical model as lacking communication skills and as deviating from the average. The highly introverted candidate experiences harm when the employer’s use of AI nudges that individual to adopt patterns of interaction which more closely resemble the average. Furthermore, the individual experiences harm when depicted as deviating from the average.
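The following minimal sketch (again my own hypothetical illustration; the feature, data and threshold are invented, and no real hiring tool is quoted) shows how such ‘deviation from the average’ logic can single out the highly introverted applicant:

```python
# A minimal hypothetical sketch of the "deviation from the average" logic described above.
# The feature, data and threshold are invented for illustration; no real hiring tool is quoted.
import numpy as np

# Professional events attended per year by a pool of applicants (hypothetical data);
# the last applicant is highly introverted.
events_attended = np.array([12, 15, 9, 14, 11, 2])

mean = events_attended.mean()
std = events_attended.std()

# Applicants more than one standard deviation below the mean "light up" as deviating.
flagged = events_attended < mean - std
print(flagged)  # [False False False False False  True] -> the introverted applicant is singled out
```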

There is a concern that the employment of AI will make it more difficult to challenge the status quo and oppressive societal practices. Sara Ahmadi pointed out to me in a private conversation that AI systems often entrench existing definitions and categories pertaining to our lived reality. The problem of a distortion of individual identities through representation in AI mathematical models cannot be remedied by removing certain personal identity attributes from the model. Anya Prince and Daniel Schwarcz explain that an AI system seeks out all information which it can employ to make a prediction (p. 7-8). It can use information which appears neutral but which is in fact linked to the characteristic which the programmer does not want the AI to utilise (p. 8). For instance, the input zip code may indicate an applicant’s race (p. 206). This creates a risk that discrimination remains hidden (p. 4). To address this concern, policy makers can use the AI Regulation to prohibit the use of AI in contexts where it is necessary to represent the individual’s identity in the mathematical model to enable the system to perform its function. They can amend the AI Regulation to prohibit the use of AI in education, vocational training, employment, law enforcement, biometric identification, banking services, migration and administration of justice contexts. Currently, Annex III to the AI Regulation treats these uses of AI as high-risk. Moreover, it is desirable to introduce a provision in the EU Charter protecting human diversity.
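The mechanism of such proxy effects can be illustrated with a further minimal hypothetical sketch (the data are entirely invented and are not taken from Prince and Schwarcz or from the draft Regulation): a decision rule that never sees the protected attribute still reproduces the disparity when it relies on a correlated input such as a zip-code area.

```python
# A minimal hypothetical sketch of "proxy" discrimination: the data are entirely invented
# and the rule is deliberately simplistic. Even a decision rule that never sees the
# protected attribute reproduces the disparity when it relies on a correlated input.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical population: group membership correlates strongly with zip-code area.
group = rng.integers(0, 2, n)                                # the protected attribute (0 or 1)
zip_area = np.where(rng.random(n) < 0.9, group, 1 - group)   # 90% of each group lives in "its" area

# A decision rule that only looks at the apparently neutral zip-code area.
approved = zip_area == 0

# Yet approval rates differ sharply between the two protected groups.
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2f}")
```

Removing the protected attribute from the inputs therefore does not remove the disparity; it merely makes it harder to see.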

Problem 2: overly narrow scope of protection

International human rights treaties and the EU Charter confer limited protection in the context of the employment of AI because they prohibit adverse treatment based on the possession of an innate or inherited characteristic which is difficult to change. But, as Sandra Wachter explains, individuals may also receive a negative assessment by AI due to having a particular subjective preference rather than due to an innate characteristic (p. 56). This could be ownership of a dog or the fact of being a video gamer (p. 56). There is little difference between excluding individuals on the basis of their innate characteristics and excluding them on the basis of their preferences. Individuals’ preferences are intimately linked to their identities and to their ability to pursue their vision of a good life. Wachter believes that this gap in protection can be remedied through legal interpretation (p. 66). One would need to extend the prohibition, developed in the case law of the European Court of Human Rights, of discrimination against individuals who have an association or affinity with a protected group so that it applies to such cases (p. 66-68). For instance, if an applicant visits jazz websites and certain ethnic groups are more likely to listen to jazz music, then it is unlawful to issue a negative decision to the applicant on the basis of music preferences (p. 40). Wachter’s solution does not, however, address all harms. In the context of the use of AI, it may be impossible to detect an association between a negative decision and a particular preference or conduct. I argue that there may be degrees of correlation between the decision outcome and the possession of a particular trait, preference or pattern of conduct (p. 13). This stems from the fact that the AI will use thousands of inputs relating to the preferences and conduct of the individual as a basis for the decision.

Moreover, it should be noted that the EU Charter, the EU Directives prohibiting discrimination and the AI Regulation do not regulate harms which arise at the societal level. They only address some harms which manifest themselves at the level of the individual and the group. This stems from the fact that these instruments make the individual the focus of the protection. To illustrate, Article 2(1) of the EU Racial Equality Directive defines prohibited conduct in terms of direct and indirect discrimination. Article 2(2)(a) defines direct discrimination as occurring ‘where one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of racial or ethnic origin.’ According to Article 2(2)(b), indirect discrimination is the use of ‘an apparently neutral provision, criterion or practice [which] would put persons of a racial or ethnic origin at a particular disadvantage compared with other persons, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are appropriate and necessary.’

Because the two tests for the prohibition of discrimination in the EU Racial Equality Directive require that one evaluate what impact a practice has either on an individual with a protected characteristic or on the protected group as a whole, they oblige the analyst to examine the relationship between the design of an AI system and the decision outcome(s). The principle of indirect discrimination can be used only to a limited extent to examine the effects which the use of AI produces at the societal level, because it requires a relationship between the practice and a detrimental outcome for the group. This explains why the provisions in the draft AI Regulation focus on the AI system and its design. For instance, Articles 61(1) and 61(2) require the provider to set up a post-market monitoring system which collects data on the performance of the high-risk AI system and evaluates its continuing compliance with the draft Regulation (p. 75). Article 65(2) obliges a market surveillance authority to evaluate whether the AI system complies with the draft Regulation (p. 77). In case of non-compliance, this authority can order the ‘operator’ under Article 65(2) either to withdraw the system from the market or to make it compliant (p. 77).

It is problematic that the AI Regulation focuses on the AI system in the abstract. Computer scientist Moshe Vardi argues that the harmful impacts of AI technology cannot be detected by focusing on the system as such; there is a need to consider how the technology operates in society. The deployment of AI technology produces effects at the societal level. Florian Eyert, Florian Irgmaier and Lena Ulbricht maintain that the automation of decision-making processes using AI will result in a new social order (p. 48). My research findings are that the use of AI to represent human identities can trigger, in the minds of developers and users, psychological processes similar to those involved when individuals subject others to unjust treatment. Moreover, the use of AI can change, to their detriment, how individuals form their identities. The shortcoming of the definitions of direct and indirect discrimination is that they concentrate on the distinct moment in time when the adverse treatment takes place. Consequently, they ignore the fact that the use of AI can influence internal psychological processes and produce significant changes over time. It is, however, beyond the scope of this blog contribution to consider all effects that the employment of AI can produce at the societal level.

Conclusion

In order to address the above concerns, EU policy makers can take the following steps:

1) They can investigate what effects the use of AI has on individuals in light of the fact that it is embedded in social processes.

2) They can produce an inventory of findings regarding what types of effects the use of AI technology produces on the societal level. They should pay particular attention to the impact on individuals who experience disadvantage, exclusion and social barriers.

3) They can produce a study of how the formulation of fundamental rights in international human rights treaties, the EU Charter and EU legislation results in limited protection of individuals in the digital context.

4) They can propose an amendment to the EU Charter to introduce more generous protections of rights in light of the digital context.

5) They can use a wider definition of harm which extends beyond the protection of fundamental rights as a basis for defining the scope of the AI Regulation.

6) They need to revise the draft AI Regulation to address gaps in protection arising from the use of the EU Charter and EU law as a benchmark.

7) They need to regulate all uses of AI that can affect the enjoyment of fundamental rights instead of confining regulation to high-risk systems.

8) They should prohibit the use of AI in certain contexts altogether.