German headscarf cases at the ECJ: a glimmer of hope?

On 15 July 2021, the ECJ handed down its judgment in two joined cases, both referred by German courts, concerning the wearing of Islamic headscarves or hijabs at work: IX v Wabe eV and MH Müller Handels GmbH v MJ. These cases have been widely reported under headlines stating that European employers can dismiss employees for wearing a headscarf (e.g. France24, Al Jazeera, the Times), and the judgment has been criticised for fuelling Islamophobia in Europe. Although the judgment does indeed allow employers to ban the wearing of hijabs at work, it does so only under certain conditions and contains some positive developments, clarifying the Court’s previous judgments in headscarf cases (Achbita and Bougnaoui). In this sense, the judgment is a small indication that the ECJ is moving, even if very slowly, towards greater protection of Muslim hijab-wearing employees.

1. The facts

In Wabe, a Muslim employee of a company running a number of nurseries was asked, on returning from parental leave, to stop wearing a headscarf. During her leave, the company had introduced a neutrality policy requiring employees to refrain from wearing any visible signs of political, ideological or religious beliefs. The employee refused to remove her headscarf and, after two official warnings, was released from work. The company’s neutrality policy did not apply to employees who did not come into contact with customers. The claimant, IX, challenged this as direct discrimination on grounds of religion or belief and as discrimination on the grounds of gender and ethnic origin.

Müller concerned a Muslim employee of a company which runs a number of chemist shops. On her return from parental leave, the employee wore a headscarf, which she had not done before. Her employer asked her to remove it, as company rules prohibited the wearing of any prominent, large-scale signs of religious, philosophical or political convictions. This rule applied to all shops and aimed to preserve neutrality and avoid conflicts between employees. After she twice refused to do so, she was instructed to attend work without the headscarf. The claimant, MJ, also challenged these instructions as discrimination.


Why the proposed Artificial Intelligence Regulation does not deliver on the promise to protect individuals from harm

In April 2021 the European Commission circulated a draft proposal entitled ‘Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (hereinafter AI Regulation). The purpose of the proposed AI Regulation is to harmonise the rules governing artificial intelligence technology (hereinafter AI) in the European Union (hereinafter EU) in a manner that addresses ethical and human rights concerns (p. 1, para. 1.1). This blog post argues that the proposed AI Regulation does not sufficiently protect individuals from harms arising from the use of AI technology. One of the reasons for this is that policy makers did not engage with the limitations of international human rights treaties and the EU Charter regarding the protection of fundamental rights in the digital context. If policy makers want to achieve their objective of developing ‘an ecosystem of trust’ by adopting a legal framework on ‘trustworthy’ AI (p. 1, para. 1.1), then they need to amend the draft AI Regulation: individuals will find it hard to place trust in the use of AI technology if the Regulation does not sufficiently safeguard their interests and fundamental rights. This contribution will use the prohibition of discrimination to illustrate these concerns. First, it will be shown that international human rights law inadequately protects human diversity. By overlooking this issue, policy makers failed to detect that the representation of individuals in AI mathematical models distorts their identities and undermines the protection of human diversity. Second, it will be demonstrated that defining discrimination by reference to adverse treatment of individuals on the basis of innate characteristics leads to insufficient protection of individuals in the digital context.

Context

The draft AI Regulation arguably does not deliver on its promise ‘to ensure a high level of protection’ for fundamental rights (p. 11, para. 3.5), even if it is designed to be consistent with the European Union Charter of Fundamental Rights (hereinafter EU Charter) and existing EU legislation (p. 4, para. 1.2). Admittedly, the draft AI Regulation states that ‘the proposal complements existing Union law on non-discrimination with specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems complemented with obligations for testing, risk management, documentation and human oversight throughout the AI systems’ lifecycle’ (p. 4, para. 1.2). But the drafters do not engage with the widely shared concern among scholars that the prohibition of discrimination, as formulated in international human rights treaties and in domestic law predating the development of AI, does not sufficiently protect individuals against all relevant harms. The experts serving on the Council of Europe Ad hoc Committee on Artificial Intelligence raised the insufficiency of existing fundamental rights guarantees in a report released in 2020 (p. 21-22, para. 82). The concerns about gaps in the legal protection offered by international and domestic human rights provisions are also relevant for the EU Charter, as it is modelled on the European Convention on Human Rights and other international human rights treaties. All EU Member States are party to the United Nations human rights treaties which contain provisions on the prohibition of discrimination, including the Convention on the Rights of Persons with Disabilities (p. 6-7 European Parliament), the International Covenant on Civil and Political Rights, the International Covenant on Economic, Social and Cultural Rights, the Convention on the Elimination of All Forms of Discrimination against Women (hereinafter CEDAW), the International Convention on the Elimination of All Forms of Racial Discrimination and the Convention on the Rights of the Child (p. 24 European Union Agency for Fundamental Rights and Council of Europe).


EU lawmaking in the Artificial Intelligent Age: Act-ification, GDPR mimesis, and regulatory brutality

A few months ago the authors identified, in respective posts kindly hosted on this blog, two phenomena observable in recent EU law, namely its ‘act-ification’ and its ‘GDPR mimesis’. The first denoted the tendency of the EU legislator, perhaps influenced by its US counterpart, to introduce eponymous ‘Acts’ rather than anonymous, sequentially numbered pieces of legislation. The second described the GDPR’s heavy-handed influence on all new pieces of EU law that aim to protect individuals from perceived perils of technology. Both trends were soon vindicated, and are visible, in the Commission’s recent release of a draft Artificial Intelligence (‘AI’) Act. After a discussion of both trends, we offer a reflection on regulatory brutality and the apparent lack of concern on the EU’s side for legal coherence in domestic legal systems.

Act-ification in the Artificial Intelligence Act 

The ‘act-ification’ of EU law continues unhindered: the new draft follows the pattern of its predecessors, introducing in a parenthesis a short title containing the word ‘Act’. Its full title is ‘Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’. Once again, the Commission deliberately chose, first, to introduce a short title to refer to its legislative initiative and, second, to include in it the word ‘Act’ instead of, for example, ‘Regulation’, which would have been the obvious choice (see, for example, the GDPR). The draft AI Act is the latest addition to a long series of other EU law ‘Acts’, as outlined in our previous blog post. The authors welcome this development, because of the proximity and intimacy it creates between EU law and Europeans, and look forward to a point in the hopefully not so distant future when EU law will require Popular Name Tools, as is also the case in US law (where Popular Name Tools, or Tables, are by now necessary to translate the short titles given to many laws (e.g. PATRIOT Act or CLOUD Act) into the citations that locate them in the correct section of the U.S. Code).

The EU regulates AI but forgets to protect our mind

Since the publication of the European Commission’s proposal for a Regulation on Artificial Intelligence (AI Act, hereinafter AIA) in April 2021, several commentators have raised concerns or doubts about the draft. Notably, on 21 June the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) released a Joint Opinion on the AIA, suggesting many changes to the European Commission’s proposal.

We consider the AIA one of the most advanced and comprehensive attempts to regulate AI in the world. However, we concur with the EDPB and the EDPS that the AIA, in its current version, presents several shortcomings. We identify three main areas where amendment is needed: (i) the regulation of emotion recognition, (ii) the regulation of biometric classification systems and (iii) the protection against commercial manipulation. Partly building upon the thoughtful and comprehensive Joint Opinion of the European supervisory authorities, we will try to make a case for greater regulatory focus on the nexus between artificial intelligence and the human mind.

The Palimpsest of Conformity Assessment in the Proposed Artificial Intelligence Act: A Critical Exploration of Related Terminology

The European Commission’s (EC) Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act) has drawn extensive attention as the ‘first ever legal framework on AI’, resuming last year’s discussions on the AI White Paper. The aim of the Proposal and the mechanisms it encompasses is the development of an ecosystem of trust through the establishment of a human-centric legal framework for trustworthy AI (Explanatory Memorandum of the Proposal, p. 1). On the first page of the Memorandum alone, the word ‘trust’ appears five times, while the actual text of the Proposal refers at least ten times to the idea of trust, mainly in the recitals. This comes as no surprise given that trust is a core component of the European Single Market, and the legislative action aims ‘to ensure a well-functioning internal market for artificial intelligence systems’ (Explanatory Memorandum, p. 1). The ‘importance of creating the trust that will allow the digital economy to develop across the internal market’ also appears in Recital 7 of the EU General Data Protection Regulation (GDPR).

For individuals to trust that AI-based products are developed and used in a safe and compliant manner, and for businesses to embrace and invest in such technologies, a series of novelties has been introduced in the proposed Act. Those novelties include, but are not limited to, (i) the ranking of AI systems according to the level of risk stemming from them (unacceptable, high, low and minimal) and (ii) the legal requirements for high-risk AI systems. To ensure compliance with these requirements, and with a view to ‘ensuring a high level of trustworthiness’ (Recital 62 of the proposed Act), the EC proposes the procedure of conformity assessment as the appropriate tool. A table which clearly schematises this procedure was created by the AI Regulation Chair-MIAI in April 2021 and was included in a previous blog post on the European Law Blog.

In our analysis, we briefly explore the legislator’s choice of conformity assessment, as introduced in the proposed Act, as a means to increase trust. In this light, we identify a two-fold challenge. On the one hand, ‘conformity’, despite its well-established usage in product compliance legislation, may acquire an entirely new meaning owing to the specificities and intangibility of AI. On the other hand, this choice could potentially conflict with (a) other parallel assessments and requirements imposed on technology providers, namely the data controllers’ obligations as established after the advent of the GDPR, and (b) previous definitions and conformity assessment procedures implemented for other products.

To study this topic and highlight its complexity, we use product compliance legislation in the European Union (EU), the proposed Artificial Intelligence Act and the GDPR as our reference points. We first investigate the origins and definition of conformity assessment. We then decipher the scope of ‘conformity’ by comparing it to three closely related terms found in the text of the proposed Act and in the GDPR: ‘compliance’, ‘impact’ and ‘accountability’. These terms have been selected for different reasons. On the one hand, conformity, accountability and impact all share the element of compliance. Accountability, as a principle defined in the GDPR (Article 5(2)), mirrors the data controller’s responsibility for, and his or her ability to demonstrate, compliance. Impact, understood as the impact of the envisaged processing operations on the protection of personal data (Article 35(1) GDPR), constitutes, like conformity in the proposed Act, the object of an assessment, which should include the measures, safeguards and mechanisms envisaged for mitigating risk and for demonstrating compliance with this Regulation. On the other hand, compliance has been selected not only because it permeates all the other terms, but also because of its semantic affinity to conformity, which often gives the impression that the two terms can be used interchangeably.


Digital Markets Act: beware of procedural fairness and judicial review booby-traps!

On 15 December 2020, the European Commission presented its ambitious Digital Services Act (DSA) and Digital Markets Act (DMA). Those Acts, which at this stage are only legislative proposals for future EU Regulations (see on the ‘act-ification’ of the EU legislative process here), constitute a major step forward in the regulation of digital markets in the European Union (EU). The DMA is particularly novel and important, as it sets up an ex ante regulatory framework complementary to, yet also fundamentally different from, existing EU competition law provisions. If adopted, it would contribute significantly to shaping the EU’s particular approach to digital markets regulation (though that approach is multi-faceted, as argued here). Although the DMA is well structured and carefully thought through, this blog post submits that it pays insufficient attention to the requirements flowing from the fundamental right to a fair trial (see also here). Although the proposal explicitly hints at respect for that fundamental right, it neither acknowledges nor addresses the differences in interpretation that exist between Article 6 of the European Convention on Human Rights (ECHR) and Article 47 of the EU Charter of Fundamental Rights. As a result of those different interpretations, DMA enforcement does not appear to be compatible with Article 6 ECHR, despite Article 52(3) of the EU Charter treating that provision as a minimum standard of protection within the EU legal order. Now that the EU is once again negotiating accession to the ECHR (see here), one could legitimately ask whether and for how long this could be tolerated. It is submitted, therefore, that ignoring this issue at the negotiation and drafting stage is like inserting a potential booby-trap into the DMA’s institutional design. This post outlines the key features of the DMA before summarising its enforcement framework and addressing the problematic ‘fair trial’ elements underlying it.

Reining in digital gatekeepers

In essence, the DMA seeks to regulate big tech companies in order to prevent them from becoming super-dominant market players. Under EU competition law, becoming dominant other than by merging with another enterprise is not in itself illegal. Articles 101 and 102 TFEU intervene only in an ex post manner, prohibiting and addressing anticompetitive practices that have already taken place. The DMA would enable the European Commission to intervene before such service providers become too powerful and, as a result, start abusing their newly acquired dominant position.


No collective redress against foreign companies in cases of purely financial damage: Case C-709/19 VEB v. British Petroleum

The decision of the Court of Justice of the European Union (CJEU) in Vereniging van Effectenbezitters (VEB) v British Petroleum plc (BP), delivered on 12 May 2021, came as a major blow to Dutch claim associations suing foreign defendants before domestic courts. The CJEU ruled that Article 7(2) of the Brussels I bis Regulation (Regulation No 1215/2012) must be interpreted as meaning that the direct occurrence of a purely financial loss resulting from investment decisions does not allow international jurisdiction to be attributed to a court of the Member State in which the bank or investment firm is located. This is true even if the investment decision was taken on the basis of misleading information published worldwide by an internationally listed company. Only the courts of the Member State where a listed company must fulfil its statutory reporting obligations have international jurisdiction on that ground.

Brexit and the free movement of goods: a bitter goodbye to Cassis?

Brexit and the subsequent Trade and Cooperation Agreement (hereafter TCA) marked the beginning of the bumpy and unprecedented road of European disintegration. Fear of losing sovereignty and regulatory control was the driving force behind the UK’s longstanding reluctance towards further European integration and, eventually, its exit from the Union altogether. The Leave campaign deployed its ‘take back control’ slogan and promised divestiture from EU institutions and policies. What does ‘taking back control’ entail? In essence, and although there is no consensus on what precisely the referendum vote implied, we may assume that it means that legislation and regulation affecting the UK should be enacted (or at least be believed to be enacted) by the UK. Will this narrative be upheld in practice, though? This post draws some answers from the TCA provisions on trade in goods and technical standardisation. In a nutshell, it shows that, when it comes to trade in goods, the UK, although it has regained the theoretical opportunity to depart from EU harmonising legislation and technical standards, has not regained control de facto and is, in fact, losing opportunities when it comes to technical standardisation and market access.
