Redress: What is the problem?

In the wake of the Schrems II ruling last July by the Court of Justice of the European Union (CJEU) invalidating the EU-US Privacy Shield, redress became a major sticking point in efforts to preserve transatlantic data flows. In that ruling, the CJEU found fault with how the United States affords individuals ‘redress’ when they believe they have been the targets of illegal surveillance. The CJEU invoked Article 47 of the European Union Charter of Fundamental Rights (Charter), which provides that ‘[e]veryone whose rights and freedoms guaranteed by the law of the Union are violated has the right to an effective remedy before a tribunal.’ Under Privacy Shield, the U.S. designated a senior State Department official to serve as an ‘ombudsperson’ for reviewing and addressing individual complaints. The CJEU ruled that the ombudsperson mechanism failed to meet Article 47 requirements because it lacked independence and could not issue binding decisions.

For a country with 1.3 million lawyers that is no stranger to litigation, it may seem surprising that the topic of obtaining a remedy would be such a challenging one. And yet, it undeniably is, not only for the U.S. but also for any democracy seeking to protect the nation from external threats.

The reason? Secrecy. Protecting national security requires secrecy. Once foreign terrorists, cyber hackers, or spies realise that their identities, aliases, and online activities have been compromised, they will change their behaviour to avoid detection. A fully transparent intelligence service is a fully ineffective one.

On the other hand, open democracies require transparency to make sure governments remain accountable to the will of the people. As former U.S. Supreme Court Justice Louis Brandeis famously said in an early article about the importance of transparency, ‘[s]unlight is said to be the best of disinfectants.’

For 14 years, I served as the Civil Liberties Protection Officer for the Office of the Director of National Intelligence (ODNI). For the latter part of my tenure, I led the efforts of the Intelligence Community (IC) to enhance transparency, which gave me a unique vantage point on the inherent tension between necessary secrecy and public accountability. The IC has made dramatic progress in enhancing transparency, which continues to this day—you can see evidence of that progress on sites such as IC on the Record and intel.gov. Nonetheless, the IC must remain watchful to prevent the unauthorised disclosure of information that would harm national security.

Here lies the crux of the redress problem. How can claimants show they have suffered a loss that requires compensation, or a wrong that must be made right, if they do not know whether the government collected their data?

On the theory that one must first understand the problem to identify the solution, in this article, we will explore the redress challenge. In our next article, we will compare the U.S. approach with that of Europe. And in our third article, we will examine potential solutions to the redress problem.

New and extensive data processing powers proposed for Europol

The European Law Blog will be taking a summer recess. We’ll be back in September with new commentaries. Please do send us your contributions throughout this period and we will get back to you in due course. Happy Holidays to all our readers!

Introduction

The European Commission plans to considerably expand the data processing powers of Europol, the EU’s law enforcement agency. In December 2020, the Commission published a proposal for a Regulation amending Regulation 2016/794 (Europol Regulation). In the Commission’s view, increasingly digital and complex security threats necessitate new powers for Europol so that it can continue to effectively support and strengthen action by national authorities.

The proposed amendments to the Europol Regulation can be divided into nine thematic blocks:

  • Enabling Europol to cooperate effectively with private parties;
  • Enabling Europol to process large and complex datasets;
  • Strengthening Europol’s role in research and innovation;
  • Enabling Europol to enter data into the Schengen Information System;
  • Strengthening Europol’s cooperation with third countries;
  • Strengthening Europol’s cooperation with the European Public Prosecutor’s Office;
  • Clarifying that Europol may request the initiation of an investigation of a crime affecting a common interest covered by a Union policy;
  • Strengthening the data protection framework applicable to Europol;
  • Other provisions, such as support for Member States’ high value target investigations, information processing for judicial proceedings, and increased parliamentary scrutiny.

As a preliminary point, it should be stressed that parts of the proposed amendments aim to legalise personal data processing activities which Europol is already conducting, such as the processing of large datasets and the processing of data about individuals who are not linked to any criminal activity. After an inquiry, the EDPS in its decision of September 2020 admonished Europol for these (currently) unlawful data processing activities and urged it to mitigate the risks they create. The Commission responded to the EDPS’s admonishment by proposing amendments to the Europol Regulation that would create a legal basis for Europol’s extensive data processing activities.

This contribution focuses on the new data processing powers of Europol. These powers relate to personal data which Europol receives via national intermediaries and private parties or which it collects via publicly available sources. The contribution makes four points. First, there is a tension between Europol’s new proactive data processing powers and its legally mandated supportive role. Second, the proposed amendments follow a problematic logic in which new data processing powers for Europol are justified by the mere fact that Europol receives large datasets. Third, the new data processing powers are regulated by open norms which are hard to oversee or supervise. Fourth, the proposed amendments incentivise voluntary data sharing by private parties with Europol, thereby circumventing procedural safeguards for fundamental rights.

MAKING AI’S TRANSPARENCY TRANSPARENT: notes on the EU Proposal for the AI Act

Transparency is one of the core values promoted by the EU for the development, deployment, and use of AI systems. Since the start of the policy process to regulate AI, all the relevant documents – the Ethics Guidelines for Trustworthy AI issued by the High-Level Expert Group on AI in December 2018 (AI-HLEG Ethics Guidelines for Trustworthy AI), the White Paper on AI issued by the European Commission in February 2020 (EC White Paper on AI), and the European Parliament’s Framework of ethical aspects of AI, robotics and related technologies of October 2020 (EP Report on AI Framework) – included transparency in the ethical or legal frameworks they respectively proposed.

The first EU legislative proposal on the matter – the European Commission’s proposal for a Regulation on Artificial Intelligence of April 2021 (AI Act) – follows this policy direction towards transparency of AI and includes several requirements explicitly devoted to it: Article 13 (‘Transparency and information provision’) and Article 52, which makes up the whole of Title IV (‘Transparency obligations for certain AI systems’). Yet the precise meaning of transparency remains unclear. I will illustrate this by analysing how transparency correlates with five related concepts: 1) communication, 2) interpretability, 3) explainability, 4) information provision, and 5) record keeping and documentation. These concepts are associated with transparency in the four policy acts specified above (referred to together as the ‘AI policy documents’). My analysis intends to demonstrate that the EU legislator should adopt a more coherent vision of the ‘transparency’ terminology in the AI Act. For consistency, I suggest establishing a hierarchy among the related concepts, with transparency as the broadest of them.

How to grant unfettered discretion to the Commission to disregard third parties’ submissions in State aid cases – AG Tanchev Opinion of 3 June 2021 in Tempus

We knew that third parties’ rights in State aid assessment procedures are virtually non-existent – as has been deplored for many years (see e.g. here or here) – but Advocate General Tanchev’s Opinion of 3 June 2021 in Case C-57/19 P Commission v. Tempus Energy Ltd and Tempus Energy Technology Ltd (the Opinion) would effectively grant the Commission unfettered discretion to cherry-pick the information it analyses when deciding on the compatibility of aid measures with the internal market. Besides being illogical in several respects, the Opinion also highlights the intrinsic flaws of State aid procedural rules when it comes to taking into account information from sources other than the notifying Member States themselves.

Background to the case

Without going into the complex details, the Tempus case originates from an action for annulment brought by Tempus Energy Ltd and Tempus Energy Technology Ltd (together ‘Tempus’) against the Commission’s State aid decision of 23 July 2014 (SA.35980). In that decision, the Commission had raised no objection to a capacity mechanism introduced by the UK government remunerating energy operators for ensuring capacity adequacy in Great Britain. Tempus argued notably that the scheme discriminated against demand-side response (DSR) operators by disfavouring them, compared to energy generators, in the auctions and in the capacity contracts for which they were eligible. DSR offers flexibility services by which businesses and consumers can turn up, turn down, or shift electricity demand in real time. On 15 November 2018 (T-793/14), the General Court (‘GC’) upheld Tempus’ action and found that the Commission should have opened a formal investigation into the scheme on the basis of Article 108(2) TFEU (see here for one analysis amongst many others). The GC considered that the observations from third parties, the length of the pre-notification procedure (18 months), and the complexity and novelty of the measure indicated that the Commission should have had doubts about the compatibility of the measure with the internal market. It should therefore have initiated a formal investigation procedure in order to come to a decision with full knowledge of the facts. The GC in particular criticised the Commission for having “simply requested and reproduced the information submitted by the relevant Member State without carrying out its own analysis” (para 114). The Opinion discussed here relates to the Commission’s appeal against that judgment.

Case C-709/20 CG v The Department for Communities in Northern Ireland: A Post-Brexit Swansong for the Charter of Fundamental Rights

Introduction

The United Kingdom withdrew from the European Union on 31 January 2020, and EU law ceased to apply within the state upon the end of the transition period on 31 December 2020. Nevertheless, on 15 July 2021, the Court of Justice of the European Union found that UK authorities are obliged to ensure that a Dutch-Croatian dual citizen in Northern Ireland and her children have the necessary subsistence to live dignified lives. This swansong for the application of the EU Charter of Fundamental Rights to the former Member State results from the conditions on the CJEU’s post-Brexit jurisdiction enshrined in the Withdrawal Agreement. Maria Haag analyses the substance of the judgment in her summary of the case. This post will focus on the Brexit-relevant aspects. It summarises the reasoning on jurisdiction and admissibility, before offering comments that reflect upon whether the specific chronology of the case made it easier for the CJEU to deliver a judgment focused on alleviating the personal difficulties of the claimant and her family through the unprecedented application of the Charter. Despite the transition period providing a legal basis, this result may still be criticised as insufficiently sensitive to the Brexit context of the case.

Summary of the judgment

CG’s dispute arose from her exclusion from access to the UK social assistance benefit of Universal Credit by virtue of Article 9(2) of the 2016 Universal Credit Regulations (Northern Ireland), which limits access to those treated as “habitually resident” in the United Kingdom. CG had a temporary right to reside for five years through the Settlement Scheme contained in Appendix EU of the UK Immigration Rules, which was adopted in anticipation of the UK’s obligations under Part 2 on citizens’ rights of the Withdrawal Agreement. Article 9(3)(d) of the Regulations excluded this residence right from satisfying the condition for access to Universal Credit (paras 29-33). CG claimed that this provision of national law infringed Article 18 TFEU as it discriminated against EU citizens in the same position as UK nationals (paras 36-37). The President of the CJEU showed sensitivity to CG’s position by granting an expedited procedure in light of the “potential risks of violation of the fundamental rights of CG and her children” arising from her destitution and the impossibility of receiving social assistance under national law (paras 40-44).

Case C-709/20 CG – The Right to Equal Treatment of EU Citizens: Another Nail in the Coffin

1. Introduction

Before heading into its judicial vacation for the summer at the end of last week, the CJEU delivered a seminal decision on Union citizenship and the right to equal treatment: CG v The Department for Communities in Northern Ireland. There is much to unpack in this Grand Chamber judgment. First, the case concerns an EU citizen who was granted pre-settled status under UK law during the transition period. What kind of protection does she still enjoy under EU law? Second, the case falls within the ever-growing line of case law on EU citizens’ equal access to social assistance when residing in a host Member State (i.e. in a Member State other than that of their nationality). The CG judgment illustrates the CJEU’s very restrictive interpretation of the right to equal treatment of economically inactive EU citizens, which has been evident since Dano. In a surprising and unprecedented move, the CJEU has turned to the Charter of Fundamental Rights (Charter or CFREU) to fill the gaps left by the dwindling right to equal treatment of economically inactive EU citizens.

Two posts are dedicated in parallel to the decision: while the present post situates the case within the Union citizenship case law, Oliver Garner’s post discusses the Brexit aspects of this case.

German headscarf cases at the ECJ: a glimmer of hope?

On 15 July 2021, the ECJ handed down its judgment in two joined cases, both referred by German courts, regarding the wearing of Islamic headscarves or hijabs at work: IX v Wabe eV and MH Müller Handels GmbH v MJ. These cases have been widely reported under headlines stating that European employers can dismiss employees for wearing a headscarf (e.g. France24, Aljazeera, the Times). The judgment has been criticised for fuelling Islamophobia in Europe. Although the judgment does, indeed, allow employers to ban the wearing of hijabs at work, it does so under certain conditions and contains some positive developments in clarifying its previous judgments in headscarf cases (Achbita and Bougnaoui). In this sense, the judgment presents a small indication that the ECJ is moving – even if very slowly – towards more protection of Muslim hijab-wearing employees.

1. The facts

In Wabe, a Muslim employee of a company running a number of nurseries was asked, on returning from parental leave, to stop wearing her headscarf. During her leave, the company had introduced a neutrality policy requiring employees to refrain from wearing any visible signs of political, ideological or religious beliefs. The employee refused to remove her headscarf and, after two official warnings, was released from work. The company’s neutrality policy did not apply to employees who did not come into contact with customers. The claimant, IX, challenged this as direct religion or belief discrimination and as discrimination on the grounds of gender and ethnic origin.

Müller concerned a Muslim employee of a company which runs a number of chemist shops. On her return from parental leave, the employee wore a headscarf, which she had not done before. Her employer asked her to remove it, as company rules prohibited the wearing of any prominent, large-scale signs of religious, philosophical or political convictions. This rule applied to all shops and aimed to preserve neutrality and avoid conflicts between employees. After she twice refused to remove the headscarf, she received instructions to come to work without it. The claimant, MJ, also challenged these instructions as discrimination.

Why the proposed Artificial Intelligence Regulation does not deliver on the promise to protect individuals from harm

In April 2021 the European Commission circulated a draft proposal called ‘Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (hereinafter AI Regulation). The purpose of the proposed AI Regulation is to harmonise the rules governing artificial intelligence technology (hereinafter AI) in the European Union (hereinafter EU) in a manner that addresses ethical and human rights concerns (p. 1, para. 1.1). This blog post argues that the proposed AI Regulation does not sufficiently protect individuals from harms arising from the use of AI technology. One reason is that policy makers did not engage with the limitations of international human rights treaties and the EU Charter regarding the protection of fundamental rights in the digital context. If policy makers want to achieve their objective of developing ‘an ecosystem of trust’ by adopting a legal framework on ‘trustworthy’ AI (p. 1, para. 1.1), they need to amend the draft AI Regulation. Individuals will find it hard to place trust in the use of AI technology if the Regulation does not sufficiently safeguard their interests and fundamental rights. This contribution uses the prohibition of discrimination to illustrate these concerns. First, it will be shown that international human rights law inadequately protects human diversity. By overlooking this issue, policy makers failed to detect that the representation of individuals in AI mathematical models distorts their identities and undermines the protection of human diversity. Second, it will be demonstrated that defining discrimination by reference to adverse treatment of individuals on the basis of innate characteristics leads to insufficient protection of individuals in the digital context.

Context

The draft AI Regulation arguably does not deliver on its promise ‘to ensure a high level of protection’ for fundamental rights (p. 11, para. 3.5), even if it is designed to be consistent with the European Union Charter of Fundamental Rights (hereinafter EU Charter) and existing EU legislation (p. 4, para. 1.2). Admittedly, the draft AI Regulation states that ‘the proposal complements existing Union law on non-discrimination with specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems complemented with obligations for testing, risk management, documentation and human oversight throughout the AI systems’ lifecycle’ (p. 4, para. 1.2). But the drafters do not engage with the concern, now widely shared among scholars, that the prohibition of discrimination as formulated in international human rights treaties and in domestic law predating the development of AI does not sufficiently protect individuals against all relevant harms. The experts serving on the Council of Europe Ad hoc Committee on Artificial Intelligence raised the insufficiency of existing fundamental rights guarantees in a report released in 2020 (pp. 21-22, para. 82). The concerns about gaps in the legal protection afforded by international and domestic human rights provisions are also relevant for the EU Charter, as it is modelled on the European Convention on Human Rights and other international human rights treaties. All EU Member States are party to the United Nations human rights treaties containing provisions prohibiting discrimination, including the Convention on the Rights of Persons with Disabilities (pp. 6-7, European Parliament), the International Covenant on Civil and Political Rights, the International Covenant on Economic, Social and Cultural Rights, the Convention on the Elimination of All Forms of Discrimination against Women (hereinafter CEDAW), the International Convention on the Elimination of All Forms of Racial Discrimination, and the Convention on the Rights of the Child (p. 24, European Union Agency for Fundamental Rights and Council of Europe).
