EU lawmaking in the Artificial Intelligence Age: Act-ification, GDPR mimesis, and regulatory brutality

A few months ago the authors identified, in respective posts kindly hosted on this blog, two phenomena observable in recent EU law: its ‘act-ification’ and its ‘GDPR mimesis’. The first denotes the tendency of the EU legislator, perhaps influenced by its US counterpart, to introduce eponymous ‘Acts’ rather than anonymous, sequentially numbered pieces of legislation. The second describes the GDPR’s heavy-handed influence on all new pieces of EU law that aim to protect individuals from the perceived perils of technology. Before long, both trends were vindicated: they are clearly visible in the Commission’s recent release of a draft Artificial Intelligence (‘AI’) Act. After discussing both trends, we offer a reflection on regulatory brutality and the apparent lack of concern on the EU’s side for legal coherence within domestic legal systems.

Act-ification in the Artificial Intelligence Act 

The ‘act-ification’ of EU law continues unhindered: the new draft follows the pattern of its predecessors, introducing in a parenthesis a short title containing the word ‘Act’. Its full title is ‘Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts’. Once again, the Commission deliberately chose, first, to introduce a short title by which to refer to its legislative initiative, and, second, to include in it the word ‘Act’ instead of, for example, ‘Regulation’, which would have been the obvious choice (see, for example, the GDPR). The draft AI Act is the latest addition to a long series of EU law ‘Acts’, as outlined in our previous blog post. The authors welcome this development, because of the proximity and intimacy it creates between EU law and Europeans, and look forward to a point in the hopefully not-so-distant future when EU law will require Popular Name Tools, as is already the case in US law, where such tools (or tables) are by now necessary to translate the short titles given to many laws (e.g. the PATRIOT Act or the CLOUD Act) into the citations that locate them in the correct section of the U.S. Code.

Continue reading

The EU regulates AI but forgets to protect our mind

Since the publication of the European Commission’s proposal for a Regulation on Artificial Intelligence (AI Act, hereinafter: AIA) in April 2021, several commentators have raised concerns or doubts about the draft. Notably, on 21 June 2021 the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) released a Joint Opinion on the AIA, suggesting many changes to the European Commission’s proposal.

We consider the AIA one of the most advanced and comprehensive attempts to regulate AI in the world. However, we concur with the EDPB and the EDPS that the AIA, in its current version, presents several shortcomings. We identify three main areas where amendment is needed: (i) the regulation of emotion recognition, (ii) the regulation of biometric classification systems, and (iii) protection against commercial manipulation. Partly building upon the thoughtful and comprehensive Joint Opinion of the European supervisory authorities, we will try to make a case for greater regulatory focus on the nexus between artificial intelligence and the human mind.

Continue reading

The Palimpsest of Conformity Assessment in the Proposed Artificial Intelligence Act: A Critical Exploration of Related Terminology

The European Commission’s (EC) Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act) has drawn extensive attention as the ‘first ever legal framework on AI’, resuming last year’s discussions on the AI White Paper. The aim of the Proposal and the mechanisms it encompasses is the development of an ecosystem of trust through the establishment of a human-centric legal framework for trustworthy AI (Explanatory Memorandum of the Proposal, p. 1). On the first page of the Memorandum alone, the word ‘trust’ appears five times, while the actual text of the Proposal refers at least ten times to the idea of trust, mainly in its recitals. This comes as no surprise, given that trust is a core component of the European Single Market and that the legislative action aims ‘to ensure a well-functioning internal market for artificial intelligence systems’ (Explanatory Memorandum, p. 1). This ‘importance of creating the trust that will allow the digital economy to develop across the internal market’ is also echoed in Recital 7 of the EU General Data Protection Regulation (GDPR).

For individuals to trust that AI-based products are developed and used in a safe and compliant manner, and for businesses to embrace and invest in such technologies, a series of novelties has been introduced in the proposed Act. These include, but are not limited to, (i) the ranking of AI systems according to the level of risk they pose (unacceptable, high, low, and minimal), and (ii) the legal requirements for high-risk AI systems. The tool proposed by the EC for ensuring compliance with these requirements and ‘ensuring a high level of trustworthiness’ (Recital 62 of the proposed Act) is the conformity assessment procedure. A table which clearly schematises this procedure was created by the AI Regulation Chair-MIAI in April 2021 and was included in a previous blog post on the European Law Blog.

In our analysis, we briefly explore the legislator’s choice of the conformity assessment procedure, introduced in the proposed Act, as a means to increase trust. In this light, we acknowledge a two-fold challenge. On the one hand, ‘conformity’, despite its well-established usage in product compliance legislation, may acquire an entirely new meaning owing to the specificities and intangibility of AI. On the other hand, this choice could potentially conflict with (a) other parallel assessments and requirements imposed on technology providers, namely the data controllers’ obligations established by the GDPR, and (b) previous definitions and conformity assessment procedures implemented for other products.

To study this topic and highlight its complexity, we use EU product compliance legislation, the proposed Artificial Intelligence Act, and the GDPR as our reference points. Initially, we investigate the origins of the conformity assessment and its definition. We then decipher the scope of ‘conformity’ by comparing it to three closely related terms found in the text of the proposed Act and in the GDPR: ‘compliance’, ‘impact’, and ‘accountability’. These terms have been selected for different reasons. On the one hand, conformity, accountability, and impact all share the element of compliance. Accountability, as a principle defined in the GDPR (Article 5(2)), mirrors the data controller’s responsibility for, and ability to demonstrate, compliance. Impact, as the impact of the envisaged processing operations on the protection of personal data (Article 35(1) GDPR), constitutes – like conformity in the proposed Act – the object of an assessment, which should include the measures, safeguards, and mechanisms envisaged for mitigating risk and for demonstrating compliance with the Regulation. On the other hand, compliance has been selected not only because it permeates all the other terms, but also because of its semantic affinity to the term conformity, which often gives the impression that the two can be used interchangeably.

Continue reading

Digital Markets Act: beware of procedural fairness and judicial review booby-traps!

On 15 December 2020, the European Commission presented its ambitious Digital Services Act and Digital Markets Act. Those Acts, which at this stage are only legislative proposals for future EU Regulations (see on the ‘act-ification’ of the EU legislative process, here), constitute a major step forward in the regulation of digital markets in the European Union (EU). The DMA is particularly novel and important, as it sets up an ex ante regulatory framework complementary to, yet also fundamentally different from, existing EU competition law provisions. If adopted, it would contribute significantly to shaping the EU’s particular approach to digital markets regulation (though that approach is multi-faceted, as argued here). Although the DMA is well structured and thought through, this blog post submits that it pays insufficient attention to the requirements of the fundamental right to a fair trial (see also here). While the proposal explicitly hints at respect for that fundamental right, it neither acknowledges nor addresses the differences in interpretation that exist between Article 6 of the European Convention on Human Rights (ECHR) and Article 47 of the EU Charter of Fundamental Rights. As a result of those differing interpretations, DMA enforcement does not appear to be compatible with Article 6 ECHR, despite Article 52(3) of the EU Charter treating that provision as a minimum standard of protection within the EU legal order. Now that the EU is once again negotiating accession to the ECHR (see here), one could legitimately ask whether and for how long this could be tolerated. It is submitted, therefore, that ignoring this issue at the negotiation and drafting stage is like inserting a potential booby-trap into the DMA’s institutional design. This post outlines the key features of the DMA before summarising its enforcement framework and addressing the problematic ‘fair trial’ elements underlying it.

Reining in digital gatekeepers

In essence, the DMA seeks to regulate big tech companies in order to prevent them from becoming super-dominant market players. Under EU competition law, becoming dominant without merging with another enterprise is not in itself illegal: Articles 101 and 102 TFEU intervene only in an ex post manner, prohibiting and addressing anticompetitive practices that have already taken place. The DMA would enable the European Commission to intervene before such service providers become too powerful and start abusing a freshly acquired dominant position.

Continue reading

No collective redress against foreign companies in cases of purely financial damage: Case C-709/19 VEB v. British Petroleum

The decision of the Court of Justice of the European Union (CJEU) in Vereniging van Effectenbezitters (VEB) v British Petroleum plc (BP), delivered on 12 May 2021, came as a major blow to Dutch claim associations suing foreign defendants before domestic courts. The CJEU ruled that Article 7(2) of the Brussels I bis Regulation (Regulation No 1215/2012) must be interpreted as meaning that the direct occurrence of a purely financial loss resulting from investment decisions does not allow international jurisdiction to be attributed to a court of the Member State in which the bank or investment firm is located. This is true even if the investment decision was taken on the basis of misleading information from an internationally listed company that was published worldwide. Only the courts of the Member State where a listed company must fulfil its statutory reporting obligations have international jurisdiction on that ground.

Continue reading

Brexit and the free movement of goods: a bitter goodbye to Cassis?

Brexit and the subsequent Trade and Cooperation Agreement (hereafter TCA) marked the beginning of the bumpy and unprecedented road of European disintegration. Fear of losing sovereignty and regulatory control was the driving force behind the UK’s longstanding reluctance towards further European integration and, eventually, its exit from the Union altogether. The Leave campaign deployed its ‘take back control’ slogan and promised divestiture from EU institutions and policies. What does ‘taking back control’ entail? In essence, and although there is no consensus on what precisely the referendum vote implied, we may assume that it means that legislation and regulation affecting the UK should be enacted (or at least believed to be enacted) by the UK. In other words, Brexiteers wished to regain full regulatory autonomy and hoped that leaving the EU would achieve this result. Will this narrative be upheld in practice, though? This post offers some answers drawn from the TCA provisions on trade in goods and technical standardisation. In a nutshell, it shows that, when it comes to trade in goods, the UK, although it has regained the theoretical opportunity to depart from EU harmonising legislation and technical standards, has not regained control de facto and is, in fact, losing opportunities when it comes to technical standardisation and market access.

Continue reading

The Data Governance Act: New rules for international transfers of non-personal data held by the public sector

In November 2020, the European Commission (EC) published its proposal for a Data Governance Act (DGA proposal). Among other aspects, the DGA proposal sets out a legal framework for the re-use of public sector data covered by third parties’ rights, namely data covered by intellectual property (IP) rights, confidential data of a non-personal nature, and personal data. This legal framework aims to unlock the re-use of public sector data that falls outside the scope of the Open Data Directive. While the General Data Protection Regulation (GDPR) regulates international transfers of personal data, the DGA proposal includes rules governing international transfers of non-personal data by a re-user that has been granted access to such data by the public sector. After presenting these rules, this blog post assesses their potential effects on international data transfers and, accordingly, on cross-border trade in data processing activities. It also analyses whether these rules may have a broader impact beyond their scope, in particular on business-to-business (B2B) data sharing and on third countries’ IP and trade secrets regimes.

Continue reading

The Big Brother Watch and Centrum för Rättvisa judgments of the Grand Chamber of the European Court of Human Rights – the Altamont of privacy?

On 25 May 2021 (coincidentally or not, the third anniversary of the entry into application of the General Data Protection Regulation, GDPR), the European Court of Human Rights (ECtHR) delivered its long-awaited Grand Chamber judgments in the applications against the UK and Sweden concerning their mass surveillance regimes. The landmark judgment in Big Brother Watch and Others v UK is the final outcome of the Strasbourg battle of 16 different organisations against the UK government’s mass surveillance regime, which began after the Snowden revelations in 2013. The Chamber judgment was delivered in 2018 (analysed previously by me here and on this blog here).

The Big Brother Watch judgment also has a ‘little sister’, Centrum för Rättvisa v Sweden, an older case against the Swedish intelligence agencies’ laws and their mass surveillance practices. The Chamber judgment in that case is also from 2018, but the application was lodged back in 2008.

After the Grand Chamber judgments came out, Privacy International declared ‘an important win for privacy and freedom for everyone in the UK and beyond’. Admittedly, the Grand Chamber found violations of Article 8 of the European Convention on Human Rights (ECHR; the right to private life) in both cases, thereby overturning the Chamber outcome in Centrum för Rättvisa. The Grand Chamber also took the opportunity to develop the Court’s case-law further, specifically regarding bulk interception regimes; it did not content itself with a mere application of the somewhat outdated Weber and Saravia criteria. The optimism of privacy activists is, therefore, understandable at the outset. However, I believe that the bigger picture (and lesson) emerging from the judgments is far bleaker. I agree with Professor Milanović, who describes the judgments as a ‘grand normalisation of mass surveillance’ and tells us to forget about declaring landmark victories for privacy. I also agree with Professor Ní Loideáin, who calls the Grand Chamber judgments ‘not so grand’. I will try to explain the main reasons why Big Brother Watch, Privacy International, and many other privacy activists and experts have nothing to celebrate.

Continue reading