EU Draft Artificial Intelligence Regulation: Extraterritorial Application and Effects
As is well known, AI has become part of everyday life. Although AI may make human life more efficient and easier, it may also have negative impacts: it has the potential to cause many harms, such as discrimination, privacy violations, and safety and security risks. These concerns led to legislative action on AI. In April 2021, the European Commission (EC) published the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Proposed AIA), which follows a “risk-based” regulatory approach (analysed here in the blog). The EC aims to create a human-centric, sustainable, secure, and trustworthy framework for AI applications. The Proposed AIA is still being discussed and negotiated by the European Parliament and the Council of the EU; the Council presented the Presidency Compromise Text (AIA) in November 2021. Although the AIA will undergo further changes during the legislative procedure, it gives a clear idea of the future legal framework for AI systems. As with the General Data Protection Regulation (EU) 2016/679 (GDPR), one of the most significant effects of the AIA will be its extraterritorial scope, which will impose substantial obligations on non-EU businesses. The AIA will therefore not apply only to providers and users established in the EU: in certain situations, determined under Article 2 AIA, the rules will also apply to providers and users established outside the EU. This blog post focuses on some of the key criteria and considerations on the extraterritorial scope of the AIA, as well as its impact on users and providers outside the EU.
What would count as an AI system?
The Proposed AIA defines AI systems broadly, capturing software embodying machine learning approaches, rule-based approaches, and also traditional statistical approaches. Through this broad definition, which is based on one already used by the Organisation for Economic Co-operation and Development (OECD), the Proposed AIA intends to cover future AI technological developments. However, many observers criticised the Proposed AIA’s definition of an AI system as too broad, and the Council Presidency therefore revised it as follows:
“Artificial intelligence system (AI system) means a system that
- Receives machine and/or human-based data and inputs,
- Infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and
- Generates outputs in the form of content (generative AI systems), predictions, recommendations or decisions, which influence the environments it interacts with”
As for the techniques and approaches, Annex I of the AIA covers:
- Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning,
- Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems,
- Statistical approaches, Bayesian estimation, search and optimization methods.
This substantial change brings more clarity to the definition, with explicit reference to the essential capability an AI system should have: the ability to determine how to achieve a given set of human-defined objectives using learning, reasoning, or modelling. The amended definition will help prevent classic software systems that are not considered AI from falling within the scope of the AIA. The Council Presidency declared that the definition was modified “to ensure more legal clarity and to better reflect what should be understood by an AI system”. However, despite intense criticism of the statistical approaches, Annex I is unchanged. The definition still includes an extensive variety of methods, which may create uncertainty and doubt about scope for businesses. For instance, some observers claimed that “Bayesian techniques are not artificial intelligence, but well-proven mathematical formulas” and recommended deleting “Bayesian estimation” from Annex I of the AIA. Nevertheless, this does not mean that every technology defined as an AI system under the AIA will be subject to the obligations: under the AIA’s risk-based approach, most of the obligations are foreseen for high-risk AI systems only.
The AIA captures two main actors: providers and users of AI systems. In addition, certain obligations have been introduced for importers, distributors, product manufacturers, and authorised representatives of providers. The AIA defines a provider as a “natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge”. The provider could be a third-party provider or the organisation that develops the AI system. Where the provider of a high-risk AI system is established outside the EU, appointing an authorised representative established in the EU will be mandatory, as under the GDPR.
The AIA will also apply to users, defined as “any natural or legal person, public authority, agency or other body using an AI system under its authority”. According to the Proposed AIA, the provisions do not apply to personal non-professional uses. However, the Council of the EU revised the definition of the user by removing the phrase “except where the AI system is used in the course of a personal non-professional activity” from the Proposed AIA. Revised Article 29 instead states that the obligations on users of high-risk AI systems will not apply in the course of a personal non-professional activity. Thus, the transparency obligations, which will apply to certain minimal-risk AI systems, may apply to personal non-professional uses. As for exemptions, the AIA does not apply to AI systems developed or used exclusively for military purposes. The Council Presidency has also introduced a new exemption for research and development activities: the AIA will not apply to AI systems, and their outputs, used for the sole purpose of research and development.
Where an importer cannot be identified, providers of high-risk AI systems established outside the EU must appoint an authorised representative established within the EU to perform the tasks specified in the mandate received from the provider. The authorised representative must be appointed by written mandate before the AI system is made available on the EU market.
Per Article 25 AIA, the representative will be empowered to carry out the following tasks:
- Keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national competent authorities and national authorities,
- Provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system,
- Cooperate with competent national authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system.
The AIA will be another example of the phenomenon known as the Brussels effect, a term first coined by Anu Bradford in 2012 to refer to the “EU’s unilateral power to regulate global markets”. The AIA’s provisions are applicable even outside the borders of the EU: the main criterion is whether the impact of the AI system occurs within the EU, regardless of where the provider or user is established. Article 2 of the AIA provides:
“This Regulation applies to:
- Providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are physically present or established within the EU or in a third country,
- Users of AI systems who are physically present or established within the EU,
- Providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the EU.
- Importers and distributors of AI systems,
- Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark,
- Authorised representatives of providers, which are established in the Union.”
Under the AIA, provider obligations fall on the natural or legal person who takes responsibility for first making an AI system available on the EU market, or for supplying an AI system for first use, directly to the user or for own use, on the EU market for its intended purpose. This is the case regardless of whether that person designed or developed the system, and regardless of whether that person is established within the EU or in a third country. In addition, providers established in a third country will be subject to the obligations under the AIA where the output produced by the system is used in the EU. Although “output” is not defined under the AIA, the definition of an AI system refers to outputs in the form of content (generative AI systems), predictions, recommendations, or decisions which influence the environments the system interacts with. Recital 6 of the AIA gives text, video, and images as examples of the content of generative AI systems.
Consequently, the AIA will affect non-EU businesses even if they have no legal presence in the EU. This global scope reflects the EC’s deep concern about the potential for harm embedded in AI systems and their potential impact on the rights and freedoms of individuals. As Recital 10 of the AIA states: “In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union”. Furthermore, within the existing scope of the AIA, the Brussels effect will extend to global AI markets. As Bradford notes, “to avoid the cost of complying with multiple regulatory regimes around the world, these companies often extend EU rules to their operations globally”. Multinational AI providers may welcome having one set of rules. However, the extraterritorial scope will raise compliance challenges for providers of high-risk AI systems established outside the EU, who may not know, or be able to determine, where the outputs of their AI systems are used. It is possible that the output of an AI system will be used in the EU by a user without the provider’s knowledge.
On the other hand, the GDPR, a key example of the Brussels effect, could provide some guidance on the potential effect of the AIA. Since the GDPR entered into force, it has become the gold standard for data protection. According to statistics from the United Nations Conference on Trade and Development (UNCTAD), 137 out of 194 countries have put in place legislation to secure the protection of data and privacy. As Bradford stated, “we see over 100 countries today with data privacy rules modelled on the GDPR”. It is clear that most of these data protection laws are inspired by the GDPR.
The AIA is expected to have a significant impact on the AI industry. It sets out administrative fines that depend on the severity of the infringement and can be substantial (potentially reaching 6% of global turnover for the most serious violations) for AI companies across the globe. With its extraterritorial scope, the AIA, the first comprehensive legal framework in this field, has the potential to become the standard for global AI governance.
Obviously, a standard for global AI governance must be a global effort. Investment in the EU AI industry is nowhere near the size of that in the United States or China (only four European companies are in the top 100 global AI start-ups). Therefore, during the legislative procedure, in which the European Parliament and the Council of the EU must adopt the text jointly and which could easily take several years before the new requirements apply, the responses of the AI community should be considered, especially on contested issues such as the definition of an AI system, the scope of the AIA, and high-risk AI systems.