MAKING AI’S TRANSPARENCY TRANSPARENT: notes on the EU Proposal for the AI Act
Transparency is one of the core values promoted by the EU for the development, deployment, and use of AI systems. Since the start of the policy process to regulate AI, all the relevant documents (the Ethics Guidelines for Trustworthy AI issued by the High-Level Expert Group on AI in December 2018 (AI-HLEG Ethics Guidelines for Trustworthy AI), the White Paper on AI issued by the European Commission in February 2020 (EC White Paper on AI), and the European Parliament's Framework of ethical aspects of AI, robotics and related technologies of October 2020 (EP Report on AI Framework)) have included transparency in the ethical or legal frameworks they respectively suggested.
The first EU legislative proposal on the matter – the European Commission's proposal for a Regulation on Artificial Intelligence of April 2021 (AI Act) – follows the previous policy direction towards transparency of AI and includes several requirements explicitly devoted to it: Article 13 ('Transparency and information provision') and Article 52, which makes up the whole of Title IV ('Transparency obligations for certain AI systems'). Yet the precise meaning of transparency remains unclear. I will illustrate this through an analysis of how transparency relates to the following concepts: 1) communication, 2) interpretability, 3) explainability, 4) information provision, and 5) record-keeping and documentation. These concepts are associated with transparency in the four specified policy acts (referred to together as the 'AI policy documents'). My analysis intends to demonstrate that the EU legislator should have a more coherent vision of the 'transparency' terminology in the AI Act. For consistency, I suggest establishing a hierarchy among the related concepts, with transparency as the broadest one.
Transparency – Communication
As mentioned above, there are two Articles of the AI Act devoted to transparency: the transparency obligation applicable to providers of high-risk AI systems (Article 13) and the transparency obligation applicable to providers of AI systems that interact with natural persons (Article 52). The AI Act does not explain how these two transparency rules relate to each other.
Article 13 (1) specifies:
‘High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.’ (Emphasis added)
Article 52 (1) states:
‘Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.’ (Emphasis added)
These rules bear similar names but embody different concepts. For high-risk AI systems, transparency refers to interpretability; for interactive AI systems, it refers to communication about the AI's presence (the rule in Article 52(1) also applies to emotion recognition (Article 52(2)), biometric categorization (Article 52(3)), and 'deep fake' systems (Article 52(4))). This might lead to a situation where we label one thing (interpretability) as transparency for one type of AI system (high-risk) and another thing (communication) for the other type (interactive). A lack of legal certainty and inconsistency in legal compliance are possible consequences of such a situation.
Transparency – Interpretability
Article 13 of the AI Act would apply to a substantial part of AI systems – those that present potential high risks to the health and safety or fundamental rights of natural persons. This category includes a large number of applications in important sectors such as healthcare. Here, the aim of AI transparency is to enable users to interpret the system's output and use it appropriately. In addition, the Article allows AI providers to define the relevant type and degree of transparency with a view to achieving compliance with their own and users' relevant obligations. In this way, the EU legislator recognizes not only the intrinsic value of transparency but also its instrumental character – its relevance to other values. This provides more flexibility for legal compliance and makes the transparency requirement more balanced. Yet more guidance on transparency and interpretability is needed.
First, it is not clear whether transparency fully overlaps with interpretability or is a wider category. Article 13 says that transparency is needed for interpretability, but at the same time implies that there are different types and degrees of transparency. However, the AI Act does not explicitly give examples of types of transparency besides interpretability. This makes it unclear for providers of AI systems whether ensuring interpretability is enough to comply with the transparency requirement, and which transparency measures besides interpretability are available. Together with the opacity of the interpretability concept itself (described in the next paragraph), these factors make it challenging for AI providers to fully understand which measures are compatible with the transparency obligation.
Second, the AI Act does not define what exactly the interpretability of an AI's outcome is. Of course, a very strict definition would reduce AI providers' flexibility in choosing the measures appropriate for the type of technology and the context of use. Some guidance on interpretability is nonetheless needed, as scholars and policymakers from different domains have developed distinct views. For example, one definition holds that a model is interpretable when 'one can clearly trace the path that your input data takes when it goes through the AI model.' However, due to self-learning and autonomy, some parts of some algorithmic models are not fully traceable – humans cannot see the whole path of the data from input to output. These opaque AI systems are called 'black-box' models. On this reading, the interpretability obligation could mean that the EU legislator wishes to exclude 'black-box' models from those eligible for use in high-risk AI systems because they are not interpretable (in the described understanding). I do not argue that this is the legislator's intention, because many of the most sophisticated and promising AI technologies are 'black-box' models (for example, neural networks), and their exclusion from the regulatory scope would be too radical and might hinder innovation. This line of thinking is described to demonstrate how the lack of clarification of the 'interpretability' term can misguide AI providers and lead to legal uncertainty.
Another aspect of uncertainty with this 'interpretability' transparency obligation (Article 13) is its relationship with explainability, described in the following section.
Transparency – Explainability
Together with transparency, the explainability of AI has been on policymakers' radar since the first policy paper – the AI-HLEG Ethics Guidelines for Trustworthy AI. The document stated that the explicability of AI is one of the principles of its ethical use. The principle (described in Chapter I, Section 2 of the Guidelines) was linked to the transparency requirement and covered explainability measures, including an explanation of 'why a model has generated a particular output or decision (and what combination of input factors contributed to that).' When such an explanation is not possible (for example, for 'black-box' models), other explainability measures were suggested: traceability, auditability, and transparent communication on system capabilities. Further, the EP Report on AI Framework established the transparency requirement for high-risk AI systems and specified in its Article 8 that these systems are 'required to be developed, deployed and used in an easily explainable manner so as to ensure that there can be a review of the technical processes of the technologies' (emphasis added). Although the two acts describe explainability in different ways, both include it as a concept strongly tied to transparency.
In the AI Act, the explainability element has disappeared. In this context, the question of the relationship between interpretability and explainability becomes even more important. Since one concept is included as an obligation for AI providers (interpretability) and the other is not (explainability), it is necessary to understand the difference between them – which is not the easiest task.
Some people working with or on AI use the terms interpretability and explainability interchangeably; others distinguish them. For example, some define explainability as meaning 'that you can explain what happens in your model from input to output.' As I demonstrated above, a very similar definition is attributed to interpretability. However, if we understand explainability this way and interpretability differently – for example, as 'the degree to which a human can understand the cause of a decision' – then the obligation the AI Act imposes on AI providers seems more feasible and applicable even to 'black-box' models. An example illustrates the difference: 'Imagine you are building a model that predicts pricing trends in the fashion industry. The model might be interpretable — you can see what you are doing (in a sense that humans can check what is done/selected by them (training and validating datasets, initial algorithmic parameters)). But it is not explainable yet. It will be explainable once you dig into the data and features behind the generated results (which means looking into the self-learning part of an algorithm or opening a 'black-box'). Understanding what features contribute to the model's prediction and why they do is what explainability is all about.' On this reading, the AI Act might provide a good solution to the opaqueness of AI systems. If opening the black box is needed for explainability but not for interpretability, then the exclusion of explainability from the AI Act seems justifiable. It would mean that 'black-box' models are not excluded from the regulatory scope and are eligible for application in high-risk AI systems under the AI Act. However, this is only one possible understanding of the legislator's intention, and many others can exist.
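The pricing-model example above can be sketched in code. This is a minimal illustration under assumed definitions, not anything the AI Act prescribes: the dataset, features, and model choice are hypothetical, and "explainability" is approximated here by scikit-learn's built-in feature importances.

```python
# A hypothetical sketch of the interpretability/explainability distinction
# discussed in the quoted fashion-pricing example. All names and data are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical pricing data: features a human chose and can inspect.
feature_names = ["season_index", "material_cost", "brand_popularity"]
X = rng.random((200, 3))
y = 50 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, 200)  # simulated price

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# 'Interpretability' in the quoted sense: humans can check what THEY supplied,
# i.e. the training data, the selected features, the initial parameters.
print(model.get_params()["n_estimators"])  # inspectable configuration
print(feature_names)                        # inspectable inputs

# 'Explainability': digging into what drives the generated results, e.g.
# which features contribute to the model's predictions and by how much.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

In this toy setup, the human-chosen ingredients are visible without opening the model, while the feature-importance step looks inside the learned behaviour, which is roughly the line the quoted example draws between the two concepts.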
That is why the exact approach to distinguishing between explainability and interpretability needs to be clarified by the EU legislator in order to guide AI providers in their compliance.
Transparency – Information Provision
Article 13 of the proposed AI Act contains two main types of obligations – transparency and the provision of information to AI users. The Article establishes the form the information must take (concise, complete, correct, clear, relevant, accessible, and comprehensible) and its types (instructions for use, contact details of the AI provider, the characteristics, capabilities, and limitations of performance of an AI system, its foreseeable changes and expected lifetime, and human oversight measures). The right of AI systems' users to receive this information, together with their own obligations (such as use in accordance with the received instructions and monitoring of the AI's performance), is crucial because it integrates the user into the legal framework around AI and thus increases the accountability of all the actors involved.
But again, the relationship between the two obligations – information provision and transparency – is not clear. The text of the legal provision implies that the concepts are closely related but still separate: the Article's title names them separately, and different parts of the Article are devoted to each (paragraph 1 to transparency; paragraphs 2 and 3 to information provision). However, the provision of information is traditionally one of the main elements of transparency. The EU General Data Protection Regulation (GDPR) is one example of this relationship. Moreover, all the AI policy documents preceding the AI Act included the provision of information in the general transparency obligation (Chapter II, Section 1.4 (as part of the communication obligation) of the AI-HLEG Ethics Guidelines for Trustworthy AI; Section D(c) of the EC White Paper on AI; and Article 8(1)(f) of the EP Report on AI Framework). To avoid a lack of clarity in applying the two concepts, I argue that the AI Act should continue the line taken in the previous AI policy documents and treat information provision as part of the general transparency obligation.
Transparency – Record-Keeping and Documentation
The same applies to the relationship between transparency and the obligations of documentation and record-keeping. Here, these obligations are divided even more explicitly because they are stated in different Articles of the proposed AI Act: Article 11 for technical documentation, Article 12 for record-keeping, and Article 13 for transparency and information provision. On the one hand, dividing the obligations across different Articles means that they are more detailed and their individual importance is more visible.
On the other hand, in the previous AI policy documents, one or both of these obligations were included in the transparency obligations. In the AI-HLEG Ethics Guidelines for Trustworthy AI, the documentation obligation was part of traceability, which in turn was an element of transparency (Chapter II, Section 1.4). In the White Paper on AI, one of transparency's elements is the keeping of records, documentation, and data (Section D(b)). The EP Report on AI Framework, in its Article 8 (devoted to safety, transparency, and accountability), included the requirement for high-risk AI technologies to be developed, deployed, and used 'in a transparent and traceable manner so that their elements, processes, and phases are documented to the highest possible and applicable standards.' I find the inclusion of record-keeping and documentation in the transparency obligation more reasonable. In the end, transparency is needed for the ex ante and ex post control mechanisms that ensure that AI technologies are safe, accurate, and respect fundamental rights. Keeping records and documentation of all the steps taken during an AI's development and use is a good way to organize such control and thus to ensure transparency. For this reason, the AI Act should recognize the relationship between transparency, documentation, and record-keeping.
AI is becoming an integral part of our information society. But turning the potential of AI into reality can only happen if we trust the technology, and transparency is a fundamental element of trust. The more we can observe how AI is developed, deployed, and used, the more information we have and the better we understand it, the more we can control AI and ensure the proper accountability of all the subjects involved. To achieve AI transparency, we must first agree on a common understanding of it. This contribution has demonstrated that the proposed AI Act is far from reaching this goal.
The act contains two different types of transparency for different types of AI technologies (interpretability for high-risk AI systems and communication for interactive AI systems) – this alone makes the terminology inconsistent.
Another inconsistency is the AI Act's use of the term 'interpretability' as the main element of transparency, while all the acts that preceded the proposed AI Act used explainability instead. Even if interpretability may be the concept that better fits opaque and complex AI technologies, the AI Act does not explain what exactly it means or how it relates to explainability. This lack of clarity creates uncertainty as to whether all types of AI technologies can fulfill the requirement of transparency (interpretability). Different views on interpretability, explainability, and the relationship between the two concepts exist. Some may consider that, as 'black-box' models are not interpretable, they cannot comply with the transparency requirement imposed by the proposed AI Act. I argue that this consequence is not welcome because the exclusion of sophisticated and promising 'black-box' AI technologies from the legal scope would hinder innovation.
The last inconsistency with the 'transparency' term relates to the concepts it covers. Elements such as information provision, record-keeping, and documentation strongly relate to transparency, and the three previous AI policy documents suggested that transparency should cover all of them. The AI Act also includes all these concepts but, in contrast to the previous acts, does not establish the relationship among them.
I argue that a possible solution to the described inconsistencies is to establish a hierarchy among the concepts, with transparency as the broadest category. In a legal sense, transparency can be seen as a principle of the AI Act, and the other relevant elements analyzed here as measures to ensure that principle. This hierarchy would follow an approach already existing in legislation. For example, the GDPR establishes the transparency principle (Article 5) and the obligation to provide information as a measure to ensure that principle (Articles 12-15). It would also follow the approach taken in the previous AI policy documents, which likewise treat transparency as the wider category.
In addition to establishing the hierarchy, concepts such as interpretability and explainability should be clarified, at least as to the direction of the measures they cover. The suggested improvements to the AI Act would help AI providers understand what they should do to comply with the transparency measures, which in turn would increase accountability and trust.
Dear A. Kiseleva,
I would like to thank you for this really well-written blog post.
Measuring the distance among concepts like "interpretable output", "interpretable machine learning", and "explainable machine learning", and exploiting it to clarify the meaning of "transparency" in the AI Act, is a challenging but essential task.
In the specialized literature a vast spectrum of opinions exists. For instance, in "Metrics, Explainability and the European AI Act Proposal", Multidisciplinary Scientific Journal, January 2022, Sovrano et al. read the AI Act as clearly pointing towards explainable machine learning (this is not surprising, considering Sovrano's position with respect to the GDPR), whereas in "Towards an Accountability Framework for AI: Ethical and Legal Considerations", Research Brief – Technical University of Munich, February 2022, Boch et al. are much more sceptical, reporting on "Legal requirements on explainability in machine learning" by Bibal et al. and "Regulating Explainable AI in the European Union" by Ebers et al.
A couple of interesting sources for deepening the discussion on interpretability vs explainability are "Interpretability and Explainability: A Machine Learning Zoo Mini-tour", cs.LG arXiv, 2020, and "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead", stat.ML arXiv, 2019.
My opinion is that, within the AI Act, "interpretable output" has nothing to do with interpretable or explainable machine learning, but simply means 'being able to understand the output (format) as much as is needed to use it'. For instance, a classification algorithm can output a real number between 0 and 1; a value equal to or above 0.5 has to be interpreted as 1, and a value below that threshold as 0. This example shows how to interpret the output, but says nothing about the algorithm that originated it.
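This reading of "interpretable output" can be sketched in a few lines. The function below is a hypothetical illustration of the threshold rule from the example above; it treats the model as a black box and concerns only its output format.

```python
# A minimal sketch of interpreting a classifier's raw output (a real number
# in [0, 1]) without knowing anything about the algorithm that produced it.
# The 0.5 threshold is the one given in the example; the function name is
# purely illustrative.
def interpret_output(score: float, threshold: float = 0.5) -> int:
    """Map a raw score to a class label: >= threshold means 1, below means 0."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    return 1 if score >= threshold else 0

print(interpret_output(0.73))  # -> 1
print(interpret_output(0.49))  # -> 0
print(interpret_output(0.5))   # -> 1 (a value equal to the threshold counts as 1)
```

Nothing in this mapping reveals how the score was computed, which is exactly the point: understanding the output well enough to use it is a far weaker demand than interpretable or explainable machine learning.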
Indeed, I think that "transparency" in the AI Act is much closer to the guiding principles of the EU cookies policy than to the 'right to an explanation' that involves other parts of the GDPR.
I agree with you; your interpretation is in line with that of other pieces of legislation. If the intention of the lawmaker was to include interpretability (in the meaning used by Cynthia Rudin) or a right of explanation, this would have been written clearly in the Recitals and in the explanatory memorandum. I am not aware of scholarly opinions interpreting it as an exclusion of black boxes.
Dear Giulio Masetti,
Thank you very much for the kind words about my work, it is very much appreciated.
Also, I am grateful for the thoughtful and useful comments (also for the references)!
Of course, research on the relationship between transparency, explainability, and interpretability, and on what should be the object of these concepts (outcomes, algorithms, processes, or anything else), requires a lot of analysis, and unfortunately the format of the blog does not allow for it. Here I was focusing specifically on how much still needs to be sorted out in the AI Act. However, since my main research interest is AI transparency (more specifically in healthcare), I recently wrote a paper on it with research on the taxonomy of AI transparency in law and data science. Here it is: https://doi.org/10.3389/frai.2022.879603 I hope you find it interesting and useful, and I would be glad to further discuss the topic with you.
Dear Chiara Gallese,
Thank you very much for your comment.
I also agree that this is most probably not the intention of the lawmaker, since the list of AI technologies in the Annex also covers those usually associated with the black-box issue (such as deep machine learning).
Recently I also published a paper that explores in more detail the relationship between transparency, explainability, and interpretability: https://doi.org/10.3389/frai.2022.879603. Maybe you will find it useful, and I would be glad to discuss it with you.