The Palimpsest of Conformity Assessment in the Proposed Artificial Intelligence Act: A Critical Exploration of Related Terminology

The European Commission’s (EC) Proposal for a Regulation laying down harmonized rules on artificial intelligence (AI Act) has drawn extensive attention as the ‘first ever legal framework on AI’, resuming last year’s discussions on the AI White Paper. The aim of the Proposal and the mechanisms it encompasses is the development of an ecosystem of trust through the establishment of a human-centric legal framework for trustworthy AI (Explanatory Memorandum of the Proposal, p.1). On the first page of the Memorandum alone, the word trust appears five times, and the actual text of the Proposal refers to the idea of trust at least ten times, mainly in the recitals. This comes as no surprise given that trust is a core component of the European Single Market, and the legislative action aims ‘to ensure a well-functioning internal market for artificial intelligence systems’ (Explanatory Memorandum, p.1). The ‘importance of creating the trust that will allow the digital economy to develop across the internal market’ is also stressed in Recital 7 of the EU General Data Protection Regulation (GDPR).

For individuals to trust that AI-based products are developed and used in a safe and compliant manner, and for businesses to embrace and invest in such technologies, a series of novelties have been introduced in the proposed Act. These novelties include, but are not limited to, i) the classification of AI systems according to the level of risk stemming from them (unacceptable, high, low, and minimal), and ii) the legal requirements for high-risk AI systems. With a view to ensuring compliance with these requirements and ‘ensuring a high level of trustworthiness’ (Recital 62 of the proposed Act), the tool proposed by the EC is the conformity assessment procedure. A table which clearly schematizes this procedure was created by the AI Regulation Chair-MIAI in April 2021 and was included in a previous blogpost on the European Law Blog.

In our analysis, we briefly explore the legislator’s choice of the conformity assessment procedure, introduced in the proposed Act, as a means to increase trust. In this light, we identify a two-fold challenge. On the one hand, ‘conformity’, despite its well-established usage in product compliance legislation, may acquire an entirely new meaning, owing to the specificities and intangibility of AI. On the other hand, this choice could potentially conflict with a) other parallel assessments and requirements imposed on technology providers, namely the data controllers’ obligations as established after the advent of the GDPR, and b) previous definitions and conformity assessment procedures implemented for other products.

To study this topic and highlight its complexity, we use EU product compliance legislation, the proposed Artificial Intelligence Act and the GDPR as our reference points. Initially, we investigate the origins of the conformity assessment and its definition. We then decipher the scope of ‘conformity’ by comparing it to three other closely related terms found in the text of the proposed Act and in the GDPR: ‘compliance’, ‘impact’, and ‘accountability’. These terms have been selected for different reasons. On the one hand, conformity, accountability, and impact all share the element of compliance. Accountability, as a principle defined in the GDPR (Article 5(2)), mirrors the data controller’s responsibility for, and his/her ability to demonstrate, compliance. Impact, as the impact of the envisaged processing operations on the protection of personal data (Article 35(1) GDPR), constitutes – like conformity in the proposed Act – the object of an assessment, which should include the measures, safeguards and mechanisms envisaged for mitigating risk and for demonstrating compliance with the Regulation. On the other hand, compliance has been selected not only because it permeates all the other terms, but also due to its semantic affinity to the term conformity, which often gives the impression that the two terms can be used interchangeably.

Conformity Assessment and CE Marking: Origins

The concept of a conformity assessment is not a novelty per se; rather, it originates from EU product compliance legislation, along with the CE marking, and it has a long history in the Single Market strategy. Product compliance rules apply to several technology providers, including manufacturers, importers, and distributors.

In general, conformity assessments take place before a product is marketed, in line with the respective product legislation – currently the so-called New Legislative Framework (NLF) – to demonstrate conformity with the applicable legal requirements. Specifically, Regulation (EC) No 765/2008 established the NLF and introduced a harmonized procedure for the accreditation of conformity assessment bodies and for market surveillance in the EU. Article 2(12) of this Regulation defines conformity assessment as ‘the process demonstrating whether specified requirements relating to a product, process, service, system, person or body have been fulfilled’.

Another recent source clarifying the role and procedure of the conformity assessment and the CE marking is the ‘Blue Guide’ on the implementation of EU products rules (summarizing the NLF), adopted in 2016. The Guide affirms that the CE marking is the visible consequence of a whole process through which a product is shown to be in conformity with the Union harmonization legislation (e.g., sectoral legislation), a process which further includes signing the ‘EU Declaration of Conformity’. By signing this mandatory document, technology providers take full responsibility and declare that their product conforms to all applicable Union legislative requirements and that the appropriate conformity assessment procedures have been successfully completed.

The Procedure of the ‘Conformity Assessment’ in the Proposed AI Act

The Proposal reiterates the concepts of conformity assessment, CE marking and EU declaration of conformity in the context of high-risk AI systems under Title III. It defines conformity assessment in the context of AI under Article 3(20) as ‘the process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled’. It is formulated as a horizontal clause, intended to validate the degree to which the AI system is consistent with the legal requirements set out in the corresponding Title and Chapter of the Proposal. These requirements concern data governance, documentation and record-keeping, transparency and the provision of information to users, human oversight, robustness, accuracy, and security. To this end, the Explanatory Memorandum describes the conformity assessment as a comprehensive, ex-ante procedure involving internal checks, combined with strong ex-post enforcement, amounting to an effective and reasonable solution (p.14).

Along with the definition of the conformity assessment comes its materialization and externalization, i.e., the affixing of the CE marking. Pursuant to Article 3(24) of the Proposal, the CE marking of conformity is defined as ‘a marking by which a provider indicates that an AI system is in conformity with the [above-mentioned] requirements […]’. In other words, the CE marking, which essentially demonstrates the conformity of the AI system with the standards set in the EEA (here, the legal requirements for AI systems), is a tangible embodiment and, thus, the result of a successful conformity assessment procedure, as we saw earlier.

Conformity as opposed to compliance

The next paragraphs draw direct comparisons between ‘conformity’ and the three other terms (‘compliance’, ‘impact’, ‘accountability’). According to the Oxford English Dictionary (OED), conformity (not necessarily within the legal context) can mean ‘action in accordance with some standard, e.g., with law, order, wishes, fashion; compliance, acquiescence’ and typically appears in the constructions ‘conformity with’ or ‘conformity to’. Compliance, in contrast, is ‘acting in accordance with, or the yielding to a desire, request, condition, direction, etc.; a consenting to act in conformity with’.

One of the key objectives of the Proposal, according to its legislative financial statement, is ‘to ensure legal certainty […] making clear what essential requirements, obligations, as well as conformity and compliance procedures must be followed to place or use an AI system in the Union market’ (p.91). It can thus be argued that the word conformity cannot be used interchangeably with the word compliance, especially when conformity carries sectoral undertones.

The scope of compliance appears to be broader than that of conformity. In this regard, compliance can be understood as overall abidance by the regulatory framework; in other words, non-compliance is the failure to comply with any applicable laws or standards. Conformity, by contrast, can be seen as compliance with the specific requirements envisaged in a particular framework. The impact assessment accompanying the Proposal offers clarification on this point: while compliance entails that users are already bound by fundamental rights legislation, it is conformity which adds to the existing framework by addressing gaps in the material scope as regards the obligations of technology providers. Lastly, the narrower scope of conformity is evident in the role of voluntary harmonized technical standards: these can provide a legal presumption of conformity with the requirements, constituting an important means of helping providers of AI systems reach and demonstrate overall legal compliance.

Conformity as opposed to impact

A comparison between impact and conformity is specifically inspired by the process of the data protection impact assessment (DPIA) in Article 35 of the GDPR.

Impact, in general, can mean the ‘effective action of one thing or person upon another’, the ‘effect of such action’ or ‘influence’. In the EU data protection legal context, on the one hand, the DPIA process, as an accountability tool, obliges the data controller to carry out a systematic, technical-legal self-assessment[1] before initiating the processing of personal data, with emphasis on the estimated impact stemming from the high risks of the processing. The impact assessment contains a systematic description of the envisaged processing operations, a justification of the necessity and proportionality of such processing, and an assessment of the risks to the rights and freedoms of data subjects, along with mitigating measures. The choice of the term impact focuses precisely on the uncertain future and the envisaged consequences (perceived effects) for the fundamental right to data protection and other interrelated rights. Consequently, the impact assessment is a ‘living instrument’ characterized by continuity, monitoring and review.

The conformity assessment, on the other hand, as a procedure for providers of high-risk AI systems, constitutes an ex ante tool. Combined with the CE marking, conformity assessments prima facie create the impression that their role is exhausted before the processing of personal data, or the operation of the AI system more generally, begins. The choice of the term conformity alludes to an anterior control mechanism, one which may not necessarily be updated as the conforming AI system adapts itself (e.g., self-learning AI algorithms). The term suggests a rubber-stamping action, in which the continuity of conformity is not visibly guaranteed.

In sum, the conformity assessment procedure for AI systems is a limited, formal control, performed prior to the deployment of the system and aimed at demonstrating the system’s compatibility with the standards of the internal market, thereby reflecting a commercial and economic rationale. In contrast, the impact assessment under Article 35 GDPR relates to the legal but also the societal aspects of the processing and is more comprehensive and substantive, addressing the risks and impacts of high-risk processing operations.

The limited scope of the conformity assessment could, in principle, be explained by its interplay with the DPIA process, an obligation from which data controllers of high-risk AI systems are not exempt where they engage in personal data processing operations; rather, they remain accountable and must demonstrate compliance. The interrelation between the two instruments and the obligations stemming therefrom is yet to be clarified, requiring further research on the legal positioning of AI systems.

Conformity as opposed to accountability

Accountability, according to the OED, may signify ‘the quality of being accountable; liability to give account of, and answer for one’s conduct, performance or duties; responsibility’. Similarly, the legal definition of accountability comprises two consecutive elements: a) the obligation of a natural or legal person to explain why it has acted in a certain way, and b) the relationship in which this natural or legal person is answerable to another. In the context of data protection law, the principle of accountability imposes a twofold legal obligation on the data controller: first, to take appropriate and effective measures to implement the data protection principles, and second, to demonstrate, upon request, that appropriate and effective measures have been taken, thus providing evidence thereof.

As an illustration, Recital 38 of the Proposal stresses, primarily in the context of law enforcement, that accuracy, reliability, and transparency are particularly important to ‘avoid adverse impacts, retain public trust and ensure accountability and effective redress’ (p.27). Moreover, within the context of the quality management system for high-risk AI systems proposed in Article 17(1)(m) of the proposed Act, an ‘accountability framework setting out the responsibilities of the management and other staff’ is further envisaged to ensure compliance. Accountability is mentioned again in Article 54(1)(h) with respect to retaining logs for the processing of personal data in the context of AI regulatory sandboxes.

It is, therefore, suggested that the conformity of AI systems with the legal requirements is an essential step towards a well-founded accountability framework, allocating responsibilities among the diverse actors in the AI supply chain. At the same time, the term accountability is likewise broader than conformity: it involves the legal repercussions resulting from the degree of conformity with specific legal requirements, or the lack thereof, but it also goes beyond the conformity obligations by establishing direct obligations on the providers of high-risk AI systems. Put differently, accountability appears to be a regime regulating responsibility (an end), while conformity signifies a procedure for identifying who is responsible (a means to that end). Accountability thus offers a degree of discretion, adaptation and flexibility as compared to ‘command and control’ approaches, including that of the conformity assessment.[2]

Conclusions and Future Research Agenda

An interim conclusion that can be drawn from these comparisons is that the scope of the term conformity is undeniably narrower than that of the terms compliance, impact, and accountability. First, compliance is broader than conformity, since it refers to overall abidance by the regulatory framework, while conformity entails the specific requirements envisaged in a particular framework. Second, what principally differentiates impact from conformity is the future-oriented and predictive character of impact, associated with risks and their uncertain consequences, while conformity mostly refers to the present. Third, while conformity mostly denotes a procedure, accountability is a regime regulating responsibility among actors.

The distinction between these terms is particularly significant, among other situations, when a legal analysis of data processing operations in extensive AI systems is performed, when standardization processes are involved, or when multiple roles converge in one person (e.g., data controller and AI system provider). Lastly, discrepancies may arise in the translation of the draft Act across the official European languages. Only recently has the proposed Regulation become available in all EU languages, and it is worth observing how the different terms are translated and how communication among the different regulatory instruments is achieved. Although the translation is expected to follow the terminology of already established sectoral legislation, the final version of the AI Act may give a whole new meaning to the concept of the conformity assessment procedure and its interrelated terms, due to the partially abstract but also complex legal requirements.

To conclude, trust as understood by the legislators in the Act cannot be achieved without terminological and procedural clarity, especially when technology providers abide by parallel regulatory frameworks.

[1] Dariusz Kloza et al., “Data Protection Impact Assessment in the European Union: Developing a Template for a Report from the Assessment Process,” no. 1 (2020): 1–52, https://doi.org/10.31228/osf.io/7qrfp.

[2] Raphaël Gellert, The Risk-Based Approach to Data Protection (Oxford University Press, 2020), 143.