The European approach to regulating AI through technical standards
Mélanie Gornet, Télécom Paris, Institut Polytechnique de Paris, France
Winston Maxwell, Télécom Paris, Institut Polytechnique de Paris, France
PUBLISHED ON: 16 Jul 2024 DOI: 10.14763/2024.3.1784
Abstract
In December 2023, the European institutions reached a political agreement on the AI Act, a new regulation on artificial intelligence. The AI Act will require providers of high-risk AI systems to test their products against harmonised standards (hENs) before affixing a European Conformity (CE) mark to allow AI products to circulate freely on the European market. The CE mark and hENs are long-established European regulatory tools to deal with product safety and already apply to a wide range of products. To date, however, they have never been used to attest to compliance with fundamental rights, something the AI Act aims to achieve. In this article, we examine the role of hENs and CE marking in the AI Act, and how these product safety regulatory techniques have been expanded to cover the protection of fundamental rights. We analyse the 5 March 2024 CJEU decision and the accompanying Opinion of the Advocate General in the Public.Resource.Org case, which raises questions about democratic processes in standardisation organisations. We show that, unlike compliance with product safety norms, compliance with fundamental rights cannot be certified through the use of technical standards, because violations of rights are too context-specific and require a judicial determination. However, technical standards have an important role to play in encouraging best practices in AI governance.
Licence: Creative Commons Attribution 3.0 Germany
Competing interests: The authors have declared that no competing interests exist that have influenced the text.
Keywords: Artificial intelligence, AI Act, Standards, CE mark, Fundamental rights
Citation: Gornet, M., & Maxwell, W. (2024). The European approach to regulating AI through technical standards. Internet Policy Review, 13(3). https://doi.org/10.14763/2024.3.1784
Introduction
In April 2021, the European Commission revealed its first draft of the future regulation laying down harmonised rules on artificial intelligence (AI),1 also known as the AI Act (European Commission, 2021). The text proposed a legal framework to regulate AI systems and laid down requirements that they should meet. At the time of writing, the three European institutions – the Commission, the Council and the Parliament – have reached an agreement after debating the content in a trilogue phase. The latest version of the text (European Parliament, 2024) was endorsed by the European Parliament but still needs to be adopted by the Council before it is published in the Official Journal of the European Union (OJEU).2
The AI Act is not the first European law on digital technologies: it follows, notably, the adoption of data regulations such as the General Data Protection Regulation (GDPR) in 2016 (Regulation 2016/679), the Data Governance Act (Regulation 2022/868) in 2022, and the Data Act (Regulation 2023/2854) in 2023. The Digital Markets Act (DMA) and the Digital Services Act (DSA) (Regulation 2022/1925; Regulation 2022/2065) were also adopted in 2022 to regulate online platforms. However, the AI Act takes a different route from these texts, choosing to draw inspiration from European product safety rules. In particular, AI systems will require a conformity assessment based on harmonised standards (hENs3), i.e. technical specifications drawn up by European Standardisation Organisations (ESOs) and possessing various legal properties, such as generating a presumption of conformity with the legislation. This conformity assessment procedure will then lead to the European Conformity (CE) marking of the AI product, a seal affixed to show compliance with EU regulations. However, unlike other product safety regulations, the AI Act is intended to protect not only against risks to safety, but also against adverse effects on fundamental rights. Consequently, hENs and CE marking could also apply to the protection of fundamental rights. This extension of the product safety approach to fundamental rights is new and raises difficult questions that this article attempts to address.
In this article, we start by laying out, in part 2, the structure of the AI Act and how it makes use of the product safety regulatory approach to protect fundamental rights. In part 3, we look in more detail at the status of hENs in EU law, and show that although they are considered legal acts, their scope is intended to remain technical, i.e. outside the realm of political judgement. Finally, we highlight, in part 4, the shortcomings of applying hENs and CE marking to the protection of fundamental rights, as well as the legitimacy problem faced by ESOs.
Protecting fundamental rights through product safety tools
The AI Act’s risk-based approach
The AI Act pursues a dual objective of protecting individuals’ fundamental rights4 and enabling the free movement of data and AI systems within the Union. The text classifies AI systems based on their level of risk: unacceptable risk, high risk, limited risk, and minimal risk. “Risk” is understood as the “combination of the probability of an occurrence of harm and the severity of that harm”,5 as stated in Article 3(2) of the AI Act. For limited-risk systems, only transparency requirements apply; for minimal-risk systems no regulatory burden applies; and systems presenting an unacceptable risk are prohibited entirely. The core focus of the AI Act is on high-risk AI systems, for which Annex III provides a non-exhaustive list (art. 6.2). This list can be amended by the Commission if a new use case is found to create high risks (art. 7.1). Systems that are considered high-risk must comply with the requirements set forth in Title III, Chapter 2, in relation to risk management, data and data governance, technical documentation, record keeping, transparency and provision of information to users, human oversight, accuracy, robustness and cybersecurity. Within those requirements, risk management is a key element, particularly when AI is used in high-stakes situations (Schuett, 2023). Providers of high-risk AI systems must establish, implement, document and maintain a risk management system, consisting notably of the identification of known and foreseeable risks, as well as the adoption of appropriate measures to eliminate or mitigate those risks (art. 9). Residual risks must be reduced to a “reasonable” level, dictated by the state-of-the-art (Fraser & Bello y Villarino, 2023).
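To make the risk definition concrete, the following minimal Python sketch shows one way a provider's risk register might combine probability and severity into a single score and flag risks needing mitigation. The scoring formula, field names and threshold are illustrative assumptions only; neither the AI Act nor any harmonised standard prescribes them.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    probability: float  # estimated likelihood of the harm occurring, in [0, 1]
    severity: float     # estimated severity of the harm, on a 0-1 scale

    def score(self) -> float:
        # One simple way of combining the two factors mentioned in art. 3(2);
        # this product formula is an illustrative assumption, not a legal rule.
        return self.probability * self.severity

# Hypothetical entries in a provider's risk register (cf. art. 9).
register = [
    Risk("biased outputs for an under-represented group", probability=0.3, severity=0.8),
    Risk("system unavailable during peak load", probability=0.1, severity=0.4),
]

RESIDUAL_RISK_TARGET = 0.2  # illustrative threshold; what counts as "reasonable" is context-dependent

for risk in register:
    status = "mitigation required" if risk.score() > RESIDUAL_RISK_TARGET else "acceptable residual risk"
    print(f"{risk.description}: score={risk.score():.2f} -> {status}")
```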
CE marking will show that AI systems comply with the regulation
The AI Act establishes an ex ante accountability framework for AI (Castets-Renard & Besse, 2022) in which proof of compliance with general requirements is a prerequisite for the “placing on the market or putting into service” of AI systems (art. 2).
The AI Act is inspired by European product safety regulation based on the so-called New Legislative Framework (NLF). The rules applicable to products under the NLF are explained in an official European Commission (2022a) publication, the Blue Guide. Under the NLF, European legislation6 does not directly define technical specifications, but rather sets out the “essential requirements” that products must meet, leaving providers and manufacturers7 some flexibility as to the means of achieving compliance (CEN, 2019). For a product covered by NLF legislation to enter the European market, it must be CE marked.8 CE marking has a dual use: it allows consumers to benefit from the same – presumably high – level of protection throughout Europe, and it allows the free movement of products within Europe by harmonising legislation. Products bearing the CE mark can be traded in Europe without restrictions (European Commission, n.d.a). Before the development of the CE mark, trade was limited by differences in national product requirements between member states (Hanson, 2005).
Manufacturers are responsible for CE marking. They must check the applicable European legislation and ensure their products meet the essential requirements. They must then carry out the conformity assessment, set up the technical file, issue the EU declaration of conformity, and affix the CE mark to the product (European Commission, n.d.c). The AI Act stipulates that high-risk AI systems must undergo a conformity assessment procedure and, when they are found to be compliant, providers must draw up an EU declaration of conformity and affix the CE mark to the product (art. 16). This conformity assessment procedure is carried out either by a third party or by the provider of the AI system, depending on (i) whether the system falls under a use case listed in Annex III, and (ii) whether the provider has applied hENs (art. 43).
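The routing between internal control and third-party assessment can be summarised as a small decision function. The sketch below only mirrors the two factors mentioned in this paragraph and deliberately simplifies article 43, which distinguishes further between Annex III use cases and between harmonised standards and common specifications; it is an illustration, not a statement of the law.

```python
def assessment_route(listed_in_annex_iii: bool, applies_hens: bool) -> str:
    """Simplified sketch of the routing described above (art. 43); not legal advice."""
    if listed_in_annex_iii and applies_hens:
        # Applying hENs generally lets the provider rely on internal control.
        return "internal control by the provider"
    if listed_in_annex_iii and not applies_hens:
        # Without hENs, third-party involvement may be required for certain use cases
        # (e.g. biometrics); other Annex III use cases still allow internal control.
        return "conformity assessment possibly involving a notified body"
    # Systems outside Annex III covered by other Union harmonisation legislation
    # follow the sectoral procedures applicable to them.
    return "procedure defined by the applicable sectoral legislation"

print(assessment_route(listed_in_annex_iii=True, applies_hens=True))
```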
Harmonised standards will provide a technical means of assessing compliance
In the field of product safety, hENs (European Commission, n.d.b) define the technical requirements that enable a product to comply with the essential requirements set out in a specific product directive or regulation. EU legislation sets out what goals to reach, and hENs define how to reach them (Hernalsteen & Kohler, 2022). A harmonised standard is only one possible way to comply with a legal requirement (European Commission, 2022a, p. 50) and is thus intended to be voluntary like any other standard (Regulation 1025/2012, art. 2(1)), but in practice it is the most important pathway to compliance.
hENs are developed by one of the three ESOs: the European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), or the European Telecommunications Standards Institute (ETSI). If a directive or regulation needs to be supported by hENs, the European Commission issues a standardisation request to one or more ESOs, describing the main topics the standards should cover.9 Once the standards have been drafted by the ESOs and approved, they are generally published in the OJEU.10
hENs are, in this context, seen as a way to operationalise mandatory requirements (Explanatory Memorandum, section 2) while reducing costs (Explanatory Memorandum, section 2.3). Recital 121 of the AI Act further states that “standardisation should play a key role to provide technical solutions to providers to ensure compliance”. Some experts therefore believe that it is in standardisation that the real rule-making will occur (Veale & Borgesius, 2021).
The European Commission issued a standardisation request to the ESOs regarding standards for the AI Act (European Commission, 2023). In the request, the Commission asks the ESOs to cover ten subjects related to the requirements for high-risk systems.11 These topics correspond to the requirements for high-risk AI systems set out in Chapter III, Section 2 of the Act. ESOs are now working on hENs for these topics, as well as on other topics of their own choosing.
Private organisations will draft harmonised standards and assess compliance
European and international standardisation organisations are private associations tasked with developing technical standards. They are composed of experts who have signed a service contract with national standardisation bodies. Experts can come from private companies, research institutes, or public establishments, or work on their own behalf. Anyone can apply to join a national standardisation body to take part in standards development and committee voting, in exchange for membership fees paid by the expert’s institution. Once experts are part of their national standardisation body, they can ask to join the working groups at European or international level. This includes the three ESOs and the three international standardisation bodies: the International Organisation for Standardisation (ISO), the International Electrotechnical Commission (IEC) and the International Telecommunication Union (ITU).
The Vienna and Frankfurt agreements between CEN and ISO, and CENELEC and IEC respectively, facilitate the exchange of information between the organisations and avoid duplication of work (ISO & CEN, 2016; CENELEC, 2017). This collaboration extends to the adoption of standards, since ISO and IEC standards can be incorporated into the catalogue of European standards by ratification by CEN-CENELEC. At present, almost 33% of CEN publications come from ISO, and 73% of CENELEC publications come from IEC. As far as hENs are concerned, ISO and IEC standards take precedence where they exist, unless it can be proved that the Commission’s request cannot be met by standards issued by these international bodies (Cuccuru, 2019). This collaboration makes the composition of international standards organisations even more relevant to European issues, since their standards are likely to become hENs.
Additionally, the largest group of ISO stakeholders is industry (Morikawa & Morrison, 2004). This composition gives standardisation organisations access to valuable industrial expertise (McFadden et al., 2021), an essential competence for the development of technical requirements related to product safety. However, it can also be problematic, as industry can steer the choices of standardisation organisations towards its preferences (Werle & Iversen, 2006).
Furthermore, products that fall under the NLF need to undergo a conformity assessment procedure. To this end, manufacturers can choose to rely on any technical specifications, including hENs. For certain products, the conformity assessment must be carried out by a third party, called a notified body. These notified bodies are mainly private entities, designated by an EU country to conduct conformity assessments on a certain range of products (European Commission, n.d.d).12 The entire compliance control chain, from the development of standards supporting legislation to the auditing of systems against these standards, is therefore carried out in the private sector. The European institutions only retain the right to approve and supervise the work of these private entities.
The AI Act takes standards into the realm of fundamental rights protection
The Commission insists on its desire to integrate ethical considerations into the supervision of AI systems. In the explanatory memorandum to the proposed AI Act, the European Commission (2021) states that the proposed essential requirements are inspired by the Ethics Guidelines of the High-Level Expert Group on AI (HLEG, 2019). These principles are recalled in recital 27 of the AI Act. Recital 3 goes even further, stating that the text should ensure a high level of protection “in order to achieve trustworthy AI”. In a previous version of the text, the Parliament even listed some “general principles applicable to all AI systems” (European Parliament, 2023, amendment 213), directly taken from the seven key requirements13 set out by the HLEG.
Some of the “general principles” previously proposed by the Parliament touched directly upon fundamental rights, such as “transparency” or “diversity, non-discrimination and fairness”, which relate to the fundamental rights to information and non-discrimination. The explanatory memorandum also states that it is in the Union’s interest to “ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles” (Explanatory Memorandum, section 1.1). Another example of how fundamental rights are taken into account can be found directly in the text of the Act: a system shall be considered high-risk if it “pose[s] a risk of harm to health and safety, or an adverse impact on fundamental rights” (art. 7.1(b)).14
The AI Act also introduces in article 27 a new mechanism to assess trustworthiness: the fundamental rights impact assessment (FRIA), inspired by the data protection and privacy impact assessments of the GDPR. FRIAs were initially introduced by the Parliament in a previous version of the text (European Parliament, 2023, amendment 413), as their absence from the Commission’s first proposal had been criticised (Edwards, 2022). A FRIA will be mandatory for high-risk systems listed in Annex III. It will contain a list of the natural persons and groups likely to be affected by the system, together with the specific risks to them, as well as the measures to be taken to mitigate those risks, including a description of how human oversight is implemented.
Although fundamental rights have already been addressed and protected by European law – the GDPR for example – the AI Act is the first attempt to integrate fundamental rights into a product safety approach, using hENs and CE marking. The European Commission (2022a) has recognised that standards no longer only deal with technical components, but also “incorporate core EU democratic values and interests, as well as green and social principles”. Despite this apparent desire to extend the scope of technical standards, the standardisation request by the European Commission (2023) does not expressly refer to a standard on fundamental rights, nor on “trustworthiness”, a broad concept that incorporates ethical values and legal norms (Laux et al., 2024). In the Commission’s standardisation request, trustworthiness is rather seen as a cross-cutting theme, not tackled in a specific standard but forming a constitutive part of every standard. CEN-CENELEC, however, continues to address this topic through its working group on foundational and societal aspects of AI systems – CEN-CLC JTC 21/WG 4,15 a European equivalent to the ISO/IEC working group on AI trustworthiness – ISO/IEC JTC 1/SC 42/WG 3.16 Its work includes standards on “AI trustworthiness characterisation”, “AI-enhanced nudging” and “competence requirements for AI ethicists professionals”, among others,17 despite the absence of these topics in the Commission’s request. This shows that ESOs are free to venture beyond the strict limits defined in the Commission’s request.
Other organisations, such as the U.S. National Institute of Standards and Technology (NIST) or the Institute of Electrical and Electronics Engineers (IEEE), are rushing to adopt recommendations, guidelines, or draft standards18 on different aspects of trustworthy AI, including fairness, explainability, and privacy. Some of the technical documents relating to trustworthy AI focus on particular measurements, others focus on processes19 that AI developers are supposed to implement to manage risks, including for fundamental rights (Laux et al., 2024). These recommendations, guidelines, and draft standards on AI are not hENs, but they may influence the development of hENs for AI, either by becoming hENs like ISO standards, or by establishing themselves on the market and influencing the state-of-the-art.
The status of harmonised standards in EU law
Harmonised standards were not originally designed to cover fundamental rights
hENs owe their legal existence to Regulation 1025/2012 (2012) on European standardisation. Regulation 1025/2012 lists the elements that can be considered technical specifications (art. 2.4.a). The regulation mentions environmental protection, health and safety, but does not mention ethical criteria or fundamental rights.
The NLF was intended first as a legislative instrument to bring together all the elements of product safety legislation (European Commission, 2022a, p. 12). This emphasis on safety has gradually shifted to include other criteria. The 2022 version of the Blue Guide specifies, in brackets, that “environmental and health policies also have recourse to a number of these elements” (European Commission, 2022a, p. 12), but this is clearly a secondary objective of the NLF, which is above all safety-oriented. After “safety” risks, the most commonly addressed risks are health risks, and then, more rarely, environmental risks. Recently, other criteria have begun to appear in the texts on product safety. For instance, Regulation 765/2008 (2008) on market surveillance and the marketing of products creates a framework to provide “a high level of protection of public interests, such as health and safety [...], the protection of consumers, protection of the environment and security” (art. 1.2). Regulation 2019/1020 (2019) on market surveillance and compliance of products further states that a product should be suspended from free circulation on the market when it presents a “serious risk to health, safety, the environment or any other public interest” (art. 26.1(e), emphasis added). The term “any other public interest” could encompass risks to fundamental rights. However, this is never explicitly stated in the texts.
Harmonised standards have legal effects and can be considered part of EU law
In Europe, hENs create legal effects. Products manufactured in accordance with hENs benefit from a “presumption of conformity”. This means that the essential requirements covered by hENs are presumed to be automatically met if the products comply with those standards. Manufacturers may then benefit from simplified conformity assessment procedures (Hernalsteen & Kohler, 2022). For instance, in the AI Act, providers of certain high-risk AI systems can opt out of a third-party conformity assessment and rely fully on internal control if they choose to apply hENs (art. 43.3). If they choose not to apply hENs, they must demonstrate by other means how the specifications they use permit products to comply with the essential requirements (European Commission, 2022a, p. 55), a more challenging task than simply applying a hEN. The presumption of conformity afforded by hENs encourages their adoption and avoids legal claims concerning hENs when a manufacturer’s position on the market is affected by these standards (Schapel, 2013).
The legal significance of technical standards in the EU has grown, because regulations cannot be understood without their relevant standards, making them de facto binding (Gamito, 2018; Everson et al., 1999). Some consider that the development of technical standards has entered a stage of “juridification” (Schapel, 2013), a term taken up by the recent Opinion of the Advocate General (2023) in the Public.Resource.Org case (§29). hENs are now regarded as a form of implementing acts (Tovo, 2018).
A number of cases have involved the analysis of the scope of hENs. The Fra.bo SpA v Deutsche Vereinigung (Case C-171/11, 2012) case showed that hENs can have de facto mandatory effects, because the presumption of conformity granted to them renders any other means of achieving compliance more costly and time-consuming. Additionally, the Court of Justice of the European Union (CJEU) held in the James Elliott Construction Limited v Irish Asphalt Limited (Case C-613/14, 2016) case that hENs form part of EU law due to these legal effects. The most recent case to date, Public.Resource.Org, Inc. and Right to Know CLG v European Commission (2024), examined whether hENs could be subject to copyright protection. After an initial ruling by the General Court (Public.Resource.Org, 2021), the claim to copyright protection was re-examined on appeal. To this end, the Advocate General, in her 22 June 2023 Opinion, conducted a detailed analysis of hENs. The Court delivered its judgement on the appeal on 5 March 2024.
Even if Regulation 1025/2012 considers hENs to be voluntary in theory, as there are other ways to demonstrate compliance, in practice it is difficult if not impossible for manufacturers to choose a different avenue. Recourse to hENs is thus quasi-obligatory for economic players if they want to stay competitive (Van Elk & Van der Horst, 2009). Another advantage is that the presumption of conformity reverses the burden of proof, since the company does not have to prove that it complies with the legislation, as this is automatically presumed. If manufacturers choose not to comply with hENs, the onus is on them to prove that their products comply with the legislation, which represents a huge commercial risk that no manufacturer would take (Opinion of Advocate General Medina, 2023, §42). As noted by the Advocate General in the Public.Resource.Org case appeal, the whole architecture of the EU standardisation system presupposes that all actors use hENs (§47). According to the Advocate General, there are no realistic alternatives, because ESOs are too focused on hEN development to propose other standards and there is no financial incentive for other private actors to compete with them (§48).
The commercial operating mode of ESOs is at odds with the legal scope of harmonised standards
The Public.Resource.Org decision (2024) involved two non-profit organisations who had requested access to several hENs referenced in the OJEU but whose full text was not public and was only available behind a paywall. The Commission refused to grant them this access on the basis of the first indent of Article 4(2) of Regulation 1049/2001. This article lists the exceptions to free access to the documents of the EU institutions, and states that access can be refused “where disclosure would undermine the protection of commercial interests [...] including intellectual properties [...], unless there is an overriding public interest in disclosure”. A first judgement was delivered on 14 July 2021 by the General Court, in favour of the Commission. In their appeal, the organisations asserted that the General Court had erred in its assessment of the copyright protection of hENs, since hENs are part of the law and cannot be copyrighted, and that even if they were allowed copyright protection, free access to the law would take precedence over it. While the European Commission claimed that the European standardisation system cannot function without paid access to standards, the two non-profit organisations considered that this does not prevail over the right of access to these standards.

According to the Vademecum of the European Commission (2015), hENs are only a means to support the implementation of legislation. In the Public.Resource.Org case appeal, the Advocate General questioned this claim, affirming that they are more than a simple aid and are actually an “essential tool” for the correct implementation of EU legislation (§33-36). One of the Advocate General’s conclusions is therefore that, due to the heavy reliance of EU legislation on hENs, the effectiveness of the legislation is compromised in the absence of a publicly accessible version of these standards. hENs are indeed considered by the Advocate General to be “indispensable” for enforcing the corresponding EU legislation; thus, the public cannot exercise their rights if they do not have access to hENs (§46-47). To ensure that everyone has the possibility to know the law and comply with it, every act, including hENs, should respect the principle of transparency and the right of access to documents, recognised by the Consolidated Version of the Treaty on European Union (2012, art. 1§2, 10.3, 11.2&3) as well as the Charter of Fundamental Rights of the European Union (2012, art. 42). This is at odds with the operating mode of ESOs, which usually charge for access to technical standards and keep the intellectual property of all their standards.
In addition, the Grand Chamber found that “[harmonised standards] may be necessary for [individuals] to verify whether a given product or service actually complies with the requirements of [a] legislation” (Public.Resource.Org, 2024, §82), emphasising the principles of transparency and openness to which democratic institutions are subject under EU law (§83). In this regard, the Grand Chamber agreed with the non-profit organisations, concluding that there was indeed an overriding public interest in the disclosure of these standards. The initial judgement by the General Court was set aside and the European Commission will need to give access to the four requested harmonised standards. This judgement, however, does not seem to call into question the copyright protection of hENs, as stated by CEN-CENELEC (2024). Yet, it is unclear whether this decision entails automatic publication of hENs in the OJEU or simple disclosure upon request (Soroiu, 2024).
The Commission is responsible for political choices while the ESOs are responsible for technical choices
Today, hENs are published in the OJEU under the letter L, for legislation, whereas previously they were published under C, for information and notices (§9). As confirmed by the various CJEU decisions, hENs are the equivalent of a legally binding regulation, even though they are developed by institutions – the ESOs – without any democratic accountability. In reality, hENs are developed under the direction of the Commission, the executive branch of the EU, which could be seen as the politically responsible author of the standards.
The James Elliott (2016) case found that the Commission has significant control over the drafting procedure and considered hENs to constitute acts of the institutions of the EU. Not only does the Commission request hENs, it also supervises the drafting and adopts them. After the draft harmonised standard has been proposed by the ESOs and before publication in the OJEU, the Commission is empowered to send the document back to the ESOs for modification if the draft does not comply with the request. Ultimately, publication in the OJEU depends on acceptance by the Commission. The cycle of a hEN thus starts and ends with the Commission. This led the Advocate General, in her Opinion on the Public.Resource.Org case appeal (2023), to conclude that the Commission has the power to transform a preparatory document into an act that forms part of EU law (§28). The Advocate General further advises that the Commission should be seen as the institution adopting hENs and that ESOs are only preparatory bodies (§17).
The European Commission (2022b) itself has declared that more power needs to be transferred from the ESOs to the Commission. One way of achieving this would be to allow the Commission to draw up technical solutions directly, as an alternative to the hENs drawn up by the ESOs. The AI Act acknowledges this possibility: the Commission is tasked with drafting “common specifications” where hENs do not exist, where they are considered insufficient, or where “the relevant harmonised standards insufficiently address fundamental rights concerns” (art. 41.1).
However, despite the Commission’s involvement, democratic oversight of hENs is still lacking, as neither the European Parliament nor the Member States have a right to veto standards. Additionally, the Commission’s right to refuse publication of a hEN is constrained by technical limitations and human resource costs that prevent it from carrying out a comprehensive examination (Ebers, 2022).
Fundamental rights and technical standards
It is hard to separate a technical question from a fundamental rights question
ANEC,20 the organisation that defends the interests of European consumers in standardisation matters, has already recognised the many difficulties involved in transposing EU fundamental rights and values into technical standards (Giovannini, 2021b). In an ideal world, technical standards should be separated from “hard normative questions” (Laux et al., 2024) and value judgements. In reality, however, it is hard to separate the two. As pointed out by Solow-Niederman (2024), “standards have politics”; they are neither objective nor neutral.
For instance, the concept of fairness in AI systems has several meanings – moral, legal, and technical (Mulligan et al., 2019). In a general sense, fairness means “the quality of treating people equally or in a way that is right or reasonable” (Cambridge Dictionary, n.d.). In law, this relates to the principle of non-discrimination protected by Article 21 of the EU Charter of Fundamental Rights (2012) and Article 10 of the Treaty on the Functioning of the European Union (2012). There are many technical definitions of fairness, and a system that is fair according to one definition is not necessarily fair according to another. Many definitions cannot even be satisfied at the same time (Chouldechova, 2017). For instance, the COMPAS software, used in the United States to predict the risk of recidivism, has been accused of penalising African-Americans according to one fairness criterion (Angwin et al., 2016), whereas it respected fairness according to another measurement method (Northpointe Inc., 2019). By defining technical formulas to measure fairness in a standard, we run the risk of choosing an approach to non-discrimination that will lead to injustice in certain situations. This example shows that a seemingly technical definition of fairness can hide a normative choice affecting fundamental rights, the kind of normative choice that is generally made by lawmakers and judges.
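A short, self-contained sketch with invented toy data makes this point concrete: the same set of predictions can satisfy one fairness definition (demographic parity) while violating another (equal false positive rates across groups). The metric implementations below are standard textbook formulations, not definitions drawn from any standard or from the COMPAS dispute itself.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def false_positive_rate_gap(y_true, y_pred, group):
    """Difference in false positive rates between two groups (an equalised-odds-style criterion)."""
    fprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 0)  # negatives belonging to group g
        fprs.append(y_pred[mask].mean())
    return abs(fprs[0] - fprs[1])

# Toy data: both groups receive positive predictions at the same rate,
# but the error structure behind those predictions differs.
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))            # 0.0 -> "fair" by this metric
print("false positive rate gap:", false_positive_rate_gap(y_true, y_pred, group))  # 0.33 -> "unfair" by this one
```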
Another example is the NIST study on demographic differentials in facial recognition (Grother, 2022), which presents several “equity measures” for facial recognition systems. For all of them, error rates are calculated for different groups of people, based on sensitive personal information such as gender or ethnicity. For example, some measures are based on a comparison between the error rates of the two groups on which the system performs best and worst, while other measures are based on the average of all error rates. The first approach is, unfortunately, not very robust, and even a slight change in parameters can produce a totally different result. By contrast, an average-based measure is more robust but erases the differences between groups: a system whose performance is very poor for one group but excellent for the others could end up with the same score as a system whose performance is adequate for all groups. Thus, the poor performance for that one group could go unnoticed. Yet, if a system does not work well for a certain category of the population, it can lead to discrimination, such as people of colour being wrongly accused of committing crimes because an algorithm has matched their face to that of a criminal (Hill, 2020).
NIST (n.d.) also proposes a benchmark that evaluates the fairness of systems alongside their performance. A manufacturer can choose to focus on optimising their score on the given performance or fairness criteria. They can also choose which fairness metric to improve: the benchmark includes demographic variations by false match rate (FMR) or false non-match rate (FNMR). A low FMR aims to avoid mistakes where a person is wrongly judged to be the same as the person in a certain image; this usually carries higher security and social stakes, such as preventing intrusions into a building or station and avoiding false accusations in case of police use. A low FNMR avoids the systematic rejection of certain people.
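The tension between the two styles of summary measure can be illustrated with a small sketch: two hypothetical systems with identical average error rates but very different worst-group behaviour. The figures and formulas below are illustrative assumptions, not NIST's exact methodology.

```python
import numpy as np

def summary_measures(error_rates):
    """Two ways of summarising per-group error rates (e.g. FMR or FNMR by demographic group).

    The worst-to-best ratio highlights the most disadvantaged group but is sensitive to
    small changes; the mean is more robust but can mask poor performance on one group.
    """
    rates = np.array(list(error_rates.values()))
    return {
        "worst_to_best_ratio": rates.max() / rates.min(),
        "mean_error_rate": rates.mean(),
    }

# Hypothetical per-group false non-match rates for two facial recognition systems.
system_a = {"group_1": 0.02, "group_2": 0.02, "group_3": 0.02}    # uniform performance
system_b = {"group_1": 0.005, "group_2": 0.005, "group_3": 0.05}  # poor on one group

for name, rates in (("system A", system_a), ("system B", system_b)):
    # Both systems share the same mean (0.02), but their worst-to-best ratios differ (1.0 vs 10.0).
    print(name, summary_measures(rates))
```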
A choice of standard signals a preference for a specific logic and set of priorities (Timmermans & Epstein, 2010). Standards organise social life, and it is crucial to question what choices have been made and how they could have been made differently (Timmermans & Epstein, 2010). However, in the context of AI standards, these choices are often presented as purely technical, and therefore non value-laden, choices (Solow-Niederman, 2024). Moreover, by trying to define good ethical behaviour in technical standards, we risk reducing ethics to a set of tools, which trivialises moral reasoning (Bietti, 2020).
Compliance with standards can lead to ethics washing and CE marking may give citizens an unjustified sense of protection
The diversity of approaches to ethical AI development, such as the multitude of fairness measures, is likely to lead to strategic simplification choices (Aivodji et al., 2019). Manufacturers will display the measure showing that their system is free of bias – and therefore fair in their view – rather than the other measures showing that the system is discriminatory. The introduction of these mathematical measures into a standard is likely to accentuate this trend, by giving greater legitimacy to whichever measure is included in the standard.
Additionally, the protection granted by standards is limited and having in place a risk management system will not guarantee that all possible harms have been taken into account, or that the protective measures are sufficient. For instance, respecting a mathematical notion of fairness does not guarantee that the system will not discriminate (Hoffmann, 2019). Certification to technical standards is often perceived by consumers as a guarantee of safety (ANEC, 2012). This is particularly true of CE marking, often regarded as the cornerstone of the European trustworthiness model, a system that European citizens have come to internalise and respect (Burden & Stenberg, 2022). But the mark is also often wrongly understood by consumers as a guarantee of quality when in fact it only signifies compliance with regulations. Indeed, studies have shown that it is difficult for citizens to understand what the CE mark represents (Burden & Stenberg, 2022). Products covered by the NLF do not require pre-market approval to be sold in the EU. The CE mark therefore does not indicate that a product has been approved by a government agency or by the EU (European Commission, n.d.a). As recalled by the Blue Guide, CE marking is a key indicator of a product’s compliance with EU legislation, but it is not a proof of that compliance (European Commission, 2022a, p.64). As such, a CE marked product may also have safety flaws (Wentholt et al., 2005). Several high-profile cases have involved medical devices – breast implants (Van Leeuwen, 2014; Rott, 2019) and glucose monitors (Wentholt et al., 2005) – that had the CE marking but which were seriously defective. In the same way as for CE marking and safety standards, it is likely that a CE marking relating to fundamental rights may be incorrectly interpreted by citizens as meaning that a given AI system respects fundamental rights.
ESOs and notified bodies have a legitimacy problem with regard to fundamental rights
As previously seen, standardisation organisations are private law bodies, mostly led by industry. There is also a lack of representation of certain stakeholders (Werle & Iversen, 2006). Those impacted by the use of AI have no role to play in standardisation or certification processes (Edwards, 2022). Associations representing the interests of consumers, such as ANEC, as well as those representing workers or small businesses, do not officially have the right to participate in the work of ISO and IEC. They therefore have no say in the development of these standards, even if they are to be adopted by Europe (Cuccuru, 2019). This industry-led composition also raises risks of regulatory capture21 and conflicts of interest, since industrial stakeholders are drafting the very laws by which they will be governed. This has prompted some experts to call for greater participation of civil society in standardisation, to counterbalance the weight of industry and bring more legitimacy to standardisation organisations (Baeva et al., 2023).
Additionally, while a large proportion of ISO’s members come from Western Europe, almost half come from elsewhere in the world, particularly Asia and North America (Morikawa & Morrison, 2004). This could create tensions, as Europe wants both to rely on the work of international standardisation bodies and to adopt standards that represent European values. For instance, ANEC has called for ESOs to address EU values and “not just adopt international standards which might not reflect our values and principles” (Giovannini, 2021a). Standards are therefore the product of political steering by both public and private powers (Solow-Niederman, 2024).
Even if responsibility for issuing hENs is shouldered in large part by the Commission, the ESOs that develop the standards are governed by private law and lack the democratic legitimacy of the Commission and the other EU institutions. However, these legitimacy concerns about private standard-setting for public regulation are often outweighed by the positive externalities associated with the existence of relevant technical requirements (Cuccuru, 2019). The legitimacy of ESOs is further challenged by the AI Act, as standards will encompass fundamental rights issues that ESOs lack the expertise to assess (Veale & Borgesius, 2021). In a previous version of the standardisation request, the European Commission thus stated that CEN-CENELEC should “gather relevant expertise in the area of fundamental rights” (European Commission, 2022c, art. 2.1). This is necessary to ensure the consistency of technical standards with legal norms, yet it might not be sufficient to guarantee the legitimacy of the ESOs in establishing EU legal acts dealing with the protection of fundamental rights.
This lack of legitimacy extends to the notified bodies that are in charge of the conformity assessment procedure in certain cases. To have the right to conduct conformity assessments, notified bodies must be accredited in accordance with the ISO/IEC 17011:2017 standard (ISO & IEC, 2017), demonstrating, notably, their impartiality and the competence of their staff. While this accreditation attests to their technical knowledge of a specific field, it does not account for their expertise in fundamental rights issues.
For the AI Act specifically, many systems will not be audited by a third party and the conformity assessment will be carried out internally. This calls into question the legitimacy of a provider of an AI system to assess the risks their product poses to fundamental rights, particularly when this assessment is carried out without external oversight.
Standards can cover fundamental rights topics if they do not try to set thresholds or evaluate trade-offs
As seen previously, standards have difficulty in addressing fundamental rights issues, and when they attempt to do so they can lead to ethics washing and consumer deception. ANEC has already advised that hENs should not be used to define or apply fundamental rights, legal, or ethical principles (Giovannini, 2021b). If standards cannot attest to respect for fundamental rights, what purpose do they serve and what should they contain?
Let us take the example of a standard on fairness. Such a standard can be used by a company to benchmark itself against the competition and assess its own progress. If the results are good enough, the company will use the standard as a marketing tool, like the NIST benchmark for facial recognition for which companies compete to achieve the best results based on different fairness tests. This fosters competition between companies and encourages them to innovate (Blind, 2016). A standard can also enhance transparency and redress information asymmetries (Gamito, 2018) by presenting to users and citizens a standardised score of different performance parameters, including for fairness, thereby permitting better comparison between products. Finally, standards, such as hENs, that are linked to legal compliance obligations, provide public authorities with a uniform method for assessing compliance.
These different uses of standards hint at what they can and cannot contain. For compliance, hENs will help clarify the AI Act’s approach to risk, for instance by defining how to set up a risk management system or detailing what elements a conformity assessment should contain.22 Additionally, standards can help harmonise how to conduct an algorithmic impact assessment (Calvi & Kotzinos, 2023) or a FRIA. As regards governance, standards can provide guidance on the structure to be put in place within the company – perhaps with a digital ethics officer or an ethics board – the competences required for such a position, or the type of decisions it can and cannot make.23 Product-based standards can define tools to help make better design decisions. For example, they can catalogue the evaluation measures known in the literature24 – paying attention to selection biases – or the technical means to avoid a system malfunction that could lead to fundamental rights violations in the long term. In short, standards can help define tools and provide a common vocabulary for comparison between products or companies. These tools can help market actors compete transparently on fundamental rights issues, showing that they have responsible processes in place and that, on certain metrics, they have achieved a certain score on an issue such as fairness. Laux et al. (2024) similarly propose that standards provide metrics for “ethical disclosure by default”, a system guaranteeing that users, regulators, judges, and other stakeholders receive meaningful information in order to evaluate fundamental rights compliance in a given context.
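As a purely hypothetical illustration of such disclosure-oriented standardisation, the sketch below defines a record that reports which fairness metrics were measured, how, and on which populations, while deliberately containing no pass/fail threshold. All field names and values are invented for the example and do not correspond to any existing standard or schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetricDisclosure:
    name: str         # e.g. "false negative rate gap"
    value: float
    method: str       # reference to the measurement procedure actually used
    population: str   # groups and data on which the metric was computed

@dataclass
class FairnessDisclosure:
    """Hypothetical disclosure record: it documents how fairness was measured and what the
    scores are, but contains no acceptability threshold, since deciding whether a residual
    disparity is acceptable is left to regulators and judges in a given context."""
    system_name: str
    intended_use: str
    metrics: List[MetricDisclosure] = field(default_factory=list)

disclosure = FairnessDisclosure(
    system_name="hypothetical CV-screening model",
    intended_use="pre-selection of job applications, subject to human review",
    metrics=[
        MetricDisclosure("demographic parity gap", 0.04, "standardised measurement procedure (to be referenced)", "applicants by gender"),
        MetricDisclosure("false negative rate gap", 0.07, "standardised measurement procedure (to be referenced)", "applicants by age band"),
    ],
)
print(disclosure)
```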
However, there are some things that AI standards should not try to do. Even when following a standard on risk management, the evaluation of risks will remain the responsibility of the provider. A standard can therefore never say what risks are acceptable or unacceptable (Fraser & Bello y Villarino, 2021). Fairness standards should not say what definition of fairness should be used for a given use case25 or what the acceptable threshold of unfairness is. Where there is a trade-off to be made between fairness and performance, a standard should not say what that trade-off should be. A standard can only provide different ways of defining and measuring fairness, making sure everyone is using the same taxonomy and methodology to measure the different aspects of fairness, but it will not say which aspect of fairness should be given priority, or whether a residual level of unfairness can be tolerated in a given situation.
Performance standards26 are quite common in product safety. They specify how the product is to be built, what materials are to be used, how they are to be assembled, and so on. They also specify the tests the product must meet, such as the exact temperature or pressure it must withstand. In product safety, it is not unusual for a standard to define a threshold, for example a level of resistance to fire, or the error rate of a safety component for machinery. These standards are, however, nearly impossible to establish today for AI systems due to their probabilistic nature, which makes their reaction to certain tests highly dependent on the situation, the data on which the system has been trained, etc. This is even truer for standards that have a direct impact on people’s fundamental rights, such as fairness standards. Setting a threshold for these measures would be like setting a threshold for the level of discrimination that may be accepted: it is neither a universal decision nor something acceptable from a legal standpoint. Setting a fairness threshold could also be abused by claiming that a system is “fair enough”, without any concern for improving fairness further (Buyl & De Bie, 2022). Whether a fairness score is acceptable or is the right metric to be using in this situation should remain outside of standards and determined by the regulator and judge.
As thresholds cannot be set for standards relating to the protection of fundamental rights, the development of hENs on these subjects for the purpose of assessing compliance with the AI Act seems like a difficult – and not necessarily desirable – task. Because of their legal effects, hENs will always aim to set thresholds, and that indeed seems to be the intent of the AI Act since hENs and CE marking are supposed to signal compliance (Laux et al., 2024). But outside of the safety realm, hENs are less suitable, as they cannot define what is an “acceptable” level of protection to fundamental rights. Standards should not attempt to answer these hard normative questions, nor should they seek consensus; they should rather create means of disclosure (Laux et al., 2024). Access to information regarding a certain technology can then enable regulators and judges to make specific decisions in a given context. This article therefore invites standardisation actors to develop standards, whether hENs or other standards, which contribute to the protection of fundamental rights through the dissemination of good practices, but which avoid making value-laden societal judgements.
Conclusion
This article shows the AI Act’s attempt to operate at two levels: ex ante compliance, inspired by product safety rules with the use of hENs and CE marking, and the protection of fundamental rights. It examines recent case law that has determined the role of hENs in European law, as well as the 5 March 2024 CJEU decision and the accompanying Advocate General’s Opinion in the Public.Resource.Org case appeal. This case law shows that hENs are to be regarded as EU legal acts and that, while the Commission is to be held responsible for the political dimension of hENs, the ESOs are responsible for the technical content.
However, product safety tools such as hENs and CE marking were not designed to cover fundamental rights. Standards on fundamental rights would be difficult to establish and could lead to ethics washing and consumer deception. The field of expertise of ESOs, made up mainly of industrial experts, is not that of fundamental rights, and they could face a legitimacy problem if they tried to take on this role reserved for legislators and judges. This does not mean, however, that standards cannot address fundamental rights: they still have an important role to play in encouraging best practices in processes and measurement techniques, but they can never attempt to decide on a trade-off or on the acceptable level of a given fundamental rights risk.
The AI Act approach calls into question the very nature of standards and their limits. It might also pose problems for the interpretation of standards by the courts, as in the past the boundaries between the technical and legal worlds were well-defined, whereas today there is a certain overlap. In this context, even more than in the case of safety standards, ESOs will have to account for the power they hold. The hENs to be developed in support of the AI Act will set the tone for future regulations in the field of digital law. Europe should, however, be cautious about the power it grants to hENs, particularly if they continue their foray into fundamental rights.
References
AIRISE. (n.d.). Work programme on AI in manufacturing: CEN/CLC/JTC 21 work programme [Report]. https://airise.eu/ecosystem/standards-training/work-in-progress
Aivodji, U., Arai, H., Fortineau, O., Gambs, S., Hara, S., & Tapp, A. (2019). Fairwashing: The risk of rationalization. Proceedings of the 36th International Conference on Machine Learning, 97, 161–170. https://proceedings.mlr.press/v97/aivodji19a.html
Allen, R. H., & Sriram, R. D. (2000). The role of standards in innovation. Technological Forecasting and Social Change, 64(2–3), 171–181. https://doi.org/10.1016/S0040-1625(99)00104-3
ANEC. (2012). CE marking. ‘Caveat emptor—Buyer beware’ (Position paper ANEC-SC-2012-G-026final). https://www.anec.eu/attachments/ANEC-SC-2012-G-026final.pdf
ANEC. (n.d.). FAQ & useful links. ANEC: The European consumer voice in standardisation. https://www.anec.eu/about-anec/faq-useful-links
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Baeva, G., Puntschuh, M., & Binder, M. (2023). Power to the standards. Expert consultation on the role of norms and standards in the European regulation of artificial intelligence [White paper]. Zentrum für vertrauenswürdige Künstliche Intelligenz. https://www.zvki.de/storage/publications/2023-12/Fohsi7Yzn7/ZVKI-Whitepaper-Standards-EN-2023_v2.pdf
Bietti, E. (2020). From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 210–219. https://doi.org/10.1145/3351095.3372860
Blind, K. (2004). The economics of standards. Theory, evidence, policy. Edward Elgar Publishing. https://doi.org/10.4337/97810353051555
Blind, K. (2016). The impact of standardisation and standards on innovation. In J. Edler, P. Cunningham, A. Gök, & P. Shapira (Eds.), Handbook of innovation policy impact (pp. 423–449). Edward Elgar Publishing. https://doi.org/10.4337/9781784711856.00021
Burden, H., & Stenberg, S. (2022). Regulating trust – An ongoing analysis of the AI Act (Position Paper 2022:138). RISE Research Institutes of Sweden. https://www.ri.se/en/regulating-trust-an-ongoing-analysis-of-the-ai-act
Buyl, M., & De Bie, T. (2022). Inherent limitations of AI fairness (arXiv.2212.06495; Version 2). arXiv. https://doi.org/10.48550/ARXIV.2212.06495
Calvi, A., & Kotzinos, D. (2023). Enhancing AI fairness through impact assessment in the European Union: A legal and computer science perspective. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 1229–1245. https://doi.org/10.1145/3593013.3594076
Cambridge Dictionary. (n.d.). Fairness. In Cambridge English Dictionary Online. Cambridge University Press. https://dictionary.cambridge.org/dictionary/english/fairness
Case C-171/11. (2012). Judgment of the Court (Fourth Chamber), 12 July 2012. Fra.bo SpA v Deutsche Vereinigung des Gas- und Wasserfaches eV (DVGW)—Technisch-Wissenschaftlicher Verein. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62011CJ0171
Case C-588/21 P. (2023). Opinion of Advocate General Medina delivered on 22 June 2023. Public.Resource.Org, Inc., Right to Know CLG v European Commission. The Court of Justice of the European Union. https://curia.europa.eu/juris/document/document.jsf?text=&docid=274881&pageIndex=0&doclang=EN&mode=req&dir=&occ=first&part=1&cid=2825624
Case C-588/21 P. (2024). Judgment of the Court (Grand Chamber), 5 March 2024. Public.Resource.Org and Right to Know v Commission and Others. The Court of Justice of the European Union. https://curia.europa.eu/juris/document/document.jsf?text=&docid=283443&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=8223993
Case C-613/14. (2016). Judgment of the Court (Third Chamber) of 27 October 2016 (request for a preliminary ruling from the Supreme Court—Ireland)—James Elliott Construction Limited v Irish Asphalt. The Court of Justice of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62014CA0613
Case T-185/19. (2021). Judgment of the General Court (Fifth Chamber, Extended Composition) of 14 July 2021. Public.Resource.Org, Inc. and Right to Know CLG v European Commission. The Court of Justice of the European Union. https://curia.europa.eu/juris/liste.jsf?language=en&td=ALL&num=T-185/19
Castets-Renard, C., & Besse, P. (2023). Ex ante accountability of the AI Act: Between certification and standardisation, in pursuit of fundamental rights in the country of compliance. In C. Castets-Renard & J. Eynard (Eds.), Artificial intelligence law: Between sectoral rules and comprehensive regime. Comparative Law. Bruylant. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4203925
CEN. (2019). The ‘New Approach’ [Guidance document]. https://boss.cen.eu/reference-material/guidancedoc/pages/newapproach/
CEN-CENELEC. (2024). Copyright protection of Harmonized Standards not in question – however, there is an overriding public interest in their disclosure [Press statement]. https://www.cencenelec.eu/news-and-events/news/2024/brief-news/2024-03-05-ecj-case/
CEN-CENELEC. (n.d.). Search standards [Search engine]. https://standards.cencenelec.eu/dyn/www/f?p=205:105:0
CENELEC. (2017). CENELEC guide 13 FAQ: Frequently asked questions on the Frankfurt Agreement [Guide]. https://www.cencenelec.eu/media/Guides/CLC/13_cenelecguide13_faq.pdf
Charter of Fundamental Rights of the European Union. (2012). Official Journal, C 326/02, 391–407.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Consolidated Version of the Treaty on European Union. (2012). Official Journal, C 326, 13–390.
Cuccuru, P. (2019). Interest representation in European standardisation: The case of CEN and CENELEC (Research Paper 2019–52). Amsterdam Centre for European Law and Governance. http://dx.doi.org/10.2139/ssrn.3505290
Dal Bó, E. (2006). Regulatory capture: A review. Oxford Review of Economic Policy, 22(2), 203–225. https://doi.org/10.1093/oxrep/grj013
Ebers, M. (2022). Standardising AI: The case of the European Commission’s proposal for an Artificial Intelligence Act. In L. A. DiMatteo, C. Poncibò, & M. Cannarsa (Eds.), The Cambridge handbook of artificial intelligence: Global perspectives on law and ethics (pp. 321–344). Cambridge University Press. https://doi.org/10.1017/9781009072168.030
Edwards, L. (2022). Regulating AI in Europe: Four problems and four solutions [Expert opinion]. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/wp-content/uploads/2022/03/Expert-opinion-Lilian-Edwards-Regulating-AI-in-Europe.pdf
European Commission. (2015). Vademecum on European standardisation in support of Union legislation and policies—Part 1: Role of the Commission’s standardisation requests to the European standardisation organisations (Working Document SWD(2015) 205 final PART 1/3). https://ec.europa.eu/docsroom/documents/13507/attachments/1/translations
European Commission. (2021).Proposal for a regulation of the European Parliament and of the Council. Laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021) 206 final).https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
European Commission. (2022a).Commission notice. The ‘Blue Guide’ on the implementation of EU product rules 2022 (OJ C 247; pp. 1–152).https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022XC0629%2804%29
European Commission. (2022b).Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. An EU strategy on standardisation: Setting global standards in support of a resilient, green and digital EU single market (COM(2022) 31 Final).https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022DC0031
European Commission. (2022c).Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence.https://ec.europa.eu/docsroom/documents/52376
European Commission. (2023).Commission implementing decision on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence (Implementing Decision C(2023)3215).https://ec.europa.eu/transparency/documents-register/detail?ref=C(2023)3215&lang=en
European Commission. (n.d.a).CE marking. Your Europe.https://europa.eu/youreurope/business/product-requirements/labels-markings/ce-marking/index_en.htm
European Commission. (n.d.b).Harmonised standards. Internal market, industry, entrepreneurship and SMEs.https://single-market-economy.ec.europa.eu/single-market/european-standards/harmonised-standards_en
European Commission. (n.d.c).Manufacturers. Internal market, industry, entrepreneurship and SMEs.https://single-market-economy.ec.europa.eu/single-market/ce-marking/manufacturers_en
European Commission. (n.d.d).Notified bodies. Internal market, industry, entrepreneurship and SMEs.https://single-market-economy.ec.europa.eu/single-market/goods/building-blocks/notified-bodies_en
European Commission. (n.d.e).Notified bodies (NANDO). Single market compliance space.https://webgate.ec.europa.eu/single-market-compliance-space/#/notified-bodies
European Council. (2022).Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts—General approach (14954/22).https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf
European Parliament. (2023).Artificial Intelligence Act. Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) (P9_TA(2023)0236; Texts Adopted).https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf
European Parliament. (2024).Artificial Intelligence Act. European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) (P9_TA(2024)0138; Texts Adopted).https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf
Everson, M., Majone, G., Metcalfe, L., & Schout, A. (1999).The role of specialised agencies in decentralising EU governance [Report]. European Commission.https://www.academia.edu/103116487/The_Role_of_Specialised_Agencies_in_Decentralising_EU_Governance
Fraser, H., & Bello y Villarino, J.-M. (2021).Where residual risks reside: A comparative approach to Art 9(4) of the European Union’s proposed AI Regulation. SSRN.https://doi.org/10.2139/ssrn.3960461
Fraser, H., & Bello y Villarino, J.-M. (2023). Acceptable risks in Europe’s proposed AI Act: Reasonableness and other principles for deciding how much risk management is enough.European Journal of Risk Regulation, 1–16.https://doi.org/10.1017/err.2023.57
Gamito, M. C. (2018). Europeanization through Standardization: ICT and Telecommunications.Yearbook of European Law,37, 395–423.https://doi.org/10.1093/yel/yey018
Giovannini, C. (2021a).ANEC comments on the European Commission proposal for an Artificial Intelligence Act (Position Paper ANEC-DIGITAL-2021-G-071). ANEC.https://www.anec.eu/images/Publications/position-papers/Digital/ANEC-DIGITAL-2021-G-071.pdf
Giovannini, C. (2021b).The role of standards in meeting consumer needs and expectations of AI in the European Commission proposal for an Artificial Intelligence Act (Position Paper ANEC-DIGITAL-2021-G-141). ANEC.https://www.anec.eu/images/Publications/position-papers/Digital/ANEC-DIGITAL-2021-G-141.pdf
Gornet, M., & Maxwell, W. (2023).Normes Techniques et éthique de L’IA [AI technical standards and ethics]. CNIA 2023 - Conférence Nationale en Intelligence Artificielle, Strasbourg, France.https://hal.science/hal-04121843
Grother, P. (2022).Face Recognition Vendor Test (FRVT). Part 8: Summarizing demographic differentials (Report NIST IR 8429). National Institute of Standards and Technology.https://pages.nist.gov/frvt/reports/demographics/nistir_8429.pdf
Hanson, D. (2005).CE marking, product standards and world trade. Edward Elgar Publishing.https://doi.org/10.4337/9781781958339
Hernalsteen, L., & Kohler, C. (2022).Drafting harmonized standards in support of the Artificial Intelligence Act (AIA) [Presentation]. CEN-CENELEC.https://www.cencenelec.eu/media/CEN-CENELEC/AreasOfWork/CEN-CENELEC_Topics/Artificial%20Intelligence/jtc-21-harmonized-standards-webinar_for-website.pdf
High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019).Ethics guidelines for trustworthy AI [Guidelines]. European Commission.https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
Hill, K. (2020, June 24). Wrongfully accused by an algorithm.The New York Times.https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse.Information, Communication & Society,22(7), 900–915.https://doi.org/10.1080/1369118X.2019.1573912
IEEE Standards Association. (n.d.).IEEE portfolio of AIS technology and impact standards and standards projects. AIS Standards.https://standards.ieee.org/initiatives/autonomous-intelligence-systems/standards/
International Organisation for Standardisation (ISO) & CEN. (2016).Foire aux questions relatives à l’Accord de Vienne [Frequently asked questions about the Vienna Agreement].https://boss.cen.eu/media/CEN/ref/va_faq_fr.pdf
International Organisation for Standardisation (ISO) & International Electrotechnical Commission (IEC). (2017).Conformity assessment – Requirements for accreditation bodies accrediting conformity assessment bodies (Report ISO/IEC 17011:2017).https://www.iso.org/standard/67198.html
International Organisation for Standardisation (ISO) & International Electrotechnical Commission (IEC). (2021).Information technology—Artificial intelligence (AI)—Bias in AI systems and AI aided decision making (Report ISO/IEC TR 24027:2021).https://www.iso.org/standard/77607.html
ITEH Standards. (n.d.a).CEN/CLC/JTC 21—Artificial intelligence.https://standards.iteh.ai/catalog/tc/cen/5af9e506-b1dc-4fcd-a3af-84d65edbf2bb/cen-clc-jtc-21
ITEH Standards. (n.d.b).ISO/IEC JTC 1/SC 42—Artificial intelligence.https://standards.iteh.ai/catalog/tc/iso/a8b53a70-2bb4-40a8-abf1-f42dde4432c5/iso-iec-jtc-1-sc-42
Kaplinsky, R. (2010).The role of standards in global value chains (Working Paper 5396; Policy Research Working Paper Series). The World Bank.https://doi.org/10.1596/1813-9450-5396
Laux, J., Wachter, S., & Mittelstadt, B. (2024). Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act.Computer Law & Security Review,53.https://doi.org/10.1016/j.clsr.2024.105957
McFadden, M., Jones, K., Taylor, E., Osborn, G., & Oxford Information Labs. (2021).Harmonising artificial intelligence: The role of standards in the EU AI regulation (Working Paper 2021.5). Oxford Commission on AI & Good Governance.https://www.oii.ox.ac.uk/news-events/reports/harmonising-artificial-intelligence/
Morikawa, M., & Morrison, J. (2004).Who develops ISO standards? A survey of participation in ISO’s international standards development processes [Report]. Pacific Institute for Studies in Development, Environment, and Security.https://library.iso.org/contents/data/255-who-develops-iso-standards-a.html
Mulligan, D. K., Kroll, J. A., Kohli, N., & Wong, R. Y. (2019). This thing called fairness: Disciplinary confusion realizing a value in technology.Proceedings of the ACM on Human-Computer Interaction,3(CSCW), 1–36.https://doi.org/10.1145/3359221
National Institute of Standards and Technology. (2023).AI risk management framework (AI RMF 1.0) (Report NIST AI 100-1).https://doi.org/10.6028/NIST.AI.100-1
National Institute of Standards and Technology. (n.d.).Face recognition technology evaluation (FRTE) 1:1 verification.https://pages.nist.gov/frvt/html/frvt11.html
Northpointe Inc. (2019).Practitioner’s guide to COMPAS core [Guide].https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf
Regulation 765/2008. (2008).Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93. European Parliament and Council.http://data.europa.eu/eli/reg/2008/765/oj
Regulation 1025/2012. (2012).Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council. European Parliament and Council.http://data.europa.eu/eli/reg/2012/1025/oj
Regulation 1049/2001. (2001).Regulation (EC) No 1049/2001 of the European Parliament and of the Council of 30 May 2001 regarding public access to European Parliament, Council and Commission documents. European Parliament and Council.http://data.europa.eu/eli/reg/2001/1049/oj
Regulation 2016/679. (2016).Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). European Parliament and Council.http://data.europa.eu/eli/reg/2016/679/2016-05-04
Regulation 2019/1020. (2019).Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011. European Parliament and Council.http://data.europa.eu/eli/reg/2019/1020/oj
Regulation 2022/868. (2022).Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act). European Parliament and Council.http://data.europa.eu/eli/reg/2022/868/oj
Regulation 2022/1925. (2022).Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act). European Parliament and Council.https://eur-lex.europa.eu/eli/reg/2022/1925/oj
Regulation 2022/2065. (2022).Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act). European Parliament and Council.http://data.europa.eu/eli/reg/2022/2065/oj
Regulation 2023/988. (2023).Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European Parliament and the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC (pp. 1–51). European Parliament and Council.http://data.europa.eu/eli/reg/2023/988/oj
Regulation 2023/2854. (2023).Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act). European Parliament and Council.http://data.europa.eu/eli/reg/2023/2854/oj
Rott, P. (2019). Certification of medical devices: Lessons from the PIP scandal. In P. Rott (Ed.),Certification – Trust, accountability, liability (Vol. 16, pp. 189–211). Springer International Publishing.https://doi.org/10.1007/978-3-030-02499-4_9
Schepel, H. (2013). The new approach to the New Approach: The juridification of harmonized standards in EU law.Maastricht Journal of European and Comparative Law,20(4), 521–533.https://doi.org/10.1177/1023263X1302000404
Schuett, J. (2023). Risk management in the Artificial Intelligence Act.European Journal of Risk Regulation, 1–19.https://doi.org/10.1017/err.2023.1
Solow-Niederman, A. (2024). Can AI standards have politics?UCLA Law Review, 231–245.
Soroiu, A. (2024). The fall of the great paywall for EU harmonised standards: The CJEU dismantles EU standardisation in C-588/21 P (Public.Resource.Org).Verfassungsblog.https://doi.org/10.59704/5a60ea5d42c2b059
Tassey, G. (2000). Standardization in technology-based markets.Research Policy,29(4–5), 587–602.https://doi.org/10.1016/S0048-7333(99)00091-8
Timmermans, S., & Epstein, S. (2010). A world of standards but not a standard world: Toward a sociology of standards and standardization.Annual Review of Sociology,36(1), 69–89.https://doi.org/10.1146/annurev.soc.012809.102629
Tovo, C. (2018). Judicial review of harmonized standards: Changing the paradigms of legality and legitimacy of private rulemaking under EU law.Common Market Law Review,55(4), 1187–1216.https://doi.org/10.54648/COLA2018096
van Elk, K., & van der Horst, R. (2009).Access to standardisation: Study for the European Commission, Enterprise and Industry Directorate-General [Final report]. EIM Business & Policy Research.https://www.anec.eu/images/Publications/Access-Study---final-report.pdf
van Leeuwen, B. (2014). PIP breast implants, the EU’s New Approach for goods and market surveillance by notified bodies.European Journal of Risk Regulation,5(3), 338–350.https://doi.org/10.1017/S1867299X0000386X
Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act—Analysing the good, the bad, and the unclear elements of the proposed approach.Computer Law Review International,22(4), 97–112.https://doi.org/10.9785/cri-2021-220402
Wentholt, I. M. E., Hoekstra, J. B. L., Zwart, A., & DeVries, J. H. (2005). Pendra goes Dutch: Lessons for the CE mark in Europe.Diabetologia,48, 1055–1058.https://doi.org/10.1007/s00125-005-1754-y
Werle, R., & Iversen, E. J. (2006). Promoting legitimacy in technical standardization.Science, Technology & Innovation Studies,2, 19–39.https://doi.org/10.17877/DE290R-12756
Footnotes
1. This work uses the term “AI” to refer both to machine learning algorithms and logic- and knowledge-based systems, in a similar way to the European Council (2022) version of the AI Act.
2. For the sake of clarity, references to the AI Act in this article are always to this most recent version, unless stated otherwise.
3. The acronym hEN is used by European Standardisation Organisations (ESOs) such as CEN-CENELEC to designate harmonised standards. The letters EN are placed in front of the name of a standard to indicate that it has been adopted by the ESOs and is therefore considered a European standard. The letter h is added to indicate that it is a harmonised standard.
4. The Commission’s explanatory memorandum, which precedes the text of the AI Act (European Commission, 2021) and constitutes an important aid to the interpretation of the legislation, contains a list of rights whose protection should be enhanced by the AI Act (section 3.5). It includes, for example, the right to human dignity, respect for private life and protection of personal data, non-discrimination, equality between women and men, freedom of expression, freedom of assembly, the right to an effective remedy and to a fair trial, the rights of defence and the presumption of innocence, and the general principle of good administration.
5. Note that a similar definition is given in the General Product Safety Regulation (Regulation 2023/988).
6. Directives and regulations.
7. While the European Commission (n.d.c) usually prefers the term “manufacturer” when referring to NLF legislation, the AI Act uses the term “provider”, defined in article 3(3). We will use the former when discussing NLF legislation generally and the latter when discussing the AI Act.
8. CE marking is applicable throughout the European Economic Area (EEA).
9. Not all standards developed by ESOs are hENs; only those developed following a request from the Commission qualify (Regulation 1025/2012, art. 2(1)(b)&(c)).
10. Not all harmonised standards are cited in the OJEU. Some might be requested by the European Commission to address standardisation gaps without supporting a specific piece of legislation (Hernalsteen & Kohler, 2022).
11. Risk management system for AI systems; governance and quality of datasets used to build AI systems; record keeping through logging capacities by AI systems; transparency and information provisions for users of AI systems; human oversight of AI systems; accuracy specifications for AI systems; robustness specifications for AI systems; cybersecurity specifications for AI systems; quality management system for providers of AI systems, including a post-market monitoring process; and conformity assessment for AI systems.
12. For a complete list of all notified bodies, see (European Commission, n.d.e).
13. Except for “accountability”, as it is assumed that the regulation will enable this key requirement to be enforced.
14. This list was initially extended by the Parliament in a previous version of the AI Act (European Parliament, 2023), which also considered harms to “the environment, democracy and the rule of law” (amendment 246), but this extension was not retained in the latest version of the text.
15. For the structure of JTC 21, see (ITEH Standards, n.d.a).
16. For the structure of SC 42, see (ITEH Standards, n.d.b).
17. For a complete list of published standards and standards under development, see (CEN-CENELEC, n.d.).
18. See, for instance, the IEEE 7000 standards series, available in the list of IEEE standards (IEEE Standards Association, n.d.), or the National Institute of Standards and Technology (NIST) risk management framework (NIST, 2023). For an overview of standards related to ethics, see (Gornet & Maxwell, 2023).
19. It is worth noting that even outside of AI trustworthiness, standards are often classified as “product” or “process” standards (Tassey, 2000; Kaplinsky, 2010).
20. As stated on ANEC’s website: “ANEC stands for the ‘European Association for the Co-ordination of Consumer Representation in Standardisation AISBL’ [...] ANEC is often described as ‘The European consumer voice in standardisation’” (ANEC, n.d.).
21. According to Dal Bó (2006), regulatory capture is “the process through which special interests affect state intervention”.
22. These topics are notably present in the standardisation request (European Commission, 2023).
23. Like the standard on “competence requirements for AI ethicists professionals” that is being prepared by CEN-CENELEC (Arise.EU, n.d.).
24. Like the ISO standard on bias mitigation (ISO & IEC, 2021), which surveys the methods described in the literature for assessing and dealing with bias.
25. This includes both the metric used and the population groups on which the system is evaluated.
26. Following the terminology of Allen and Sriram (2000), these are also referred to as quality standards (Blind, 2004).