Learn about certification frameworks and evaluation criteria for secure and resilient AI. This section sets out what stakeholders need to know to assess whether AI systems are secure, resilient and fit for purpose, based on published standards and technical reports from ETSI, ISO/IEC and CEN/CENELEC.

Each entry below gives the reference/standard number, title, short description/scope, publication status, harmonization status, category and issuing body.

ETSI deliverables
ETSI TR 103 910 V1.1.1 (2025-02)
Title: Methods for Testing and Specification (MTS); AI Testing; Test Methodology and Test Specification for ML-based Systems
Scope: The present document describes test types, test items, quality criteria, and testing methodologies associated with testing ML-based systems, with an emphasis on supervised, unsupervised and reinforcement learning. It outlines how these testing practices can be effectively integrated into the life cycle of typical ML-based systems, and applies to all types of organizations involved in any of the lifecycle stages of developing and operating ML-based systems, as well as to any other stakeholder roles.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC MTS
ETSI TS 104 050 V1.1.1 (2025-03)
Title: Securing Artificial Intelligence (SAI); AI Threat Ontology and definitions
Scope: The present document defines what an Artificial Intelligence (AI) threat is and how it can be distinguished from any non-AI threat. The model of an AI threat is presented in the form of an ontology to give a view of the relationships between actors representing threats, threat agents, assets and so forth, and defines those terms (see also [1]). The ontology in the present document extends from the base taxonomy of threats and threat agents described in ETSI TS 102 165-1 [2] and addresses the overall problem statement for SAI presented in ETSI TR 104 221 [i.21] and the mitigation strategies described in ETSI TR 104 222 [i.22]. Note that, although both technical reports are listed in clause 2.2, they are essential for understanding the scope of the present document.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
ETSI TR 104 221 V1.1.1 (2025-01)
Title: Securing Artificial Intelligence (SAI); Problem Statement
Scope: The present document describes the problem of securing AI-based systems and solutions, with a focus on machine learning, and the challenges relating to confidentiality, integrity and availability at each stage of the machine learning lifecycle. It also describes some of the broader challenges of AI systems, including bias, ethics and explainability. A number of different attack vectors are described, as well as several real-world use cases and attacks.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
ETSI TR 104 048 V1.1.1 (2025-01)
Title: Securing Artificial Intelligence (SAI); Data Supply Chain Security
Scope: The present document addresses the security problems arising from data supply chains in the development of Artificial Intelligence (AI) and Machine Learning (ML) systems. Data is a critical component in the development of AI/ML systems, and compromising the integrity of data has been demonstrated to be a viable attack vector against such systems (see clause 4). The present document summarizes the methods currently used to source data for training AI, along with a review of existing initiatives for developing data sharing protocols. It then provides a gap analysis on these methods and initiatives to scope possible requirements for standards for ensuring integrity and confidentiality of the shared data, information and feedback.
The present document relates primarily to the security of data, rather than the security of models themselves. It is recognized, however, that AI supply chains can be complex and that models can themselves be part of the supply chain, generating new data for onward training purposes. Model security is therefore influenced by, and in turn influences, the security of the data supply chain. Mitigation and detection methods can be similar for data and models, with poisoning of one being detected by analysis of the other. The present document focuses on security; however, data integrity is not only a security issue. Techniques for assessing and understanding data quality for performance, transparency or ethics purposes are applicable to security assurance too. An adversary's aim can be to disrupt or degrade the functionality of a model to achieve a destructive effect. The adoption of mitigations for security purposes will likely improve performance and transparency, and vice versa. The present document does not discuss data theft, which can be considered a traditional cybersecurity problem. The focus is instead specifically on data manipulation in, and its effect on, AI/ML systems.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
ETSI TR 104 222 V1.2.1 (2024-07)
Title: Securing Artificial Intelligence; Mitigation Strategy Report
Scope: The present document summarizes and analyses existing and potential mitigations against threats to AI-based systems as discussed in ETSI GR SAI 004 [i.1]. The goal is a technical survey of mitigations against threats introduced by adopting AI into systems. The survey sheds light on available methods for securing AI-based systems by mitigating known or potential security threats. It also addresses security capabilities, challenges and limitations when adopting mitigations for AI-based systems in certain potential use cases.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
ETSI TR 104 066 V1.1.1 (2024-07)
Title: Securing Artificial Intelligence; Security Testing of AI
Scope: The present document identifies methods and techniques that are appropriate for security testing of ML-based components. Security testing of AI does not end at the component level: as for testing of traditional software, the integration with other components of a system needs to be tested as well. However, integration testing is not the subject of the present document.
The present document addresses:
• security testing approaches for AI;
• security test oracles for AI;
• definition of test adequacy criteria for security testing of AI.
Techniques from each of these topics should be applied together in the security testing of an ML component. Security testing approaches are used to generate test cases that are executed against the ML component. Security test oracles enable the calculation of a test verdict that determines whether a test case has passed (no vulnerability has been detected) or failed (a vulnerability has been identified). Test adequacy criteria are used to determine the overall progress and can be employed to specify a stop condition for security testing.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
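The oracle-and-verdict mechanism described in the ETSI TR 104 066 scope above can be sketched with a toy robustness oracle for a linear classifier. The model, the perturbation bound and the analytic worst-case check below are illustrative assumptions, not material from the report.

```python
# Hypothetical sketch of a security test oracle: for a linear classifier
# sign(w.x + b), the worst-case score shift under an L-infinity input
# perturbation of size eps is eps * sum(|w_i|), so the verdict is analytic.
# Weights and inputs are made-up demo values.

def score(w, b, x):
    """Decision score of a linear classifier sign(w.x + b)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def robustness_oracle(w, b, x, eps):
    """Return 'pass' iff no eps-bounded perturbation of x can flip the
    predicted class; otherwise 'fail' (a potential vulnerability)."""
    margin = abs(score(w, b, x))
    worst_shift = eps * sum(abs(wi) for wi in w)
    return "pass" if margin > worst_shift else "fail"

w, b, x = [1.0, -2.0], 0.0, [2.0, 0.5]   # margin of |w.x + b| is 1.0
print(robustness_oracle(w, b, x, eps=0.1))  # worst shift 0.3 < 1.0 -> pass
print(robustness_oracle(w, b, x, eps=0.5))  # worst shift 1.5 > 1.0 -> fail
```

A test-generation approach would supply many such (x, eps) cases, and an adequacy criterion (e.g. coverage of the input space) would decide when to stop.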
ETSI TR 104 062 V1.2.1 (2024-07)
Title: Securing Artificial Intelligence; Automated Manipulation of Multimedia Identity Representations
Scope: The present document covers AI-based techniques for automatically manipulating existing, or creating fake, identity data represented in different media formats, such as audio, video and text (deepfakes). It describes the different technical approaches and analyses the threats posed by deepfakes in different attack scenarios. It then provides technical and organizational measures to mitigate these threats and discusses their effectiveness and limitations.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
ETSI TR 104 225 V1.1.1 (2024-04)
Title: Securing Artificial Intelligence TC (SAI); Privacy aspects of AI/ML systems
Scope: The present document identifies the role of privacy as one of the components of the security of AI, and defines measures to protect and preserve privacy in the context of AI, covering both safeguarding models and protecting data, as well as the role of privacy-sensitive data in AI solutions. It documents and addresses the attacks and their associated remediations where applicable, considering the existence of multiple levels of trust affecting the lifecycle of data. The investigated attack mitigations include non-AI-specific remedies (traditional security/privacy redresses), AI/ML-specific remedies, proactive remediations ("left of the boom"), and reactive responses to adversarial activity ("right of the boom").
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ETSI TC SAI
ISO/IEC standards
ISO/IEC 18033-1:2021
Title: Information security — Encryption algorithms — Part 1: General
Scope: This document is general in nature and provides definitions that apply in subsequent parts of the ISO/IEC 18033 series. It introduces the nature of encryption and describes certain general aspects of its use and properties.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ISO/IEC JTC 1/SC 27
ISO/IEC 18033-6:2019
Title: IT Security techniques — Encryption algorithms — Part 6: Homomorphic encryption
Scope: This document specifies the following mechanisms for homomorphic encryption:
— Exponential ElGamal encryption;
— Paillier encryption.
For each mechanism, this document specifies the process for:
— generating parameters and the keys of the involved entities;
— encrypting data;
— decrypting encrypted data; and
— homomorphically operating on encrypted data.
Annex A defines the object identifiers assigned to the mechanisms specified in this document. Annex B provides numerical examples.
Status: Published | Harmonized: No | Category: Security in AI Systems | Issuing body: ISO/IEC JTC 1/SC 27
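The additively homomorphic property that ISO/IEC 18033-6 specifies for Paillier encryption can be demonstrated with toy parameters. The primes below are far too small for real use; this is a sketch of the mathematical property only, not an implementation of the standard.

```python
import random
from math import gcd

# Toy Paillier keypair. Demo-sized primes only; real parameters per
# ISO/IEC 18033-6 use a modulus of thousands of bits.
p, q = 499, 547
n, n2 = p * q, (p * q) ** 2
g = n + 1                                      # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                           # since L(g^lam mod n^2) = lam

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n             # the function L(u) = (u-1)/n
    return (L * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts modulo n.
c1, c2 = encrypt(1234), encrypt(4321)
print(decrypt((c1 * c2) % n2))  # 5555
```

This is the property that lets a third party aggregate encrypted values (e.g. model updates) without ever seeing the plaintexts.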
ISO/IEC 20546:2019
Title: Information technology — Big data — Overview and vocabulary
Scope: This document provides a set of terms and definitions needed to promote improved communication and understanding of this area, giving a terminological foundation for big data-related standards. It also provides a conceptual overview of the field of big data, its relationship to other technical areas and standards efforts, and the concepts ascribed to big data that are not new to big data.
Status: Published | Harmonized: No | Category: Supporting standards (terminology…) | Issuing body: ISO/IEC JTC 1/SC 42
ISO/IEC 24029-2:2023
Title: Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods
Scope: This document provides a methodology for the use of formal methods to assess robustness properties of neural networks. It focuses on how to select, apply and manage formal methods to prove robustness properties.
Status: Published | Harmonized: No | Category: Robustness specifications for AI systems | Issuing body: ISO/IEC JTC 1/SC 42
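One formal method commonly drawn on in such robustness assessments is interval bound propagation, which computes sound output bounds for every input in a perturbation box. The tiny two-layer ReLU network below uses made-up weights for illustration and is not taken from the standard.

```python
# Illustrative interval bound propagation (IBP). Weights are invented
# demo values; a sound lower bound on the class-score difference over the
# whole input box constitutes a formal robustness proof for that box.

def affine_bounds(lo, hi, W, b):
    """Propagate the box [lo, hi] exactly through x -> Wx + b."""
    out_lo, out_hi = [], []
    for row, bi in zip(W, b):
        l = h = bi
        for wij, lj, hj in zip(row, lo, hi):
            l += wij * (lj if wij >= 0 else hj)
            h += wij * (hj if wij >= 0 else lj)
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps bound to bound."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Certify that class 0 beats class 1 for every input in the eps-box.
W1, b1 = [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0]], [0.2]        # output = score(class 0) - score(class 1)
x, eps = [1.0, 0.2], 0.05
lo, hi = [xi - eps for xi in x], [xi + eps for xi in x]
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print("certified robust" if lo[0] > 0 else "not certified")
```

Unlike testing, a positive result here holds for all inputs in the box, which is what distinguishes formal methods from the empirical approaches in the testing-oriented documents above.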
ISO/IEC 42005:2025
Title: Information technology — Artificial intelligence (AI) — AI system impact assessment
Scope: ISO/IEC 42005 provides guidance for organizations conducting AI system impact assessments. These assessments focus on understanding how AI systems, and their foreseeable applications, may affect individuals, groups or society at large. The standard supports transparency, accountability and trust in AI by helping organizations identify, evaluate and document potential impacts throughout the AI system lifecycle.
Status: Published | Harmonized: No | Category: Governance and quality of datasets used to build AI systems | Issuing body: ISO/IEC JTC 1/SC 42
ISO/IEC 5338:2023
Title: Information technology — Artificial intelligence — AI system life cycle processes
Scope: This document defines a set of processes and associated concepts for describing the life cycle of AI systems based on machine learning and heuristic systems. It is based on ISO/IEC/IEEE 15288 and ISO/IEC/IEEE 12207, with modifications and additions of AI-specific processes from ISO/IEC 22989 and ISO/IEC 23053.
This document provides processes that support the definition, control, management, execution and improvement of the AI system in its life cycle stages. These processes can also be used within an organization or a project when developing or acquiring AI systems. When an element of an AI system is traditional software or a traditional system, the software life cycle processes in ISO/IEC/IEEE 12207 and the system life cycle processes in ISO/IEC/IEEE 15288 can be used to implement that element.
Status: Published | Harmonized: No | Category: Governance and quality of datasets used to build AI systems | Issuing body: ISO/IEC JTC 1/SC 42
ISO/IEC TR 24028:2020
Title: Information technology — Artificial intelligence — Overview of trustworthiness in artificial intelligence
Scope: This document surveys topics related to trustworthiness in AI systems, including the following:
— approaches to establish trust in AI systems through transparency, explainability, controllability, etc.;
— engineering pitfalls and typical associated threats and risks to AI systems, along with possible mitigation techniques and methods; and
— approaches to assess and achieve availability, resiliency, reliability, accuracy, safety, security and privacy of AI systems.
The specification of levels of trustworthiness for AI systems is out of the scope of this document.
Status: Published | Harmonized: No | Category: Robustness specifications for AI systems | Issuing body: ISO/IEC JTC 1/SC 42
ISO/IEC TR 24368:2022
Title: Information technology — Artificial intelligence — Overview of ethical and societal concerns
Scope: ISO/IEC TR 24368 is a technical report that provides a high-level overview of the ethical and societal concerns surrounding artificial intelligence (AI). It outlines common sources of these concerns, highlights key principles and processes, and maps relevant international standards that support responsible AI development and use. The report is meant for technologists, regulators, interest groups and society at large, without promoting any specific value system.
As AI systems become increasingly embedded in decision-making processes, the ethical and societal risks they introduce are gaining global attention. Concerns such as bias, lack of transparency, privacy violations and diminishing human autonomy must be addressed proactively. ISO/IEC TR 24368 supports a structured, inclusive approach to understanding and mitigating these challenges. It offers guidance on how to evaluate AI systems ethically, with input from diverse disciplines and stakeholders, helping to build AI that is not only effective, but also fair, transparent and accountable.
Status: Published | Harmonized: No | Category: Transparency and information to the users of AI systems | Issuing body: ISO/IEC JTC 1/SC 42
ISO/IEC TR 5469:2024
Title: Artificial intelligence — Functional safety and AI systems
Scope: This document describes the properties, related risk factors, available methods and processes relating to:
— use of AI inside a safety related function to realize the functionality;
— use of non-AI safety related functions to ensure safety for AI controlled equipment;
— use of AI systems to design and develop safety related functions.
Status: Published | Harmonized: No | Category: Robustness specifications for AI systems | Issuing body: ISO/IEC JTC 1/SC 42
CEN/CENELEC standards
CEN/CLC ISO/IEC/TR 24027:2023
Title: Information technology – Artificial intelligence (AI) – Bias in AI systems and AI aided decision making (ISO/IEC TR 24027:2021)
Scope: This document addresses bias in relation to AI systems, especially with regard to AI-aided decision-making. Measurement techniques and methods for assessing bias are described, with the aim to address and treat bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation and use.
Status: Published | Harmonized: No | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
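One simple measurement from the family of bias metrics such a document describes is the statistical parity difference, i.e. the gap in positive-outcome rates between two groups. The predictions and group labels below are invented for illustration.

```python
# Illustrative bias measurement: statistical parity difference,
# P(yhat = 1 | group A) - P(yhat = 1 | group B). Data is made up.

def statistical_parity_difference(y_pred, group, a="A", b="B"):
    """Difference in positive-prediction rates between groups a and b."""
    def rate(g):
        members = [y for y, gg in zip(y_pred, group) if gg == g]
        return sum(members) / len(members)
    return rate(a) - rate(b)

y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

A value near zero suggests parity on this metric; a large gap flags a bias-related vulnerability worth treating with the mitigation techniques the report covers.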
CEN/CLC ISO/IEC/TR 24029-1:2023
Title: Artificial Intelligence (AI) – Assessment of the robustness of neural networks – Part 1: Overview (ISO/IEC TR 24029-1:2021)
Scope: This document provides background about existing methods to assess the robustness of neural networks.
Status: Published | Harmonized: No | Category: Robustness specifications for AI systems | Issuing body: CEN/CLC/JTC 21
CEN/CLC ISO/IEC/TS 12791:2024
Title: Information technology – Artificial intelligence – Treatment of unwanted bias in classification and regression machine learning tasks (ISO/IEC TS 12791:2024)
Scope: This document describes how to address unwanted bias in AI systems that use machine learning to conduct classification and regression tasks. It provides mitigation techniques that can be applied throughout the AI system life cycle in order to treat unwanted bias. This document is applicable to all types and sizes of organization.
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
CEN/CLC/TR 18115:2024
Title: Data governance and quality for AI within the European context
Scope: This document provides an overview of AI-related standards, with a focus on data and data life cycles, for organizations, agencies, enterprises, developers, universities, researchers, focus groups, users and other stakeholders experiencing this era of digital transformation. It describes the links among the many international standards and regulations published or under development, with the aim of promoting a common language and a greater culture of quality, and providing an information framework.
It addresses the following areas:
– data governance;
– data quality;
– elements of data and dataset properties that provide unbiased evaluation and information for testing.
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 23053:2023
Title: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML) (ISO/IEC 23053:2022)
Scope: This document establishes an Artificial Intelligence (AI) and Machine Learning (ML) framework for describing a generic AI system using ML technology. The framework describes the system components and their functions in the AI ecosystem. This document is applicable to all types and sizes of organizations, including public and private companies, government entities, and not-for-profit organizations, that are implementing or using AI systems.
Status: Published | Harmonized: Yes | Category: Supporting standards (terminology…) | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 23894:2024
Title: Information technology – Artificial intelligence – Guidance on risk management (ISO/IEC 23894:2023)
Scope: This document provides guidance on how organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI) can manage risk specifically related to AI. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions, and describes processes for the effective implementation and integration of AI risk management. The application of this guidance can be customized to any organization and its context.
Status: Published | Harmonized: Yes | Category: Risk management for AI systems | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 5259-1:2025
Title: Artificial intelligence – Data quality for analytics and machine learning (ML) – Part 1: Overview, terminology, and examples (ISO/IEC 5259-1:2024)
Scope: This document provides the means for understanding and associating the individual documents of the ISO/IEC 5259 series and is the foundation for a conceptual understanding of data quality for analytics and machine learning. It also discusses associated technologies and examples (e.g. use cases and usage scenarios).
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 5259-2:2025
Title: Artificial intelligence – Data quality for analytics and machine learning (ML) – Part 2: Data quality measures (ISO/IEC 5259-2:2024)
Scope: This document specifies a data quality model, data quality measures and guidance on reporting data quality in the context of analytics and machine learning (ML). It is applicable to all types of organizations that want to achieve their data quality objectives.
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 5259-3:2025
Title: Artificial intelligence – Data quality for analytics and machine learning (ML) – Part 3: Data quality management requirements and guidelines (ISO/IEC 5259-3:2024)
Scope: This document specifies requirements and provides guidance for establishing, implementing, maintaining and continually improving the quality of data used in the areas of analytics and machine learning. It does not define a detailed process, methods or metrics; rather, it defines the requirements and guidance for a quality management process, along with a reference process and methods that can be tailored to meet the requirements in this document. The requirements and recommendations set out in this document are generic and are intended to be applicable to all organizations, regardless of type, size or nature.
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 5259-4:2025
Title: Artificial intelligence – Data quality for analytics and machine learning (ML) – Part 4: Data quality process framework (ISO/IEC 5259-4:2024)
Scope: This document establishes general common organizational approaches, regardless of the type, size or nature of the applying organization, to ensure data quality for training and evaluation in analytics and machine learning (ML). It includes guidance on the data quality process for:
— supervised ML with regard to the labelling of data used for training ML systems, including common organizational approaches for training data labelling;
— unsupervised ML;
— semi-supervised ML;
— reinforcement learning;
— analytics.
This document is applicable to training and evaluation data that come from different sources, including data acquisition and data composition, data preparation, data labelling, evaluation and data use. It does not define specific services, platforms or tools.
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21
EN ISO/IEC 8183:2024
Title: Information technology – Artificial intelligence – Data life cycle framework (ISO/IEC 8183:2023)
Scope: This document defines the stages and identifies associated actions for data processing throughout the artificial intelligence (AI) system life cycle, including acquisition, creation, development, deployment, maintenance and decommissioning. It does not define specific services, platforms or tools, and is applicable to all organizations, regardless of type, size or nature, that use data in the development and use of AI systems.
Status: Published | Harmonized: Yes | Category: Governance and quality of datasets used to build AI systems | Issuing body: CEN/CLC/JTC 21