
AI Standardization Roadmap 2.0: Path towards Future Standards in Trustworthy AI


Image generated with Midjourney


With landmark regulations such as the EU AI Act and the AI Liability Directive on the horizon, it is hardly news to anyone that the AI community is working on a plethora of additional, more specific norms and standards that will guide the development, deployment, and monitoring of future AI systems. Because much of this work is still underway, the concrete measures that organizations will have to take, and the specific tools they will require, to provide trustworthy AI systems are yet to be settled.


For this purpose, it is helpful to take a closer look at Germany's AI Standardization Roadmap 2.0, to which QuantPi had the honor of contributing. This document outlines a pathway toward the future standardization of AI systems and the methods that ensure their trustworthiness and transparency.


Filiz Elmas, Head of Strategic Development Artificial Intelligence at DIN, Christoph Winterhalter, Chairman of the Board of DIN, Robert Habeck, Vice Chancellor and Federal Minister for Economic Affairs and Climate Action, Prof. Dr. Wolfgang Wahlster, CEA of the German Research Center for Artificial Intelligence (DFKI) and Michael Teigeler, Managing Director of the German Commission for Electrical, Electronic & Information Technologies in DIN and VDE (DKE) (left to right). © Stefan Zeitz


In this post, we discuss what the latest version of this roadmap aims to accomplish, which specific methodologies it proposes to achieve these goals, and what this means for the future of AI standardization and development.


Table of Contents

  • What is the German AI Standardization Roadmap?

  • Roadmap Goals: Facilitate the Adoption of Safe AI Systems

  • Proposed Conformity Assessment Standards

  • Holistic Risk Management for Artificial Intelligence

  • The Importance of Transparent Auditing Tools

  • Enabling Coexistence With Intelligent Machines


What is the German AI Standardization Roadmap?

The German Standardization Roadmap on Artificial Intelligence is a community-driven framework designed to identify and outline the requirements for the future standardization of safe and trustworthy AI technologies. It was developed and co-authored over the course of 2022 with input from more than 570 experts from business, academia, and the public sector.


Although the framework introduces no enforceable regulation, it provides clear outlines for future standardization that will shape the requirements placed on upcoming AI systems. For organizations preparing to comply with future regulation, it is a good indicator of the requirements that standardization experts consider relevant for creating an environment that minimizes the potential risks for providers and users of AI systems.


Goals of standardization roadmaps, as published by the German Commission for Electrical, Electronic & Information Technologies (DKE)


Roadmap Goals: Facilitate the Adoption of Safe AI Systems

The aim of the standardization roadmap is to facilitate the adoption of safe AI systems and minimize the risks posed by the current lack of mechanisms ensuring trustworthiness. This means developing standards that ensure system trustworthiness in terms of explainability, integrity, privacy, transparency, and human-centricity. The authors also recognize that trustworthiness cannot be achieved without addressing legal aspects such as liability and responsibility when errors occur and risks materialize.


According to the authors, "the task of this framework is to formulate a strategic roadmap for AI standardization". For this purpose, the document considers an array of norms and standards that, once developed and applied, will “enable the reliable and safe application of AI technologies and contribute to explainability and traceability.”



More specifically, it aims to establish assessment and certification standards because "the lack of such conformity assessments and certification programs threatens the economic growth and competitiveness of AI as a technology of the future." The authors further add that “statements about the trustworthiness of AI systems are not robust without high-quality testing methods”.


Proposed Conformity Assessment Standards

Based on an initial analysis of the broad landscape of ML applications across industries, the authors outline a total of 116 focal requirements for future standardization across the various use cases. From these, they derive the following six key recommendations for action:

  1. Development, validation, and standardization of a horizontal conformity assessment and certification program for trustworthy AI systems.

  2. Development of data infrastructures and elaboration of data quality standards for the development and validation of AI systems.

  3. Consideration of humans as part of the system at all phases of the AI lifecycle.

  4. Development of specifications for conformity assessments of evolving learning systems in the field of medicine.

  5. Development and deployment of secure and trustworthy AI applications in mobility through best practices and assurance.

  6. Development of overarching data standards and dynamic modeling methods for the efficient and sustainable design of AI systems.

Naturally, the scope of these action recommendations is extremely broad, as they need to encompass as many current and future use cases and industry-specific challenges as possible. This version of the roadmap proposes to address this challenge through horizontal conformity assessments that are standardized across use cases. These assessments are to be developed, validated, and implemented within operational risk management practices, an approach that is well aligned with the European AI Act and highly relevant from a societal perspective.


As AI-enabled products and services become relevant in ever more spheres of life, users and affected individuals will need to rely on conformity assessment outcomes that are comparable across domains and across the characteristics of specific systems.


For this reason, the document proposes specific follow-up actions to develop standards for horizontal conformity assessments of AI systems (action recommendation 1). These include, among others:


  • Development of an accreditable AI certification procedure, which integrates into the existing certification infrastructure.

  • Research projects in fields of testing with high quality requirements and exceptionally high testing effort (e.g., explainability).


The authors argue that failing to implement horizontal conformity assessments might jeopardize the economic impact of artificial intelligence as users would be unable to verify and compare system reliability across different AI applications.


Holistic Risk Management for Artificial Intelligence

Of similar importance is the operationalization of risk management through holistic assessment frameworks. Risk management practices need to be applied across the entire lifecycle of an AI system and must encompass multiple risk dimensions. The standardization roadmap outlines the following stages of the AI lifecycle:

  • Inception: Initial phase of the development process of an AI system in which the essential requirements and design parameters for the project are defined.

  • Design and development: The design phase of the AI system in which a functional version is made available for the subsequent phase of verification and validation.

  • Verification and validation: Testing of the AI system with regard to internal risk management guidelines, regulatory requirements, and the fulfillment of project goals.

  • Deployment: The AI system is deployed within its inference environment. This phase includes further testing to ensure that the system works satisfactorily in this environment.

  • Operation and monitoring: The system is put into operation and monitoring provides continuous visibility into model behavior and how predictions are made.

  • Continuous validation: The AI system may adapt to changing circumstances in its operating environment and therefore needs to be validated at set intervals (a minimal sketch of such a check follows this list).

  • Re-evaluation: In longer deployment phases, the AI system should be re-evaluated with regard to changed goals or system requirements.

  • Retirement: The AI system is put out of service.
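
To make the operation, monitoring, and continuous validation stages more concrete, here is a minimal Python sketch of a scheduled validation check that re-scores a deployed model on recent production data and flags when performance drops below an agreed threshold. The metric, the threshold, and the fetch_recent_batch helper are illustrative assumptions on our part, not requirements taken from the roadmap.

```python
from sklearn.metrics import accuracy_score

# Illustrative assumption: the minimum acceptable accuracy agreed during inception.
ACCURACY_THRESHOLD = 0.90

def continuous_validation(model, fetch_recent_batch) -> bool:
    """Check whether a deployed model still meets its performance target.

    `model` is any fitted classifier with a predict() method;
    `fetch_recent_batch` is a hypothetical helper returning labelled
    production data collected since the last validation run.
    """
    X_recent, y_recent = fetch_recent_batch()
    accuracy = accuracy_score(y_recent, model.predict(X_recent))
    if accuracy < ACCURACY_THRESHOLD:
        # In practice this would trigger re-evaluation or retraining,
        # i.e. a transition back to earlier lifecycle stages.
        print(f"Validation failed: accuracy {accuracy:.2f} is below {ACCURACY_THRESHOLD}")
        return False
    print(f"Validation passed: accuracy {accuracy:.2f}")
    return True
```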


Mapping of risk dimensions to stages of the AI lifecycle, as described in the second version of the German AI Standardization Roadmap and following ISO/IEC DIS 22989.


Six AI Risk Dimensions Highlighted in the Standardization Roadmap:

  • Data quality and data management: This includes, for example, correct data annotation and representative, relevant data sources, which form an important foundation for the intended functionality of an AI system. Closely related are risks around data management, which are especially relevant to ensuring fairness and achieving satisfactory performance.

  • Bias, fairness, and avoidance of undesirable discrimination: AI systems can treat individuals or groups in an unjustified and unequal manner. The non-discrimination of an AI system can, for example, be quantified by measuring bias in the training data (see the sketch following this list).

  • Autonomy and control: If providers are unable to intervene in an AI system’s functioning, the primacy of human action can be undermined. At the same time, users and affected stakeholders should be adequately informed and empowered to interact with an AI system.

  • Transparency: Providing insufficient or inadequate information about an AI system can lead to misinterpretation of the system’s output. In some use cases, it might be necessary to explain individual decisions of AI systems to affected stakeholders.

  • Performance, capability, reliability, robustness, and completeness: The measurable performance of an AI system might change over time and deviate from its intended behavior. Nonetheless, users must be able to rely on the system maintaining its level of performance under a variety of circumstances.

  • Safety, security, and privacy: AI systems can be exposed to intentional, unauthorized acts designed to harm or damage the system. Under defined conditions, they should not lead to a state in which human life, health, property, or the environment is endangered.
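
As a minimal illustration of how bias in training data can be quantified, the following Python sketch computes the gap in positive-label rates across groups defined by a protected attribute. The column names, the toy data, and the choice of metric are our own assumptions; a real assessment would use the metrics and thresholds defined by the applicable standard and use case.

```python
import pandas as pd

def positive_rate_gap(df: pd.DataFrame, label_col: str, group_col: str) -> float:
    """Difference between the highest and lowest positive-label rate across groups.

    A value of 0.0 means every group receives positive labels at the same rate;
    larger values point to a potential bias in the training data.
    """
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical training data: 'approved' is the target, 'gender' a protected attribute.
train = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "m", "f", "m", "f"],
    "approved": [  1,   0,   1,   1,   1,   0,   1,   0],
})

print(f"Positive-rate gap: {positive_rate_gap(train, 'approved', 'gender'):.2f}")
```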

Obviously, addressing all of the lifecycle stages and risk dimensions outlined above is a huge undertaking. While there are significant efficiency gains to be made through the introduction of (automated) testing tools, there are currently no holistic, pre-configured, off-the-shelf solutions available. For this reason, it is important for organizations to carefully consider the limitations and calibration requirements of existing fragmented auditing tools, as they work to implement holistic risk management strategies.


Overall, the German AI Standardization Roadmap is set to have a strong impact on the testing and certification of AI systems used by enterprises.


The Importance of Transparent Auditing Tools

QuantPi Co-founder and Chief Scientist Dr. Antoine Gautier participated in its creation as part of the testing and certification workgroup, chaired by Dr. Maximilian Poretschkin of the Fraunhofer Institute for Intelligent Analysis and Information Systems, and Daniel Loevenich of the German Federal Office for Information Security.


With regard to the concrete requirements for future auditing tools, the workgroup recommended that these “can be derived from the properties of the effectiveness criteria. Testing tools should provide all necessary information to interpret results appropriately. Such information should cover at least the following dimensions:

  • Scope and depth: what specific part of the AI system is being tested? What are the inputs and outputs of this part? What and how much data is used to test the system?

  • Function mapping: what functions are supported by the tool? What is the desired outcome of the testing? What is an undesired result of the test?

  • Functionality of the testing tool: the technical method used to test the AI system should be described. Limitations of the test method used should also be explicitly presented, as well as information on the stability and reproducibility of the test results."
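
To illustrate how a testing tool could expose this information alongside its results, here is a hypothetical Python sketch of a report structure covering the three quoted dimensions. The field names and layout are our own assumptions rather than anything prescribed by the roadmap.

```python
from dataclasses import dataclass, field

@dataclass
class TestToolReport:
    """Hypothetical metadata a testing tool could publish with every result,
    covering the transparency dimensions named in the roadmap."""

    # Scope and depth: which part of the system is tested, and with what data.
    component_under_test: str       # e.g. "credit-scoring model, post-processing excluded"
    inputs_and_outputs: str         # e.g. "tabular applicant features -> approval score"
    test_data_description: str      # source and size of the data used for testing

    # Function mapping: supported functions and (un)desired outcomes.
    supported_functions: list[str]  # e.g. ["robustness", "fairness"]
    desired_outcome: str            # what a passing result means
    undesired_outcome: str          # what a failing result means

    # Functionality of the testing tool: method, limitations, stability.
    test_method: str                # technical description of the procedure
    known_limitations: list[str] = field(default_factory=list)
    reproducibility_notes: str = "" # e.g. random seeds, variance across reruns
```

However such a record is ultimately standardized, the point is that every test result ships with enough context for its scope, purpose, method, and limitations to be understood and reproduced.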

Applying audit tools in line with the outlined transparency requirements could constitute an important safeguard against the misapplication of solutions intended to bolster the trustworthiness of AI systems.


Notably, these recommendations for the transparency of testing tools are not made in a vacuum. Existing and future standards on trustworthiness in artificial intelligence (e.g., ISO/IEC TR 24028) or AI risk management (e.g., ISO/IEC 23894) serve as a basis for this work.


Nonetheless, methods to concretely assess the objective quality of auditing tools still need to be developed. Providing transparent tools for conformity assessments of AI systems is a major objective of QuantPi’s R&D team.


Enabling Coexistence With Intelligent Machines

Closing the gap between (future) regulatory requirements and the capabilities of the tools currently available to meet them will require enterprises to implement innovative, purpose-built tooling designed to address both today's challenges and the shifts in the regulatory landscape still to come.


Still, regulatory compliance is not an end in itself. AI systems developed and deployed according to the requirements outlined in the Standardization Roadmap have a higher chance of achieving their full transformative potential. In this way, standardization enables a responsible and economically successful AI transformation.


QuantPi is grateful for the numerous substantive and purpose-driven discussions in the course of working on this new version of the roadmap. We will continue to engage with the standardization community to push for our vision of a coexistence of humans and intelligent machines that is both safe and worth living in.

