
Operationalization of AI Tests



We co-authored the second version of the German AI Standardization Roadmap, which defines requirements for AI testing tools.




The Challenge:


With landmark regulations such as the EU AI Act and the Liability Directive on the horizon, the AI community is working on a plethora of additional, specific norms and standards that will guide the development, deployment, and monitoring of future AI systems. The concrete measures organizations will have to take, and the specific tools they will need in order to provide trustworthy AI systems, are therefore yet to be settled.


The Project:


The German Standardization Roadmap on Artificial Intelligence is a community-driven framework that identifies and outlines the requirements for future standardization of safe and trustworthy AI technologies. Although the framework introduces no enforceable regulation, it charts a clear course for the standards that will shape requirements for AI systems to come.


For organizations preparing to comply with future regulation, it’s a good indicator of the requirements that standardization experts consider relevant for creating an environment that minimizes potential risks for providers and users of AI systems.


QuantPi’s Contribution:


QuantPi Co-founder and Chief Scientist Dr. Antoine Gautier participated in its creation as part of the testing and certification workgroup, chaired by Dr. Maximilian Poretschkin of the Fraunhofer Institute for Intelligent Analysis and Information Systems, and Daniel Loevenich of the German Federal Office for Information Security.


With regard to the concrete requirements for future auditing tools, the workgroup recommended that these “can be derived from the properties of the effectiveness criteria. Testing tools should provide all necessary information to interpret results appropriately. Such information should cover at least the following dimensions:


  • Scope and depth: what specific part of the AI system is being tested? What are the inputs and outputs of this part? What and how much data is used to test the system?

  • Function mapping: what functions are supported by the tool? What is the desired outcome of the testing? What is an undesired result of the test?

  • Functionality of the testing tool: the technical method used to test the AI system should be described. Limitations of the test method used should also be explicitly presented, as well as information on the stability and reproducibility of the test results.”


Applying audit tools in line with these transparency requirements could provide an important safeguard against the misapplication of solutions intended to bolster the trustworthiness of AI systems.
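To make the three dimensions concrete, the sketch below models them as a structured test report that a tool could emit alongside its results. This is a minimal, hypothetical illustration in Python; the class and field names are our own and are not taken from the roadmap, from any standard, or from a specific testing tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

# Hypothetical schema for the workgroup's three transparency dimensions.
# All names below are illustrative, not drawn from the roadmap or any tool.

@dataclass
class TestScope:
    """Scope and depth: what part of the system is tested, and with what data."""
    component: str        # the specific part of the AI system under test
    inputs: List[str]     # inputs consumed by this part
    outputs: List[str]    # outputs produced by this part
    dataset: str          # data used to test the system
    sample_size: int      # how much data the test draws on

@dataclass
class FunctionMapping:
    """Function mapping: supported functions and desired/undesired outcomes."""
    supported_functions: List[str]  # e.g. ["fairness", "robustness"]
    desired_outcome: str            # what a passing result means
    undesired_outcome: str          # what a failing result means

@dataclass
class MethodDescription:
    """Functionality: the technical method, its limits, and result stability."""
    method: str                      # description of the test method used
    limitations: List[str]           # explicit limits of the method
    reproducible: bool               # whether repeated runs agree
    random_seed: Optional[int] = None  # seed recorded for reproducibility

@dataclass
class TestReport:
    scope: TestScope
    function_mapping: FunctionMapping
    method: MethodDescription
    results: Dict[str, float] = field(default_factory=dict)

# Example: a fairness test report for a hypothetical classifier.
report = TestReport(
    scope=TestScope(
        component="loan-approval classifier",
        inputs=["applicant features"],
        outputs=["approval score"],
        dataset="held-out validation set",
        sample_size=10_000,
    ),
    function_mapping=FunctionMapping(
        supported_functions=["fairness"],
        desired_outcome="group parity gap below agreed threshold",
        undesired_outcome="group parity gap above agreed threshold",
    ),
    method=MethodDescription(
        method="statistical parity difference across protected groups",
        limitations=["detects group-level disparities only"],
        reproducible=True,
        random_seed=42,
    ),
)
```

The point of such a record is that the metadata travels with the test result: anyone interpreting the outcome can see what was tested, what a pass or fail means, and where the method stops being informative.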


Notably, these recommendations on the transparency of testing tools are not made in a vacuum. Existing and future standards on trustworthiness in artificial intelligence (e.g., ISO/IEC TR 24028) and on AI risk management (e.g., ISO/IEC 23894) serve as a basis for this work.


Nonetheless, methods to concretely assess the objective quality of auditing tools still need to be developed. Providing transparent tools for conformity assessments of AI systems is a major objective of QuantPi’s R&D team.


