
Documenting Explainability Methods for AI


We co-authored DIN SPEC 92001-3, which provides a domain-independent guide to explainability methods across all stages of the AI system life cycle.



The Challenge


The discipline of Artificial Intelligence (AI) aims to develop systems capable of performing tasks that would otherwise require human intelligence. Although the principles governing these systems’ operation remain the subject of ongoing research, their increasingly high performance has led to their proliferation across many application domains.


At the same time, AI systems can be opaque for several reasons: a stakeholder may lack the technical training or ability to understand them, adequate documentation may be missing, implementation elements or features may be intentionally obfuscated by an AI producer or AI provider, or the system may simply be opaque because of its own complexity or the complexity of its data environment.



QuantPi’s Contribution:


Co-authored by QuantPi, DIN SPEC 92001-3 provides a domain-independent guide to appropriate approaches and methods for promoting explainability throughout all stages of the AI system life cycle. It defines ‘opacity’, describes the sources and effects of opacity in contemporary AI, and considers how explanations can and should be employed to mitigate these effects for different stakeholders at different stages of the AI system life cycle.


QuantPi contributed a passage on the transparency requirements of explainability methods. In addition to the documentation provided for the AI system itself, the benefit of any employed explanation method(s) should be supported by dedicated explanation documentation. Among other things, this documentation should specify:


  1. The applicability (e.g. data type, system type) and underlying assumptions (e.g. feature independence, differentiability) of the explanation method.

  2. Relevant verifiable properties, approximation errors, and/or an explicit description of the algorithm (e.g. averaging of predictions), in order to justify that the computed explanations mitigate opacity as intended.

  3. If the explanation method contains non-deterministic components (e.g. sampling of feature subsets, sampling from a baseline distribution), a stability analysis of the algorithm’s output (e.g. with summary statistics, confidence intervals); see the sketch after this list.
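
To illustrate the third point, the following is a minimal sketch of such a stability analysis, not a prescription from DIN SPEC 92001-3. It assumes a toy prediction function `model`, an illustrative sampling-based attribution method `sampling_attribution`, and an all-zeros baseline; the attribution is re-run with independent random seeds and the spread of the per-feature estimates is summarized with a mean, standard deviation, and an approximate 95% confidence interval.

```python
# Hypothetical example of a stability analysis for a non-deterministic,
# sampling-based explanation method. All names and the attribution scheme
# are illustrative assumptions, not defined by the standard.
import numpy as np

def model(X):
    # Toy model standing in for an arbitrary AI system's prediction function.
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]

def sampling_attribution(x, baseline, n_samples, rng):
    """Monte Carlo estimate of each feature's marginal contribution:
    draw random feature subsets (coalitions) and average the change in the
    prediction when the feature of interest is switched from its baseline
    value to its observed value."""
    d = x.shape[0]
    contrib = np.zeros(d)
    for _ in range(n_samples):
        mask = rng.integers(0, 2, size=d).astype(bool)  # random coalition
        for j in range(d):
            with_j = np.where(mask | (np.arange(d) == j), x, baseline)
            without_j = np.where(mask & (np.arange(d) != j), x, baseline)
            contrib[j] += model(with_j[None, :])[0] - model(without_j[None, :])[0]
    return contrib / n_samples

# Stability analysis: repeat the attribution with independent seeds and
# summarize the spread of the resulting feature attributions.
x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)
runs = np.stack([
    sampling_attribution(x, baseline, n_samples=200,
                         rng=np.random.default_rng(seed))
    for seed in range(30)
])

mean = runs.mean(axis=0)
std = runs.std(axis=0, ddof=1)
ci_half = 1.96 * std / np.sqrt(runs.shape[0])  # normal-approximation 95% CI

for j in range(x.shape[0]):
    print(f"feature {j}: {mean[j]:+.3f} ± {ci_half[j]:.3f} (std {std[j]:.3f})")
```

Reporting the per-feature mean together with a dispersion measure and confidence interval, as above, is one way to document how stable the computed explanations are under the method’s own randomness.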



More Information:
