
Model-Agnostic Assessments of ML Models



We developed an efficient and extensible framework that computes tests by perturbing and comparing the inputs and outputs of ML models.
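To make the perturb-and-compare idea concrete, the following is a minimal sketch, not QuantPi's implementation: it treats the model as an opaque callable, perturbs tabular inputs with small Gaussian noise, and reports how often the prediction stays the same. The function name, noise model, and parameters are illustrative assumptions.

```python
import numpy as np

def perturbation_consistency(model_fn, inputs, noise_scale=0.01, n_trials=10, seed=0):
    """Fraction of inputs whose prediction is unchanged under small Gaussian noise.

    `model_fn` is any black-box callable mapping a batch of feature vectors to
    predicted labels; the test never inspects the model's internals.
    """
    rng = np.random.default_rng(seed)
    inputs = np.asarray(inputs, dtype=float)
    original = np.asarray(model_fn(inputs))
    stable = np.ones(len(inputs), dtype=bool)
    for _ in range(n_trials):
        # Perturb the inputs and compare the black-box outputs to the originals.
        perturbed = inputs + rng.normal(scale=noise_scale, size=inputs.shape)
        stable &= np.asarray(model_fn(perturbed)) == original
    return stable.mean()
```

Because the test only calls the model and compares outputs, the same sketch would apply equally to a scikit-learn classifier's `predict`, a rule-based system, or a wrapped API endpoint.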



The Challenge:


Regulators and testing experts are increasingly focused on addressing the potential impacts of ML models, yet given the immense speed of development in the ML space, these efforts can resemble a game of cat and mouse. Because ML models are used in such a broad range of settings, there is no one-size-fits-all approach to assessing them; the challenge was therefore to develop a scalable assessment framework that could keep pace with innovation.


QuantPi’s Contribution:


Through this project, QuantPi developed a fully automated, scalable, extensible, and model-agnostic test and validation engine for trustworthy AI. It is built on a unique proprietary mathematical framework designed to address, in a systematic and rigorous way, the following major challenges in making AI systems safe and trustworthy:


  • Automation: Given basic technical information about the ML task and the types of input and output data, an extensive list of test functions is automatically generated for the model (see the sketch following this list).

  • Alignment: Filtering and prioritising the generated tests requires further information about the context of the use-case. Configuring the relevance of technical tests requires the involvement of different stakeholders, potentially with less technical understanding.

  • Scalability: The algorithms that perform the generated tests are fundamentally model-agnostic: they only need to perturb inputs and compare outputs. In particular, they can be applied to any black box, from a simple rule-based classifier to a sophisticated ensemble of deep neural networks.

  • Extensibility: Innovation is part of our DNA and we know that some use-cases require special care. The developed technology is therefore built so that it can be seamlessly extended with third-party tests.
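As an illustration of how automatic test generation and third-party extensibility might fit together, the interfaces below are hypothetical and not QuantPi's API: a small registry maps task metadata, such as the task type and input modality, to the applicable black-box tests, and accepts externally contributed tests through the same entry point. It reuses the perturbation test sketched above.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestSpec:
    name: str
    applies_to: set           # e.g. {"classification"} or {"classification", "regression"}
    modalities: set           # e.g. {"tabular", "image"}
    run: Callable[..., Any]   # black-box test: takes (model_fn, data), returns a score

class TestRegistry:
    def __init__(self):
        self._tests: list[TestSpec] = []

    def register(self, spec: TestSpec) -> None:
        """Built-in and third-party tests are registered through the same entry point."""
        self._tests.append(spec)

    def generate(self, task: str, modality: str) -> list[TestSpec]:
        """Select every registered test applicable to the declared task and data type."""
        return [t for t in self._tests
                if task in t.applies_to and modality in t.modalities]

# A (hypothetical) third-party test is added exactly like a built-in one.
registry = TestRegistry()
registry.register(TestSpec(
    name="label_flip_under_noise",
    applies_to={"classification"},
    modalities={"tabular"},
    run=perturbation_consistency,  # the perturbation test sketched earlier
))
suite = registry.generate(task="classification", modality="tabular")
```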


