Listed in:
QuantPi’s AI Trust Platform is listed in the OECD’s catalogue of tools and metrics designed to help AI actors develop and use trustworthy AI systems.
The Challenge:
Explainability, transparency and the avoidance of bias are among the most critical challenges for AI practitioners, as the complexity of modern AI systems and algorithms makes them hard to attain. Tools and metrics that help AI actors build and deploy trustworthy AI systems do exist, but they are often hard to find and absent from ongoing AI policy discussions.
The Project:
The OECD.AI Policy Observatory catalogue of tools and metrics for trustworthy AI makes it easier to find tools and metrics by providing a one-stop-shop for helpful approaches, mechanisms, and practices for trustworthy AI. These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure, and safe.
The catalogue is a platform where AI practitioners from all over the world can share and compare tools and build upon each other’s efforts to create global best practices and speed up the process of implementing the OECD AI Principles.
QuantPi's Contribution:
QuantPi’s AI Trust Platform is listed on the OECD.AI catalogue of tools and metrics for trustworthy AI. We support this mission and will continue to contribute tools and best practices to the catalogue.
More Information: