Uncovering bias in AI recruiting

A practical approach to assessing potentially discriminatory outcomes in AI recruiting: QuantPi, The Stepstone Group, and TÜV AI.Lab demonstrate how AI-supported recruiting systems can be examined to determine whether they systematically disadvantage applicants based on factors such as ethnic origin. The approach rests on an in-depth regulatory analysis, which in turn informs the statistical testing methodologies applied.

What happens when artificial intelligence decides who gets invited to a job interview and who doesn’t? Automated recommendation systems in recruiting promise efficiency. At the same time, they must comply with Germany’s General Equal Treatment Act (AGG) and the provisions of the EU AI Act - for example, by avoiding discrimination against applicants based on their ethnic origin.

A new white paper by QuantPi, The Stepstone Group, and TÜV AI.Lab presents a practical testing approach for uncovering whether an AI-supported recommender system discriminates. In a joint effort, the project partners demonstrate how legal, technical, and ethical requirements - in this case, with regard to non-discrimination - can be translated into concrete testing procedures, giving companies a blueprint for transparently analyzing AI applications in recruiting and reliably assessing their risks.
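The white paper's own testing procedures are not reproduced here, but the kind of statistical check involved can be illustrated with a minimal sketch: comparing interview-invitation rates between two applicant groups via the disparate-impact ("four-fifths") ratio and a two-proportion z-test. Function names, thresholds, and the example counts below are our own illustration, not the project partners' methodology.

```python
from math import sqrt

def disparate_impact_ratio(invited_a: int, total_a: int,
                           invited_b: int, total_b: int) -> float:
    """Ratio of the lower invitation rate to the higher one.
    Under the commonly cited 'four-fifths rule', values below 0.8
    are often treated as a signal of potential adverse impact."""
    rate_a = invited_a / total_a
    rate_b = invited_b / total_b
    low, high = sorted((rate_a, rate_b))
    return low / high

def two_proportion_z(invited_a: int, total_a: int,
                     invited_b: int, total_b: int) -> float:
    """z-statistic for the difference in invitation rates between
    two groups, using the pooled-proportion standard error."""
    p_pool = (invited_a + invited_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (invited_a / total_a - invited_b / total_b) / se

# Hypothetical counts: group A received 30/100 invitations, group B 45/100.
ratio = disparate_impact_ratio(30, 100, 45, 100)
z = two_proportion_z(30, 100, 45, 100)
print(f"disparate impact ratio: {ratio:.3f}")  # below 0.8 → flagged
print(f"z-statistic: {z:.2f}")                 # |z| > 1.96 → significant at 5%
```

A real audit along the lines the partners describe would go well beyond this sketch - defining the protected attributes and comparison groups from the regulatory analysis first, then choosing metrics and significance procedures accordingly.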

Fairness testing of AI is a technically and legally complex area, particularly when using sensitive data to debias AI models. This is a topic I have worked on and thought about quite a bit before. Hence, it is great to see that a major company in this space takes its compliance obligations under the AI Act very seriously, proactively engages with the AI Act and GDPR framework, and comes up with concrete steps to actually implement non-discrimination in practice, while acknowledging frictions in the legal framework that academics have pointed out for years.
Prof. Dr. Philipp Hacker
Academic Advisor to the project
European New School of Digital Studies at European University Viadrina
Access the Technical Whitepaper

About QuantPi

QuantPi is pioneering the technologies of trust for the adoption of AI. Their end-to-end platform rigorously tests AI systems for unintended bias, robustness, compliance, and other critical performance metrics. This offers AI lifecycle stakeholders a shared understanding of their AI systems—whether built in-house or third-party procured. At the heart of this platform is a powerful proprietary testing engine that uniformly assesses all types of AI (LLMs, computer vision, machine learning, agentic AI, etc.). This delivers actionable insights and operationalizes internal AI policies and regulatory frameworks, such as the European AI Act.

Funded by the European Union and emerging from one of the world’s leading information security research centers (CISPA), QuantPi is shaping a future where intelligent machines are deployed confidently and responsibly. Trusted by some of the world’s largest enterprises and institutions, QuantPi remains at the forefront of advancing trustworthy AI globally.