Uncovering bias in AI recruiting
A practical approach to assessing potentially discriminatory outcomes in AI recruiting: QuantPi, The Stepstone Group, and TÜV AI.Lab demonstrate how AI-supported recruiting systems can be examined to determine whether they systematically disadvantage applicants based on factors such as their ethnic origin. This is achieved through an in-depth regulatory analysis, which in turn informs the statistical testing methodologies applied.
What happens when artificial intelligence decides who gets invited to a job interview and who doesn’t? Automated recommendation systems in recruiting promise efficiency. At the same time, they have to comply with Germany’s General Equal Treatment Act (AGG) and the provisions of the EU AI Act - for example, by not discriminating against applicants based on their ethnic origin.
A new white paper by QuantPi, The Stepstone Group, and TÜV AI.Lab now presents a practical testing approach to uncover whether an AI-supported recommender system discriminates. In a joint effort, the project partners demonstrate how legal, technical, and ethical requirements - in this case, with regard to non-discrimination - can be translated into concrete testing procedures, thereby providing companies with a blueprint for transparently analyzing AI applications in recruiting and reliably assessing their risks.
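To make the idea of such a testing procedure more concrete: the white paper's exact test statistics are not reproduced here, but group-level disparity checks of this kind typically compare recommendation rates across applicant groups and test whether observed gaps are statistically meaningful. Below is a minimal, purely illustrative sketch of such a check in Python, assuming applicant-level data with a binary recommendation outcome and a self-reported group attribute; all column names, group labels, and the 0.8 threshold (a "four-fifths rule" style heuristic) are assumptions for illustration, not the partners' actual methodology.

```python
# Hypothetical sketch of a group-level disparity check for a recruiting
# recommender system. It compares recommendation rates between applicant
# groups and flags large gaps; names and thresholds are illustrative only.
import pandas as pd
from scipy.stats import chi2_contingency


def disparity_check(df: pd.DataFrame, group_col: str, outcome_col: str,
                    reference_group: str, ratio_threshold: float = 0.8):
    """Compare selection rates per group against a reference group.

    Returns, per non-reference group, the selection rate and its ratio to the
    reference group's rate, plus a chi-squared p-value for independence of
    group membership and recommendation outcome.
    """
    # Mean of a 0/1 outcome column is the selection (recommendation) rate.
    rates = df.groupby(group_col)[outcome_col].mean()

    # Chi-squared test of independence on the group x outcome contingency table.
    contingency = pd.crosstab(df[group_col], df[outcome_col])
    _, p_value, _, _ = chi2_contingency(contingency)

    results = {}
    for group, rate in rates.items():
        if group == reference_group:
            continue
        ratio = rate / rates[reference_group]
        results[group] = {
            "selection_rate": rate,
            "ratio_vs_reference": ratio,
            "below_threshold": ratio < ratio_threshold,
        }
    return results, p_value


# Example usage with made-up data:
# df = pd.DataFrame({"group": ["A", "A", "B", "B"], "recommended": [1, 1, 0, 1]})
# results, p = disparity_check(df, "group", "recommended", reference_group="A")
```

A real assessment would go well beyond this sketch - for example, accounting for confounders such as qualification differences, sample sizes, and the legal question of which group comparisons are relevant - which is precisely the gap between a quick statistical heuristic and the regulatory analysis the white paper describes.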
Fairness testing of AI is a technically and legally complex area, particularly when sensitive data is used to debias AI models. It is a topic I have worked on and thought about quite a bit before. Hence, it is great to see that a major company in this space takes its compliance obligations under the AI Act seriously, proactively engages with the AI Act and GDPR frameworks, and proposes concrete steps to actually implement non-discrimination in practice, while acknowledging the frictions in the legal framework that academics have pointed out for years.
