November 27th | 4:00-4:45pm CET | Online

Compliance & Beyond: How to Build AI Hiring Systems People Trust

AI is reshaping hiring, but the spotlight on these systems is brighter than ever. Compliance alone won’t guarantee fairness or protect organisations from reputational, legal, or operational risk. Responsible deployment requires situational awareness, transparent reasoning, and a trust layer that goes deeper than regulatory checklists.

Join us for a 45-minute session exploring the current paradoxes in AI-driven hiring, from global model assumptions that fail locally, to the subtle ways untested “attributes” can distort outcomes. Together with guest speaker Sarah Mathews, Group Responsible AI Manager at The Adecco Group, we will unpack why assurance must extend beyond compliance, and what practical steps teams can take today to build hiring systems people genuinely trust.

What we’ll cover

  • Why compliance is necessary, but not sufficient
  • The status quo and present paradoxes in the hiring and AI ecosystem
  • Realistic examples of how it can go wrong
  • The “trust layer” every hiring AI system needs
  • Practical methods for ensuring transparency, accountability, and fairness

Who Should Attend:

  • HR, Talent Acquisition, or People Operations
  • Compliance, Legal, or Risk Management
  • Responsible AI, Data Science, or Technical Leadership
  • AI Governance and Ethics

Register Now

Speaker:

Sarah Mathews
Group Responsible AI Manager, The Adecco Group
Leading global efforts in AI governance, responsible deployment, and internal readiness for safe and equitable AI systems.

Moderator:

Carlotta Clauter
Product Marketing Manager, QuantPi
Working at the intersection of AI assurance, compliance, and enterprise-grade transparency.

About QuantPi

QuantPi is pioneering the technologies of trust for the adoption of AI. Its end-to-end platform rigorously tests AI systems for unintended bias, robustness, compliance, and other critical performance metrics, giving AI lifecycle stakeholders a shared understanding of their AI systems—whether built in-house or procured from third parties. At the heart of the platform is a powerful proprietary testing engine that uniformly assesses all types of AI (LLMs, computer vision, machine learning, agentic AI, etc.). This delivers actionable insights and operationalizes internal AI policies and regulatory frameworks, such as the European AI Act.

Funded by the European Union and emerging from one of the world’s leading information security research centers (CISPA), QuantPi is shaping a future where intelligent machines are deployed confidently and responsibly. Trusted by some of the world’s largest enterprises and institutions, QuantPi remains at the forefront of advancing trustworthy AI globally.