Detecting bias in AI hiring systems

FINDHR – Fairness and Intersectional Non-Discrimination in Human Recommendation – is a Horizon Europe project coordinated at UPF with Eticas as a partner. It is releasing new tools, methods, and concrete recommendations for tackling discrimination in recruitment caused by AI hiring systems.

An increasing number of companies are using AI-assisted recruiting systems to preselect candidates or rank applicants. While these tools can save time, they also carry significant risks of discrimination.

Proven risks of discrimination in AI-assisted hiring

AI-assisted hiring systems promise time savings for HR professionals. However, real-world experiences show that these systems can reinforce existing patterns of discrimination—or create new ones—often without the awareness of those using them. The FINDHR project focuses especially on intersectional discrimination, where combinations of personal characteristics (such as gender, age, religion, origin, or sexual orientation) generate new or multiplied forms of discrimination.

The research demonstrates that discrimination in automated hiring is not a theoretical concern but a lived reality. Interviews with affected individuals in seven European countries—Albania, Bulgaria, Germany, Greece, Italy, the Netherlands, and Serbia—revealed feelings of powerlessness and frustration, with applicants often receiving only automated rejections outside working hours, despite strong qualifications and repeated applications.

Solutions and methods to counter algorithmic discrimination

How can organizations reduce discrimination risks in AI hiring systems?

“Tackling algorithmic discrimination requires action across software development, HR, and policy. It’s not just a technical issue—social, cultural, and political contexts must also be considered.”
Carlos Castillo, ICREA Professor

The following freely available resources are provided by FINDHR:

  1. Toolkits with practical recommendations for software developers, HR professionals, and policymakers.

  2. Guidelines and methods for inclusive software design and for the responsible use, auditing, and monitoring of algorithmic recruiting systems.

  3. Technical tools, software, and datasets, including synthetic CVs, to reduce the risk of algorithmic discrimination in AI hiring systems.

  4. Training programs for professionals to raise awareness of the risks of algorithmic discrimination in hiring.

  5. A manual for jobseekers with concrete suggestions and steps for anyone applying for a job in which an AI tool is used to (pre-)screen applications.
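To illustrate the kind of check that auditing and monitoring of a recruiting system involves (this is a hypothetical sketch, not FINDHR's actual software), the snippet below computes per-group shortlisting rates from a screening tool's outcomes and the ratio between the lowest and highest rate, a common first signal of disparate impact:

```python
# Minimal demographic-parity audit sketch for an AI screening tool.
# Hypothetical data: each applicant has a protected-group label and a
# binary "shortlisted" outcome produced by the system.
from collections import defaultdict

def selection_rates(outcomes):
    """Return the shortlisting rate for each group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to highest group selection rate.
    Values below 0.8 violate the common 'four-fifths' rule of thumb."""
    return min(rates.values()) / max(rates.values())

outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(outcomes)
print(rates)                    # group_a: 0.75, group_b: 0.25
print(disparate_impact(rates))  # ≈ 0.33, well below 0.8 → flags a concern
```

A single aggregate ratio like this can miss intersectional effects, which is precisely FINDHR's focus; a fuller audit would also examine rates for combinations of characteristics rather than one attribute at a time.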

The FINDHR project represents a comprehensive, interdisciplinary effort to make AI hiring systems fairer, more accountable, and more transparent. For more information, please visit the website: www.findhr.eu
