
Mitigating Diversity Biases of AI in the Labor Market

Periodic Reporting for period 1 - BIAS (Mitigating Diversity Biases of AI in the Labor Market)

Reporting period: 2022-11-01 to 2023-10-31

A recent industry study from 2023 by the consultancy Sage reported that 47% of Human Resource Management (HRM) professionals use AI, up from 24% in 2021. Many applications of AI in HRM contexts involve AI built around human language, such as Natural Language Processing (NLP). One example is systems that aid the recruitment process by reviewing large numbers of applications and recommending which should advance beyond the first round of screening. Architectures based on neural networks (e.g. word embeddings or large language models [LLMs]) have proven very effective for natural language processing. However, due to their complexity, it is difficult to explain the outcomes of downstream applications. Research has shown that unwanted biases and stereotypes are encoded in these models, because they are trained on data drawn from society.
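As an illustration of how such encoded stereotypes can be surfaced, the sketch below computes a simple association gap between an occupation word and two gendered word sets in a pretrained embedding space. The word lists, model reference, and function names are illustrative assumptions, not part of the BIAS project's codebase.

```python
# A minimal sketch of measuring stereotype associations in word
# embeddings. Assumes a dict-like embedding lookup (e.g. gensim
# KeyedVectors) mapping words to numpy vectors.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_gap(emb, target: str, group_a: list[str], group_b: list[str]) -> float:
    """Mean similarity of `target` to group A minus its mean similarity
    to group B. A large gap in either direction suggests the target
    word sits closer to one gendered word set, i.e. an encoded stereotype."""
    sim_a = np.mean([cosine(emb[target], emb[w]) for w in group_a])
    sim_b = np.mean([cosine(emb[target], emb[w]) for w in group_b])
    return float(sim_a - sim_b)

# Hypothetical usage with a public embedding model and toy word lists:
# emb = gensim.downloader.load("glove-wiki-gigaword-100")
# gap = association_gap(emb, "engineer", ["he", "man"], ["she", "woman"])
```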

Another branch of AI that can be used is Case-Based Reasoning (CBR). These AI systems automatically analyze a case (e.g. a job application) using multiple criteria, such as the cover letter, education, or experience. The system then compares the case to many previous cases to determine which it most resembles, and makes a recommendation, e.g. whether or not to advance the candidate, based on how those similar past cases were treated.
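A minimal sketch of this retrieve-and-reuse step is shown below, assuming past applications are already encoded as numeric feature vectors. The fields, weights, and distance measure are illustrative assumptions rather than the project's actual design.

```python
# A toy case-based retrieval step: find the most similar past
# application and mirror how it was treated.
from dataclasses import dataclass
import numpy as np

@dataclass
class Case:
    features: np.ndarray   # e.g. [education_level, years_experience, letter_score]
    advanced: bool         # how this past application was treated

def most_similar(query: np.ndarray, cases: list[Case], weights: np.ndarray) -> Case:
    """Return the past case with the smallest weighted Euclidean distance."""
    dists = [np.sqrt(np.sum(weights * (c.features - query) ** 2)) for c in cases]
    return cases[int(np.argmin(dists))]

# Hypothetical usage:
# past = [Case(np.array([3, 5, 0.8]), True), Case(np.array([1, 1, 0.4]), False)]
# hit = most_similar(np.array([2, 4, 0.7]), past, weights=np.array([1.0, 1.0, 2.0]))
# recommend_advance = hit.advanced  # reuse the outcome of the nearest case
```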

BIAS aims to advance the state of the art in AI technology used in HRM by developing the underlying technology in a way that can be implemented in real use cases. In doing so, it actively explores the ways unwanted bias can be introduced into AI technology and how such bias can be mitigated. This will not only make the specific technology developed in the BIAS project fairer; the underlying research on bias identification and mitigation can also be applied to many other AI technologies and use cases.

BIAS will extensively engage end-users during the design and development process, using robust co-creation methodologies. This will ensure that unfair biases can be properly described within specific employment contexts, as well as make the resulting technology useful in practice. It will include a series of training and capacity building activities for HRM professionals and AI developers to proactively consider issues of bias in AI in their daily work.

BIAS will also carry out extensive fieldwork, interviewing more than 350 workers, HRM professionals, and technology developers across Europe to learn how AI is being used in practice, what biases are already evident or how they could manifest, and what future implementations, both good and bad, could look like. This will not only advance our knowledge of how technology is used in the workplace; it will also lay the groundwork for future AI research that is more responsive to real-world concerns.

Many of these aspects make the BIAS project especially innovative. There is very little extant research on using Case-Based Reasoning in HRM contexts, although the potential benefits are substantial. Additionally, NLP and CBR are not often researched together, especially in the domain of bias identification and mitigation. Designing a system that brings together both technologies is novel.

Most research on bias identification and mitigation in language models has been confined to English. However, BIAS is focusing on many European languages (especially Dutch, Estonian, German, Icelandic, Italian, Norwegian, and Turkish).

Finally, although many technology development projects do include some consultation with citizens and end-users, it is rare that such projects include such extensive interviews and fieldwork by scholars trained in the social sciences and humanities. The use, or so-called "domestication", of technology in the workplace has been an important area of research, but rarely are researchers studying the workplace so closely connected with the scientists developing the technology. And it is rare that such in-depth analysis of how technology is used in practice can directly inform how new technology can be developed more responsibly in the future.
Expert interviews with HR practitioners and AI developers, together with a pan-European survey targeting 4 000 respondents, provided insights into professional experiences and personal attitudes towards fairness and diversity bias in AI applications. The BIAS Consortium created the National Labs, a pool of diverse stakeholders (e.g. employees, employers, HR practitioners, AI specialists, policymakers, trade union representatives, representatives of civil society organizations, and scholars) who could contribute to or take an interest in the implementation of the BIAS project.

BIAS partners conducted two rounds of national co-creation workshops. Diverse stakeholders first helped generate wordlists for the NLP classifier and bias detection. Additionally, the workshops facilitated discussions on fairness and diversity bias in the labor market, thereby offering a holistic perspective on the technology's implications and identifying essential requirements for the effective and trustworthy design of the Debiaser.
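As a rough illustration of how such co-created wordlists could feed a bias detection step, the sketch below flags gender-coded terms in a job-ad text. The terms, names, and matching approach are placeholders, not the project's actual lists or code.

```python
# A toy wordlist-based flagging step: report which terms from a
# co-created wordlist appear in a given text.
import re

GENDER_CODED = {"assertive", "competitive", "nurturing", "supportive"}  # hypothetical list

def flag_terms(text: str, wordlist: set[str]) -> list[str]:
    """Return the wordlist terms found in a job ad or application text."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return sorted(set(tokens) & wordlist)

# flag_terms("We seek a competitive, assertive self-starter", GENDER_CODED)
# -> ['assertive', 'competitive']
```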

The first building blocks of the Debiaser were put in place: initial data gathering and data transfer were prepared, and programming expertise was applied to the complex data material. Materials from the co-creation activities and other interdisciplinary discussions have been compiled to form the foundation of the bias detection modules. Initial business cases for the implementation of the Debiaser in HRM contexts have been developed.

Ethnographic fieldwork has begun, with 26 of 365 interviews completed.
Results from the initial co-creation and citizen consultation and engagement activities will shortly become available. These include a survey of more than 5 000 respondents across Europe, interviews with AI and HRM experts, and co-creation workshops about bias in AI and HRM practices. The results have informed the technical work of BIAS and will also shortly be made available in public deliverables and open access scientific publications.

The most significant results will come later in the project: namely, the development of bias identification and mitigation AI modules for NLP and CBR, as well as specific tools for HRM use cases. These modules will be made available open source, while the complete business use case solution will be developed by the partners.
[Figure: Illustration of the NLP component of the Debiaser]
[Figure: BIAS consortium map]
[Figure: Illustration of the CBR component of the Debiaser]