Periodic Reporting for period 1 - BIAS (Mitigating Diversity Biases of AI in the Labor Market)
Reporting period: 2022-11-01 to 2023-10-31
Another branch of AI that can be used is Case-Based Reasoning (CBR). These AI systems automatically analyze a certain case (e.g. a job application) using multiple criteria, such as the cover letter, education, or experience. The system then compares this case to many previous cases to determine which ones it is most similar to, and makes a recommendation, e.g. to advance the candidate or not, based on how those similar cases were treated.
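The core CBR loop described above, retrieving the most similar past cases and following their outcomes, can be sketched as a simple nearest-neighbour lookup. This is an illustrative toy, not the project's Debiaser; the feature encoding and the `Case`/`recommend` names are assumptions for the example.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass
class Case:
    features: list[float]  # e.g. encoded education level, years of experience
    advanced: bool         # outcome recorded for this past case

def distance(a: list[float], b: list[float]) -> float:
    # Euclidean distance between two feature vectors.
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend(new_features: list[float], past_cases: list[Case], k: int = 3) -> bool:
    # Retrieve the k most similar past cases and follow their majority outcome.
    nearest = sorted(past_cases, key=lambda c: distance(c.features, new_features))[:k]
    return sum(c.advanced for c in nearest) > k / 2
```

Note that this mechanism is exactly where historical bias can enter: if past hiring decisions were biased, the majority vote over similar cases reproduces that bias, which is why the project treats retrieval and outcome data as targets for bias mitigation.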
BIAS aims to advance the state of the art in AI technology used in HRM by developing the underlying technology in a way that could be implemented in real use cases. In doing so, it actively explores the ways unwanted bias can be introduced into AI technology and how such bias can be mitigated. This will not only make the specific technology developed in the BIAS project fairer; the underlying research on bias identification and mitigation can also be applied to many other AI technologies and use cases.
BIAS will extensively engage end-users during the design and development process, using robust co-creation methodologies. This will ensure that unfair biases can be properly described within specific employment contexts and that the resulting technology is useful in practice. This work will include a series of training and capacity-building activities for HRM professionals and AI developers to proactively consider issues of bias in AI in their daily work.
BIAS will also carry out extensive fieldwork, interviewing more than 350 workers, HRM professionals, and technology developers across Europe to learn how AI is being used in practice, what biases are already evident or how they could manifest, and what future implementations, both good and bad, could look like. This will not only advance our knowledge of how technology is used in the workplace; it will also lay the groundwork for future AI research that is more responsive to real-world concerns.
Many of these aspects make the BIAS project especially innovative. There is very little extant research on using Case-Based Reasoning in HRM contexts, although the potential benefits are substantial. Additionally, NLP and CBR are not often researched together, especially in the domain of bias identification and mitigation. Designing a system that brings together both technologies is novel.
Most research on bias identification and mitigation in language models has been confined to English. However, BIAS is focusing on many European languages (especially Dutch, Estonian, German, Icelandic, Italian, Norwegian, and Turkish).
Finally, although many technology development projects do include some consultation with citizens and end-users, it is rare that such projects include such extensive interviews and fieldwork by scholars trained in the social sciences and humanities. The use, or so-called "domestication," of technology in the workplace has been an important area of research, but rarely are researchers studying the workplace so closely connected with the scientists developing the technology. And it is rare that such in-depth analysis of how technology is used in practice can then directly inform how new technology can be developed more responsibly in the future.
BIAS partners conducted two rounds of national co-creation workshops. Diverse stakeholders first helped generate wordlists for the NLP classifier and bias detection. Additionally, the workshops facilitated discussions on fairness and diversity bias in the labor market, thereby offering a holistic perspective on the technology's implications and identifying essential requirements for the effective and trustworthy design of the Debiaser.
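The stakeholder-generated wordlists mentioned above feed the NLP bias detection. A minimal sketch of how such a wordlist could be used to flag potentially biased terms in job-related text follows; the example wordlist and the function name are hypothetical, and the project's actual classifier, which covers multiple European languages, is far more sophisticated than simple keyword matching.

```python
import re

# Hypothetical example wordlist; in BIAS, such lists are co-created with
# stakeholders per language rather than hard-coded.
GENDER_CODED_TERMS = {"aggressive", "dominant", "rockstar", "ninja"}

def flag_biased_terms(text: str, wordlist: set[str]) -> list[str]:
    # Tokenize on word characters and return any terms found in the wordlist.
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return sorted({t for t in tokens if t in wordlist})
```

For example, scanning a job advertisement for these terms would surface language that workshop participants identified as potentially exclusionary.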
Initial development of the Debiaser has begun: data gathering and data transfer have been prepared, and programming expertise has been brought to bear on the complex data material. Materials from the co-creation activities and other interdisciplinary discussions have been extracted and compiled to form the foundation of the bias detection modules. Initial business cases for the implementation of the Debiaser in HRM contexts have been developed.
Ethnographic fieldwork has begun, with 26 of 365 interviews completed.
The most significant results will occur later in the project—namely the development of bias identification and mitigation AI modules for use in NLP and CBR as well as specific tools for use in HRM use cases. These modules will be made available open-source, while the complete business use case solution will be developed by the partners.