
Artificial Intelligence without Bias

Periodic Reporting for period 2 - NoBIAS (Artificial Intelligence without Bias)

Reporting period: 2022-01-01 to 2024-06-30

Businesses, governments, and other organizations widely employ Artificial Intelligence (AI) algorithms. Decisions once made by humans are now made by algorithms, mostly through machine learning (ML) and AI powered by big data. Incidents of bias and unfairness in various real-world AI applications have led to ever-increasing public concern about the impact of AI on our lives. If such issues are not carefully tackled, AI-based decision-making may underperform and cause significant societal harm. NoBIAS aims to be the answer in this respect. To achieve this objective, our challenges stem from the AI-based decision-making process, which at a high level involves the following phases: data collection, AI algorithms, and results.
At each step in this process, biases may arise, which need to be accounted for and countered in order to produce business benefits while addressing related legal and ethical concerns. In particular, the three core challenges are: (C1) Data can be biased; (C2) Algorithms can be biased; (C3) Results can be biased.
Research, development and training of early-stage researchers (ESRs) in NoBIAS is organized around the AI-based decision-making pipeline, the identified core challenges C1-C3, and the corresponding objectives O1-O3 (listed next), to ensure that crucial skills for AI-based decision making in industry and society are acquired and are well-aligned with business value creation.
Objective 1: Understanding bias in data. The quality of the data provided as input to AI decision-making processes strongly influences the results. Understanding why and how bias is manifested in data is of paramount importance. In this regard, NoBIAS has developed a comprehensive view of bias generation within sociotechnical systems, how design and development choices impact representations, formal methods for bias detection, and documenting bias through ontologies.
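To make bias detection in data concrete, a common starting point is a simple disparity measure over the raw data. The sketch below, on an invented toy dataset (the group names, labels, and numbers are illustrative assumptions, not NoBIAS data or the project's actual method), computes the statistical parity difference: the gap in favourable-outcome rates between two groups.

```python
# Toy dataset (invented for illustration): each record carries a
# protected attribute ("group") and a binary outcome (1 = favourable).
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "A", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

def positive_rate(recs, group):
    """Fraction of favourable outcomes within one group."""
    members = [r for r in recs if r["group"] == group]
    return sum(r["label"] for r in members) / len(members)

# Statistical parity difference: the gap in favourable-outcome rates.
spd = positive_rate(records, "A") - positive_rate(records, "B")
print(f"rate A = {positive_rate(records, 'A'):.2f}, "
      f"rate B = {positive_rate(records, 'B'):.2f}, SPD = {spd:.2f}")
# → rate A = 0.75, rate B = 0.25, SPD = 0.50
```

A non-zero gap does not by itself establish unfair bias; as the ethnographic and ontological work in O1 stresses, interpreting such a measurement requires knowing how the data were generated.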
Objective 2: Mitigating bias in algorithms. To account for bias in AI, we can improve the bias-related quality of the data, or we can introduce extra constraints/costs in the utility measure of the model to “enforce” fairness. The former approach is independent of the algorithm, whereas the latter depends on the algorithm per se. In the context of NoBIAS, we aim to tackle both model-independent and model-dependent challenges as well as connect them with legal issues and contexts. In this regard, NoBIAS has developed both model-dependent and model-independent methods of mitigating bias.
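To illustrate the model-independent route, one classic pre-processing technique (named here as an example of the family, not necessarily the method NoBIAS adopted) is reweighing: each (group, label) combination receives a weight so that the protected attribute and the label become statistically independent in the weighted training data. A minimal sketch on an invented toy dataset:

```python
from collections import Counter

# Toy training set (invented): (protected group, binary label) pairs.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

# Reweighing: w(g, y) = P(g) * P(y) / P(g, y), so that group and label
# are independent under the weighted empirical distribution.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n)
            / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}

def weighted_rate(group):
    """Weighted favourable-outcome rate within one group."""
    num = sum(weights[(g, y)] * y for g, y in data if g == group)
    den = sum(weights[(g, y)] for g, y in data if g == group)
    return num / den

print(weighted_rate("A"), weighted_rate("B"))  # both 0.5
```

Under-represented cells (e.g. favourable outcomes in the disadvantaged group) are up-weighted and over-represented cells down-weighted, after which any learning algorithm that accepts instance weights can be trained unchanged; this is what makes the approach model-independent.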
Objective 3: Accounting for bias in results. The results of AI-based decision-making systems might be biased, even if the data has been corrected for bias and even if the algorithms have been modified to account for bias. Moreover, new sources of biases are introduced by the interpretation of the results and application context when continuous model outputs are converted into binary decisions or when concept drift arises over time. In this regard, NoBIAS has developed methods of explaining black-box and white-box decision models, and methods for time-dependent monitoring and mitigation of biases in AI systems.
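One way to make the time-dependent monitoring idea concrete is a sliding-window check on the gap in positive-decision rates between groups, raising an alert when the gap exceeds a tolerance. The sketch below simulates a decision stream in which bias drifts in partway through; the group names, window size, and tolerance are illustrative assumptions, not NoBIAS parameters.

```python
from collections import deque

def monitor_parity(stream, window=200, tol=0.1):
    """Sliding-window monitor of the positive-rate gap between groups
    "A" and "B"; yields (index, gap) whenever |gap| exceeds `tol`."""
    buf = deque(maxlen=window)
    for i, (group, decision) in enumerate(stream):
        buf.append((group, decision))
        a = [d for g, d in buf if g == "A"]
        b = [d for g, d in buf if g == "B"]
        if a and b:
            gap = sum(a) / len(a) - sum(b) / len(b)
            if abs(gap) > tol:
                yield i, gap

# Simulated stream: fair for 100 decisions, then group B stops
# receiving positive decisions (an abrupt drift).
stream = [("A", 1), ("B", 1)] * 50 + [("A", 1), ("B", 0)] * 50
alerts = list(monitor_parity(stream, window=40, tol=0.2))
print(f"first alert at decision {alerts[0][0]}" if alerts
      else "no drift detected")
# → first alert at decision 109
```

The window size trades detection delay against noise: a small window reacts quickly but fluctuates, a large one smooths the estimate but flags drift later.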
[O1]: Understanding bias in data.
Each ESR started out exploring the problem from a specific direction, and each acquired a much deeper understanding of how bias manifests in data within their individual area. The important but common questions that O1 ESRs have been working on are:
(i) What is bias?
(ii) How is bias created?
(iii) How can we detect bias?
To answer these questions, ESR Miriam Fahimi conducted long-term ethnographic fieldwork, while ESRs Kristen Marie Scott, Jose Manuel Alvarez, Simone Fabbrizzi and Mayra Russo identified or developed suitable technical frameworks and carried out methodological innovation and testing across various use cases.
[O2]: Mitigating bias in algorithms.
O2 ESRs focus on the following multi-disciplinary research directions.
(i) Development of model-independent approaches for mitigating bias at the data level
(ii) Development of model-dependent approaches for mitigating bias at the model/algorithm and at the output level
(iii) Reconciling bias mitigation approaches with legal norms and legal theory
ESR Ioanna Papageorgiou explored the lawfulness of processing personal data for pre-processing debiasing purposes. ESR Alaa Elobaid surveyed prior work on bias mitigation in different contexts, and ESRs Antonio Ferrara and Paula Reyero developed and tested new techniques for NoBIAS’s ranking and classification use cases.
[O3]: Accounting for bias in results.
All ESRs have conducted extensive overviews of the literature on a broad spectrum of multidisciplinary topics. Specifically, the ESRs have looked into the following aspects:
(i) an ethical and legal perspective on accountability,
(ii) the field of eXplainable AI (XAI),
(iii) the issue of monitoring time-evolving AI models and their biases.
ESR Alejandra Bringas Colmenarejo developed a legal framework encompassing the rights to information and to an explanation, while ESRs Xuan Zhao, Laura State, Carlos Mougan Navarro, and Seyed Siamak Ghodsi explored the technical aspects of black-box and white-box explanations and the time-dependent mitigation of bias in various use cases, supported by experimental analyses.

The network has successfully organized various training programs, including the onboarding week, three summer schools, the European AI regulation week, the NoBIAS monthly colloquium talks, and the NoBIAS doctoral school. NoBIAS ESRs have already published 43 research papers and attended 79 events, including top-tier conferences, workshops, and panel discussions, thereby contributing to the dissemination and communication goals of NoBIAS. The network has attracted 410 followers on its social media channels and 13,049 visitors to the NoBIAS website. The ESRs have also successfully run the NoBIAS newsletter.
Bias in AI systems cannot be addressed by computer scientists alone. It requires an interdisciplinary approach and close collaboration with legal experts and social scientists, to ensure that the societal origins of data and the legal limits of socio-technical systems are appropriately considered, and that new methods become usable and useful in practice. Society today lacks professionals and researchers who not only strive for economic success but also have the capacity to embed ethics and legal compliance in the AI algorithms that decide upon our lives. NoBIAS provided the ESRs with interdisciplinary training to instill in them this capacity. Because the manifestation of bias depends on the application itself, our ESRs have gained practical experience with real-world applications spanning a variety of sectors, from finance, the pharmaceutical industry, marketing, media, and software to legal consultancy, broadly fostering legal compliance and innovation.
Related ETN projects include analysis of big data (ETN Longpop, ID: 676060), privacy and usability (ETN Privacy.us, 675730), risk analysis (ETN BigDataFinance, 675044), big data management (ETN BigStorage, 642963), and question answering over big data (ETN WDAqua, 642795). However, these projects focus on generating value from big data through new methods, with only limited attention to privacy. The issues arising with regard to bias, such as fairness, legality, and discrimination, are not addressed.