Artificial Intelligence without Bias

Fairness awareness training helps researchers reduce AI bias

Artificial intelligence algorithms used to filter data have raised concerns about automated bias and its impact on discrimination and fairness.

Machine learning tools can be biased. This happens because of biased source data, bias in the algorithms used to process the data, or the way artificial intelligence (AI) applications are used, such as filtering applicants for jobs or university admissions. As part of a Marie Skłodowska-Curie Innovative Training Network (ITN), the NoBIAS project trained 15 doctoral students from six countries in ‘fairness awareness’. It aimed to reduce bias in machine learning at all three stages: understanding bias in data, mitigating bias in algorithms and accounting for bias in results. “The computer science community, but also others, were already working on bias. But ours was a big project where we tried to push the state of the art with new methods and new analysis,” explains project coordinator Wolfgang Nejdl, director of the L3S Research Center at Leibniz University Hannover in Germany.
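
Understanding bias in data often starts with simple descriptive checks. As an illustration of the first of the three stages above, and not of the project's own tooling, the sketch below computes the ‘demographic parity difference’ — the gap in positive-outcome rates between groups — a standard metric in the fairness literature. The data, column names and numbers are all hypothetical.

```python
# Minimal sketch of a data-stage bias check: the demographic parity
# difference between groups. A standard fairness metric, not a method
# attributed to NoBIAS; the applicant data below is invented.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Absolute gap in positive-outcome rates between groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical applicant-screening data: 1 = shortlisted, 0 = rejected.
applicants = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1, 1, 0, 0, 0, 1, 0],
})

gap = demographic_parity_difference(applicants, "group", "outcome")
print(f"demographic parity difference: {gap:.2f}")  # ~0.42 for this toy data
```

A gap near zero suggests the recorded outcomes are distributed similarly across groups; a large gap is a signal to investigate how the data was collected before any model is trained on it.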

Training PhD students in interdisciplinary approaches

An interdisciplinary group of PhD students tackled bias in machine learning from the perspectives of computer science, law and social science. They developed ‘fairness-aware’ algorithms and tracked data provenance and transparency to understand at which stage biases occur. “By working with others, a computer science student can build on the legal context or social science perspective and then invent a new tool or method. Having a background of what aspects are relevant from other disciplines makes a difference,” notes Nejdl. Students from different disciplines were brought together at summer schools that introduced topics on bias, and later at workshops and conferences. Each doctoral student was connected to a partner organisation applying AI in a specific field, such as medicine, banking or recruitment.
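
To make the idea of a ‘fairness-aware’ algorithm concrete, the sketch below shows one generic, textbook-style approach — post-processing a model's scores with per-group thresholds so each group is selected at roughly the same rate. This is offered as an assumption-laden illustration of the concept, not as a NoBIAS method; the scores, groups and target rate are hypothetical.

```python
# Generic sketch of fairness-aware post-processing: choose a score
# cut-off per group so selection rates match a common target rate.
# Illustrative only; not a method attributed to the NoBIAS project.
import numpy as np

def group_thresholds(scores, groups, target_rate):
    """Pick a score cut-off per group so each group selects ~target_rate."""
    cuts = {}
    for g in np.unique(groups):
        # The (1 - target_rate) quantile keeps roughly the top target_rate share.
        cuts[g] = np.quantile(scores[groups == g], 1 - target_rate)
    return cuts

rng = np.random.default_rng(0)
scores = rng.uniform(size=200)             # hypothetical model scores in [0, 1]
groups = rng.choice(["A", "B"], size=200)  # hypothetical protected attribute

cuts = group_thresholds(scores, groups, target_rate=0.3)
selected = np.array([s >= cuts[g] for s, g in zip(scores, groups)])
print(cuts, selected.mean())               # roughly 30% selected in each group
```

Whether equalising selection rates this way is appropriate — or even lawful — in a given domain is exactly the kind of question the project's legal and social science perspectives were meant to answer.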

Fairness and legal perspectives

AI applications have the potential to infringe upon non-discrimination rights and to raise other legal and ethical concerns, such as privacy. “That’s why it’s also important to analyse existing algorithms and see where we can find a bias with input data which the original researchers were not aware of,” says Nejdl. “Sometimes you take data obtained in the easiest way, not in the way that best represents the situation you want to model,” he remarks. In the recent past, computer scientists have tried to mitigate bias by making technical changes within the model. “Now the question arises whether those technical changes are actually compliant with the regulations or not,” he says, pointing to policies and laws on equal opportunity in employment or equal access to university. In analysing data for fairness, the project went beyond previous EU-funded data-gathering projects such as LONGPOP and data-sharing tools such as those of the PrivacyUs data privacy and usability project.

Bias from limited data needs more diverse sources

Bias is more pronounced when data is limited or does not adequately represent the full population, as with medical data restricted to one geographic region or demographic. “It’s hardest to eliminate bias in the areas where you don’t have so much data or different sources [of data],” Nejdl adds, noting that additional effort is required to diversify data sources. NoBIAS project manager Gourab Kumar Patro cites an example of using knowledge gained on the project to work within legal restrictions on the use of racial information. “A NoBIAS early-stage researcher working on legal considerations of mitigating bias found it was possible to collect racial information if you specify it is only for mitigating bias in the algorithm, but not to use in any kind of selection process. The interdisciplinary interaction meant the algorithms could be improved.” Patro says the project’s findings and best practices can be used to influence policy and future regulations, as well as to improve industry practices around responsible AI development.
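
The legal finding Patro describes matches a well-known pattern in fairness-aware learning: the protected attribute may inform training — for example through a standard reweighing scheme that rebalances group–label frequencies — while being kept out of the model's inputs, so it can never drive an individual decision. Below is a minimal sketch of that pattern, assuming scikit-learn and entirely synthetic data; it is a generic illustration, not NoBIAS code.

```python
# Sketch of "protected attribute used in training only": race informs
# sample weights (classic reweighing) but is excluded from the model's
# features, so predictions never consult it. Hypothetical data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))            # features used for prediction
race = rng.choice([0, 1], size=500)      # protected attribute (training only)
y = (X[:, 0] + 0.5 * race + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Reweighing: weight each (group, label) cell by expected / observed
# frequency, so group and label look independent in the training data.
weights = np.empty(len(y))
for g in (0, 1):
    for lab in (0, 1):
        mask = (race == g) & (y == lab)
        expected = (race == g).mean() * (y == lab).mean()
        weights[mask] = expected / max(mask.mean(), 1e-9)

model = LogisticRegression()
model.fit(X, y, sample_weight=weights)   # race shapes the weights, not the features

decisions = model.predict(X)             # race is never consulted at selection time
print(decisions.mean())
```

The design choice mirrors the legal constraint: the sensitive data is consumed once, offline, to correct the training distribution, and the deployed selection process has no access to it.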

Keywords

NoBIAS, artificial intelligence, algorithms, bias, ethical, discrimination, data privacy, machine learning
