CORDIS - EU research results
ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS

Periodic Reporting for period 1 - AEQUITAS (ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS)

Reporting period: 2022-11-01 to 2024-04-30

AI-based decision support systems are increasingly being deployed across industries, in both public and private sectors, as well as in policy-making. With society facing a dramatic increase in inequalities and intersectional discrimination, AI systems must not amplify this phenomenon but rather help mitigate it. For domain experts and stakeholders to trust these systems, they must be able to trust the decisions the systems produce.
The emerging field of AI fairness, alongside efforts to enhance trustworthiness, has become a central point of European research and strategy in advancing ethical and trustworthy AI systems.
AI fairness is a multifaceted concept that encompasses developing equitable algorithms, addressing bias and discrimination, and ensuring equitable outcomes across diverse societal contexts. Fairness is one of the main principles of Trustworthy AI promoted at the EU level. How these principles, fairness in particular, translate into technical, social, and legal requirements in AI system design is still an open question. Similarly, there is a lack of established methods for testing compliance with these principles and for repairing non-compliant systems.

AEQUITAS proposes the development of a controlled experimentation environment that helps developers and users create controlled experiments for several purposes:
i) Assessing the bias present in AI systems, such as identifying potential sources of bias in data, algorithms, and the interpretation of results.
ii) Providing effective methods and engineering guidelines, whenever possible, to address, remove, and mitigate bias.
iii) Offering fairness-by-design guidelines, methodologies, and software engineering techniques to design new bias-free systems.
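To make the first purpose concrete, a basic bias assessment compares outcome rates across groups defined by a protected attribute. The sketch below computes two common group-fairness measures, the demographic-parity gap and the disparate-impact ratio, on toy data; all function names and data are illustrative assumptions, not part of the AEQUITAS platform itself.

```python
# Illustrative sketch of group-fairness measurement on binary decisions.
# Data and names are hypothetical; the AEQUITAS tooling may differ.

def _positive_rates(preds, groups):
    """Positive-prediction rate per group."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_gap(preds, groups):
    """Largest difference in positive rates between any two groups."""
    vals = sorted(_positive_rates(preds, groups).values())
    return vals[-1] - vals[0]

def disparate_impact_ratio(preds, groups):
    """Lowest positive rate divided by the highest (the 4/5 rule
    flags ratios below 0.8 as potentially discriminatory)."""
    vals = sorted(_positive_rates(preds, groups).values())
    return vals[0] / vals[-1]

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                   # 1 = favourable decision
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # protected attribute
print(demographic_parity_gap(preds, groups))   # 0.5 (0.75 for "a" vs 0.25 for "b")
print(disparate_impact_ratio(preds, groups))   # ~0.333, well below the 0.8 threshold
```

Comparable checks can be run on real predictions before and after a mitigation step to verify that an intervention actually narrows the gap.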

This experimentation environment also offers the possibility to generate synthetic datasets with various features that influence fairness, suitable for testing in lab settings.
Real-world use cases in healthcare, human resources, and challenges faced by socially disadvantaged groups further validate the experimentation platform, demonstrating the effectiveness of the proposed solution.
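A synthesizer of the kind described above can be sketched as a generator that injects a controllable amount of bias into the relationship between a protected attribute and the label. The parameters and structure below are illustrative assumptions for lab-style testing, not the project's actual synthesizer.

```python
import random

def synthesize(n, bias=0.3, seed=0):
    """Generate a toy tabular dataset in which membership in group "a"
    raises the probability of a favourable label by `bias`.
    Illustrative only; the AEQUITAS synthesizer is more elaborate."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        group = rng.choice(["a", "b"])   # protected attribute
        skill = rng.random()             # legitimate, outcome-relevant feature
        p = 0.25 + 0.5 * skill + (bias if group == "a" else 0.0)
        rows.append({"group": group,
                     "skill": skill,
                     "label": 1 if rng.random() < p else 0})
    return rows

def favourable_rate(rows, group):
    members = [r for r in rows if r["group"] == group]
    return sum(r["label"] for r in members) / len(members)

data = synthesize(1000, bias=0.3)
# Group "a" ends up with a noticeably higher favourable-label rate than "b",
# giving a known ground-truth bias against which mitigation methods can be tested.
```

Because the injected bias is known, such datasets let detection and mitigation techniques be evaluated against a ground truth, which real-world data rarely provides.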

The experimentation playground will be seamlessly integrated into the AI-on-demand platform to enhance its accessibility. However, a stand-alone release will also be available, allowing for on-premise, privacy-preserving testing of AI systems' fairness.
The first 18 months of the project focused on analysing the identified high-risk use cases and grounding their requirements in terms of fairness. These requirements were then translated into technical requirements for AI algorithms. Main activities of this first period included analysing the available data and preparing them for processing by AI algorithms without introducing bias, as well as establishing common ground between the socio-legal and technical worlds.

Main results at M18:

- Collection of real use case datasets that can be used as fairness benchmarks, development of cleaning procedures, studies on pre-processing techniques, and initial experimentation.
- Generation of synthetic data for each use case (preliminary testing completed).
- Availability of a synthesizer for generating synthetic data on the AEQUITAS repository.
- Socio-legal-technical methodology for fairness: all components of the methodology covering the AI lifecycle have been created; some have already been developed into a full methodology, while others require further exploration. The key result has been identifying a methodology for transforming social/legal requirements into technical requirements.
- Proof of concept of the experimentation environment: gathering requirements, designing architecture, and implementing the framework of the experimentation environment.
Results beyond the state of the art have included the meta-methodology for translating socio-legal requirements into a technical system, promoting the development of socio-technical systems through concrete methodologies.
New methods for bias identification and mitigation have also been proposed, leading to the publication of significant scientific contributions.