Periodic Reporting for period 1 - AEQUITAS (ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS)
Reporting period: 2022-11-01 to 2024-04-30
The emerging field of AI fairness, together with broader efforts to enhance trustworthiness, has become a focal point of European research and strategy for advancing ethical and trustworthy AI systems.
Fairness of AI is a multifaceted concept that encompasses developing equitable algorithms, addressing bias and discrimination, and ensuring equitable outcomes across diverse societal contexts. Fairness is one of the main principles of Trustworthy AI promoted at the EU level. How these principles, and fairness in particular, translate into technical, social, and legal requirements in AI system design is still an open question. Similarly, there is a lack of established methods for testing compliance with these principles and for repairing non-compliant systems.
AEQUITAS proposes the development of a controlled experimentation environment that enables developers and users to create controlled experiments for several purposes:
i) Assessing the bias present in AI systems, such as identifying potential sources of bias in data, algorithms, and the interpretation of results.
ii) Providing effective methods and engineering guidelines, whenever possible, to address, remove, and mitigate bias.
iii) Offering fairness-by-design guidelines, methodologies, and software engineering techniques to design new bias-free systems.
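The report does not specify which fairness metrics the assessment step uses, but a common starting point for detecting bias in an AI system's outputs is a group-fairness measure such as the demographic parity (selection-rate) difference. A minimal, self-contained sketch with illustrative names and data:

```python
# Hypothetical sketch: quantifying group fairness of binary model decisions
# via the demographic parity (selection-rate) difference. The function names
# and toy data are illustrative, not part of the AEQUITAS platform.

def selection_rate(preds, groups, group):
    """Fraction of positive predictions within one protected group."""
    in_group = [p for p, g in zip(preds, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_diff(preds, groups):
    """Largest gap in selection rates across all protected groups;
    0.0 means every group is selected at the same rate."""
    rates = [selection_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                 # binary model decisions
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'] # protected attribute
print(demographic_parity_diff(preds, groups))     # 0.75 - 0.25 = 0.5
```

A controlled experiment would then vary the data or algorithm and observe how this gap responds.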
The experimentation environment also offers the possibility of generating synthetic datasets with various fairness-influencing features, suitable for testing in lab settings.
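The report does not describe the synthesizer's internals; one simple way such a generator can work is to inject a disparity of known magnitude, so that fairness tests can be validated against a ground truth. A hedged sketch, with all names and parameters invented for illustration:

```python
import random

def make_biased_dataset(n, bias=0.3, seed=0):
    """Generate (group, score, label) rows where group 'b' has its
    positive-label probability lowered by `bias` -- a controlled
    disparity that a fairness test should then be able to detect.
    Illustrative only; not the AEQUITAS synthesizer."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    rows = []
    for _ in range(n):
        group = rng.choice(['a', 'b'])
        score = rng.random()                       # stand-in feature
        p = 0.6 - (bias if group == 'b' else 0.0)  # 0.6 base positive rate
        rows.append((group, score, 1 if rng.random() < p else 0))
    return rows

def positive_rate(rows, group):
    """Empirical positive-label rate within one group."""
    labels = [y for g, _, y in rows if g == group]
    return sum(labels) / len(labels)

data = make_biased_dataset(2000, bias=0.3)
```

Because the injected bias is a parameter, the same generator can produce both fair (`bias=0.0`) and unfair benchmark datasets for lab testing.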
Real-world use cases in healthcare, human resources, and challenges faced by socially disadvantaged groups further validate the experimentation platform, demonstrating the effectiveness of the proposed solution.
The experimentation playground will be seamlessly integrated into the AI-on-demand platform to enhance its accessibility. However, a stand-alone release will also be available, allowing for on-premise, privacy-preserving testing of AI systems' fairness.
Main results at M18:
- Collection of real use case datasets that can be used as fairness benchmarks, development of cleaning procedures, studies on pre-processing techniques, and initial experimentation.
- Generation of synthetic data for each use case (preliminary testing completed).
- Availability of a synthesizer for generating synthetic data on the AEQUITAS repository.
- Socio-legal-technical methodology for fairness: all components of the methodology covering the AI lifecycle have been created; some have already been developed into a full methodology, while others require further exploration. A key result has been the identification of a methodology for translating social and legal requirements into technical requirements.
- Proof of concept of the experimentation environment: gathering requirements, designing architecture, and implementing the framework of the experimentation environment.
New methods for bias identification and mitigation have also been proposed, resulting in significant scientific publications.
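The specific mitigation methods proposed by the project are not detailed here; as a point of reference, one well-known pre-processing technique is reweighing (Kamiran & Calders), which assigns instance weights that make the protected attribute statistically independent of the label. A minimal sketch under that assumption:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Instance weights that decouple group membership from the label:
    w(g, y) = P(g) * P(y) / P(g, y)  -- the classic reweighing scheme
    of Kamiran & Calders, shown here as a generic illustration."""
    n = len(labels)
    pg, py = Counter(groups), Counter(labels)
    pgy = Counter(zip(groups, labels))
    return [(pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
            for g, y in zip(groups, labels)]

def weighted_positive_rate(weights, groups, labels, group):
    """Positive-label rate within `group`, under instance weights."""
    num = sum(w for w, g, y in zip(weights, groups, labels)
              if g == group and y == 1)
    den = sum(w for w, g, y in zip(weights, groups, labels) if g == group)
    return num / den

groups = ['a', 'a', 'a', 'b', 'b', 'b']
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)
# After reweighing, both groups have the same weighted positive rate.
```

Training a classifier with these weights is one way to mitigate bias in the data before model fitting, without altering the records themselves.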