CORDIS - EU research results

ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS

Project description

An experimentation playground to assess and repair bias in artificial intelligence

Artificial intelligence (AI) is widely used across many sectors because of the benefits of automation and optimisation. However, AI can also be a source of bias and discrimination, which must be controlled, measured and avoided. Moreover, there is a lack of knowledge about how to assess and repair bias in existing AI systems and how to design new, bias-free AI tools. The EU-funded AEQUITAS project will change this by developing a controlled experimentation environment that helps AI producers raise awareness of the bias produced by AI systems, and assess and (possibly) repair existing AI systems. It will also provide guidelines for fair-by-design AI systems and raise awareness of the risks of AI when it is not handled and managed properly.

Objective

AI-based decision support systems are increasingly deployed in industry, in the public and private sectors, and in policy-making. As our society faces a dramatic increase in inequalities and intersectional discrimination, we need to prevent AI systems from amplifying this phenomenon and instead mitigate it. To trust these systems, domain experts and stakeholders need to trust their decisions.
Fairness stands as one of the main principles of Trustworthy AI promoted at EU level. How these principles, in particular fairness, translate into technical, functional, social and legal requirements in AI system design is still an open question. Similarly, we do not know how to test whether a system complies with these principles, or how to repair it if it does not.
AEQUITAS proposes the design of a controlled experimentation environment in which developers and users can create controlled experiments to
- assess bias in AI systems, e.g. identify potential causes of bias in data, algorithms and the interpretation of results,
- provide, where possible, effective methods and engineering guidelines to repair, remove and mitigate bias,
- provide fairness-by-design guidelines, methodologies and software engineering techniques to design new bias-free systems.
The experimentation environment generates synthetic data sets with different features influencing fairness, for testing in the laboratory. Real use cases in health care, human resources and challenges faced by socially disadvantaged groups further test the experimentation platform, showcasing the effectiveness of the proposed solution. The experimentation playground will be integrated into the AI-on-demand platform to boost its uptake, while a stand-alone release will enable on-premise, privacy-preserving testing of AI systems' fairness.
AEQUITAS relies on a strong consortium featuring AI experts and domain experts in the use case sectors, as well as social scientists and associations defending the rights of minorities and discriminated groups.
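The kind of bias assessment described above can be illustrated with a minimal sketch. This is not AEQUITAS code: the synthetic data generator, the planted disparity and the demographic-parity metric are illustrative assumptions, showing only the simplest form of experiment a controlled environment might run (generate data with a known bias, then check that an assessment metric detects it).

```python
# Illustrative sketch: planting a known disparity in a synthetic data set
# and detecting it with the demographic parity difference metric.
import random

random.seed(0)

def make_synthetic_data(n=1000, bias=0.3):
    """Generate records with a binary sensitive attribute 'group'.

    The positive-outcome rate for group 1 is lowered by `bias`,
    planting a known disparity that an assessment should detect.
    """
    data = []
    for _ in range(n):
        group = random.randint(0, 1)
        rate = 0.6 - bias if group == 1 else 0.6
        outcome = 1 if random.random() < rate else 0
        data.append({"group": group, "outcome": outcome})
    return data

def demographic_parity_difference(data):
    """P(outcome=1 | group=0) - P(outcome=1 | group=1)."""
    rates = {}
    for g in (0, 1):
        members = [d for d in data if d["group"] == g]
        rates[g] = sum(d["outcome"] for d in members) / len(members)
    return rates[0] - rates[1]

data = make_synthetic_data()
dpd = demographic_parity_difference(data)
print(f"demographic parity difference: {dpd:.3f}")  # close to the planted bias of 0.3
```

A real assessment would apply such metrics to the predictions of a trained model rather than to raw outcomes, and would consider several fairness definitions at once, since they can conflict.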

Coordinator

ALMA MATER STUDIORUM - UNIVERSITA DI BOLOGNA
Net EU contribution
€ 556 379,00
Address
VIA ZAMBONI 33
40126 Bologna
Italy

Region
Nord-Est Emilia-Romagna Bologna
Activity type
Higher or Secondary Education Establishments
Links
Total cost
€ 556 379,00

Participants (16)