Project description
Fair algorithms for artificial intelligence
Artificial intelligence (AI)-based systems are increasingly used in applications that automatically issue decisions or assessments. Such systems can affect individuals or groups of people in important matters such as payments or medical care, and AI bias can be a problem. The sources of bias in AI decisions can lie in the automatically gathered data, in the algorithms that process the data, or in the use of the applications. To eliminate such bias at all three stages, the EU-funded NoBIAS project will develop fairness-aware algorithms. These algorithms will be grounded in ethical and legal principles and implemented as technical solutions through the multidisciplinary effort of 15 researchers trained in computer science, data science, machine learning, law, social science and other fields.
Objective
Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impacts on individuals and society. Their decisions can affect anyone, anywhere and at any time, entailing risks such as being denied credit, a job, a medical treatment, or access to specific news. Businesses might miss opportunities because biases make AI-driven decisions underperform; much worse, they may contravene human rights by treating people unfairly.
Bias may arise at all stages of an AI-based decision-making process: (i) when data are collected, (ii) when algorithms turn data into decision-making capacity, or (iii) when the results of decision making are used in applications. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in the training, design and deployment of AI algorithms to ensure social good while still benefiting from the potential of AI.
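By way of illustration only (this sketch is not drawn from the project's methodology, and the column names are hypothetical), a data-stage bias check might simply compare how well each group is represented in the collected data and how often each group receives the favourable label:

    # Hypothetical sketch of a data-stage bias check: per-group sample counts
    # and favourable-outcome rates in a collected dataset.
    import pandas as pd

    def data_bias_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
        """Return sample counts and positive-label rates per group."""
        return df.groupby(group_col)[label_col].agg(
            n_samples="count",      # how well the group is represented
            positive_rate="mean",   # share of favourable outcomes for the group
        )

    if __name__ == "__main__":
        # Toy data with made-up column names ("gender", "approved").
        data = pd.DataFrame({
            "gender":   ["f", "f", "f", "m", "m", "m", "m", "m"],
            "approved": [0,   0,   1,   1,   1,   0,   1,   1],
        })
        print(data_bias_report(data, group_col="gender", label_col="approved"))

A skewed report of this kind would flag a problem at the data-collection stage, before any model is trained.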
NoBIAS will develop novel methods for AI-based decision making without bias by taking into account ethical and legal considerations in the design of technical solutions. The core objectives of NoBIAS are to understand the legal, social and technical challenges of bias in AI decision-making, to counter them by developing fairness-aware algorithms, to automatically explain AI results, and to document the overall process for data provenance and transparency.
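As a concrete, deliberately simplified example of the kind of quantity a fairness-aware algorithm could be evaluated against, the demographic parity difference measures the gap in favourable-decision rates between groups; the function below is an illustrative sketch with made-up inputs, not a NoBIAS deliverable:

    # Hypothetical sketch: demographic parity difference, one of several common
    # group-fairness metrics. A value near 0 means all groups receive the
    # favourable decision at similar rates.
    import numpy as np

    def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Largest gap in positive-prediction rates across groups."""
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return float(max(rates) - min(rates))

    if __name__ == "__main__":
        y_pred = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # model decisions (1 = favourable)
        group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute (made up)
        print(demographic_parity_difference(y_pred, group))  # prints 0.25

A fairness-aware learning method would typically constrain or regularise such a metric during training, rather than only reporting it after the fact.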
We will train a cohort of 15 ESRs (Early-Stage Researchers) to address problems with bias through multi-disciplinary training and research in computer science, data science, machine learning, law and social science. ESRs will acquire practical expertise in a variety of sectors, from telecommunications, finance, marketing, media and software to legal consultancy, to broadly foster legal compliance and innovation. Technical, interdisciplinary and soft skills will give ESRs a head start towards future leadership in industry, academia, or government.
Scientific field
Not validated
Funding scheme
MSCA-ITN - Marie Skłodowska-Curie Innovative Training Networks (ITN)
Coordinator
30167 Hannover
Germany