CORDIS - EU research results

ASSESSMENT AND ENGINEERING OF EQUITABLE, UNBIASED, IMPARTIAL AND TRUSTWORTHY AI SYSTEMS

CORDIS provides links to public deliverables and publications of HORIZON projects.

Links to deliverables and publications of FP7 projects, as well as links to some specific result types such as datasets and software, are dynamically retrieved from OpenAIRE.

Deliverables

Data management plan

This deliverable will contain the actions for managing and protecting the data collected during the project, and the agreements for the joint use of data involving partners that did not participate in data collection.

Requirements

This deliverable reports the requirements for the methodology, awareness & diagnosis, and repair & mitigation sub-components.

Methodology for creating synthetic datasets

This deliverable provides an overview of data generation methods and functional data synthesizer tool(s) for the provided reference data sets.
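The deliverable does not specify which generation methods the tool(s) use. As a hedged illustration only, one common baseline for tabular data is to resample each column independently from its empirical marginal distribution; this preserves per-column statistics but deliberately ignores inter-column correlations, which more advanced synthesizers model:

```python
import numpy as np

def synthesize_marginals(data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Draw synthetic rows by resampling each column independently
    from its observed values. A deliberately simple baseline: it
    preserves per-column distributions but destroys correlations
    between columns."""
    rng = np.random.default_rng(seed)
    cols = [rng.choice(data[:, j], size=n_samples) for j in range(data.shape[1])]
    return np.stack(cols, axis=1)

# Toy reference data set (hypothetical, for illustration)
reference = np.array([[1, 10], [2, 20], [3, 30], [4, 40]])
synthetic = synthesize_marginals(reference, n_samples=100)
print(synthetic.shape)  # (100, 2)
```

Every synthetic value is drawn from the reference columns, so per-column value ranges are preserved by construction.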

Fair-by-design software engineering methodologies and architecture. Preliminary compendium

This deliverable provides the first version of fair-by-design software engineering methodologies to design and develop fair AI systems that adhere to EGTAI.

First dissemination, communication and exploitation plan

A detailed communication and dissemination plan will be defined in the first months of the project with the objective of building a strong and recognizable identity. This plan will be updated throughout the project based on the evaluation of its impacts. It will include a detailed planning of all communication actions, including key messages, target audiences and key performance indicators. Moreover, an exploitation strategy will be defined to find the right path to continued operation of AEQUITAS activities and to ensure a long-term impact after the end of the project. Exploitable assets developed by the research partners will be assessed for sustainable exploitation in terms of social impact (e.g. user acceptance), policy impact (e.g. recommendations to adapt legislation) and business impact (e.g. open-source licensing).

Second dissemination, communication and exploitation plan

Second iteration of D8.1

Project Handbook

The Project Handbook brings together a wide range of general operational information, including contact details, roles and responsibilities of the partners according to the governance structure, operational and reporting processes, templates, and procedures for the preparation of deliverables.

Architecture design of AEQUITAS

This deliverable will describe the architecture design and the technologies to be used in AEQUITAS.

First report on dissemination and communication activities

A detailed list of the dissemination and communication activities of the project partners for the first half of the project.

Social, legal and policy landscapes of AI-fairness 1st version

This deliverable provides a preliminary overview of the necessary social, legal and policy elements for AEQUITAS, consisting of: (i) a preliminary insight into the main manifestations of AI unfairness in society; (ii) the level of awareness and understanding, and narratives, of AI-fairness in society; (iii) a preliminary methodology to identify the relevant stakeholders to involve in the design process of AI; (iv) a preliminary overview of existing and anticipated rules and regulations dealing with AI-fairness; (v) a preliminary overview of relevant policy developments around AI-fairness; and (vi) a preliminary AI-fairness methodology to follow in the design of AI systems, from a social, legal and policy perspective. Because the social, legal and policy landscapes of AI-fairness are constantly evolving, updated versions of this deliverable will be provided.

Fair-by-design sociological, legal methodologies, preliminary compendium

This deliverable provides a very preliminary version of social and legal methodologies to follow in the design of AI systems. It will be exploited in the early stage of the project to collect requirements in WP2.

Diagnostic tools for bias-1st version

This deliverable provides the first version of state-of-the-art techniques to detect and measure undesirable biases contained in AI systems.
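The deliverable text names no specific metric. One widely used measure of group-level bias, shown here purely as a hedged sketch and not necessarily the diagnostic AEQUITAS adopts, is the statistical parity difference: the gap in positive-prediction rates between an unprivileged and a privileged group.

```python
def statistical_parity_difference(preds, groups, privileged):
    """P(pred=1 | unprivileged) - P(pred=1 | privileged).
    A value of 0 indicates parity; negative values mean the
    unprivileged group receives fewer positive outcomes."""
    priv = [p for p, g in zip(preds, groups) if g == privileged]
    unpriv = [p for p, g in zip(preds, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# Toy predictions for two groups (hypothetical data)
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
spd = statistical_parity_difference(preds, groups, privileged="A")
print(spd)  # -0.5: group B gets far fewer positive outcomes
```

Here group A's positive rate is 0.75 and group B's is 0.25, so the metric flags a substantial disparity.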

Educational and awareness raising tools on social and legal elements of AI fairness

This deliverable provides 3 internal knowledge sessions to inform the project partners on the social and legal elements of AI-fairness at crucial moments of the project (M03 to feed into WP2, M06 to feed into WP3, 4 and 5 and M18 to feed into WP7). It also provides open knowledge and awareness raising resources such as explainers, infographics, whitepapers, and expert sessions on the social and legal elements of AI fairness aimed at external stakeholders.

Data, algorithms, and interpretation bias mitigation methods 1st version

This deliverable provides the first version of state-of-the-art techniques to repair and mitigate undesirable biases contained in data and algorithms, as well as in socio-technical factors.
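As a hedged illustration of one classical pre-processing repair technique (reweighing, in the style of Kamiran & Calders; not necessarily a method AEQUITAS uses), each (group, label) cell is assigned the weight P(group)·P(label) / P(group, label), so that group membership and label become statistically independent in the weighted training data:

```python
from collections import Counter

def reweighing(groups, labels):
    """Return per-example weights w = P(g) * P(y) / P(g, y).
    After weighting, group membership and label are independent,
    removing one source of historical bias from the training data."""
    n = len(labels)
    p_g = Counter(groups)            # marginal counts per group
    p_y = Counter(labels)            # marginal counts per label
    p_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy biased data: group A gets label 1 twice as often as group B
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing(groups, labels)
print(weights)  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

After weighting, each (group, label) cell carries the same total mass (1.5), i.e. the weighted joint distribution factorizes into its marginals.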

Publications

Unlocking Insights and Trust: The Value of Explainable Clustering Algorithms for Cognitive Agents

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: WOA 2023 – 24th Workshop “From Objects to Agents”

FAiRDAS: Fairness-Aware Ranking as Dynamic Abstract System

Authors: Eleonora Misino, Roberta Calegari, Michele Lombardi, Michela Milano
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)

Unveiling Opaque Predictors via Explainable Clustering: The CReEPy Algorithm

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: Proceedings of the 2nd Workshop on Bias, Ethical AI, Explainability and the role of Logic and Logic Programming, BEWARE 2023 co-located with the 22nd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2023)

Assessing and Enforcing Fairness in the AI Lifecycle

Authors: Roberta Calegari, Gabriel G. Castañé, Michela Milano, Barry O'Sullivan
Published in: 2023
Publisher: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23)
DOI: 10.24963/ijcai.2023/735

A geometric framework for fairness

Authors: Alessandro Maggio, Luca Giuliani, Roberta Calegari, Michele Lombardi, Michela Milano
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)

ExACT Explainable Clustering: Unravelling the Intricacies of Cluster Formation

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: International Conference on Principles of Knowledge Representation and Reasoning (KR2023)

Achieving Complete Coverage with Hypercube-Based Symbolic Knowledge-Extraction Techniques

Authors: Federico Sabbatini, Roberta Calegari
Published in: 2023
Publisher: Proceedings of the 1st Workshop on Fairness and Bias in AI co-located with 26th European Conference on Artificial Intelligence (ECAI 2023)
DOI: 10.1007/978-3-031-50396-2_10
