Paving the way for AI we can trust
AI is changing the world around us. But as its presence in our everyday lives grows, it becomes increasingly important to ensure that AI is ethical, lawful and robust. As part of its efforts to advance human-centred trustworthy AI, the EU-funded TAILOR project is developing a series of powerful instruments for AI research and collaboration. One of these instruments is the Strategic Research and Innovation Roadmap, which lays the foundations of trustworthy AI in Europe for the period 2022-2030.
Laying foundations for the future
Why is trustworthy AI so important? “The development of Artificial Intelligence is in its infancy,” explains Prof. Fredrik Heintz of TAILOR project coordinator Linköping University, Sweden, in an article posted on ‘Innovation News Network’. “When we look back in 50 years at what we are doing today, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now.” The roadmap aims to boost research on trustworthy AI by identifying the major scientific research challenges. Three objectives are outlined. The first is to provide guidelines for strengthening and enlarging the pan-European network of research excellence centres on the foundations of trustworthy AI. The second is to define paths for advancing the scientific foundations of trustworthy AI and translating them into technical requirements to be widely adopted by industry. The final objective is to identify directions for fostering collaboration between academic, industrial, governmental and community stakeholders on the foundations of trustworthy AI.
Speaking the same language
To help non-experts, especially researchers and students, gain a general understanding of the challenges involved in developing ethical and trustworthy AI systems, the TAILOR project has also published a Handbook of Trustworthy AI. The handbook is an encyclopaedia of the major scientific and technical terms related to the subject. Dr Francesca Pratesi of TAILOR project partner the National Research Council of Italy provides an overview of the work on the project website: “Trustworthy AI is a term that encompasses a variety of different dimensions, namely explainability, safety, fairness, accountability, privacy and sustainability. Some of them (such as security and privacy protection) are more consolidated, while others (e.g. explainability and sustainability) are relatively newer. Nevertheless, we acknowledge a certain lack of common ground in both terms and definitions. Indeed, our ambition is to create a common language, starting from existing taxonomies and definitions where possible, and going further in the hierarchies of various concepts. This handbook will be beneficial to both beginners (who can learn the basics of the topic and will particularly benefit from the summaries and examples) and specialists (who can go deeper with the suggested readings, links and bibliographies, and who can ask to contribute to the project given the living nature of the handbook).”
The TAILOR (Foundations of Trustworthy AI - Integrating Reasoning, Learning and Optimization) project’s contributions will help to reduce AI-associated risks and maximise related opportunities for European society. “People often regard AI as a technology issue, but what’s really important is whether we gain societal benefit from it,” states Prof. Heintz in the ‘Innovation News Network’ article. “If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people.” For more information, please see: TAILOR project website
Keywords
TAILOR, artificial intelligence, AI, trustworthy AI, roadmap, handbook