Project description
Fighting online disinformation with trustworthy AI solutions
Online media is a minefield of disinformation and misleading or manipulated news. The spread of disinformation is difficult to contain, resulting in increased risks to public safety and health. Moreover, verifying the credibility of information sources and uncovering disinformation campaigns remain extremely challenging. In this context, the EU-funded vera.ai project will cooperate with media professionals and researchers to build trustworthy AI solutions that combine a fact-checker-in-the-loop approach with AI models continuously updated on new sources and multimodal content verified in the InVID-WeVerify plugin and the Truly Media/EDMO platform. The project will facilitate the fight against complex disinformation techniques in all formats.
Objective
Online disinformation and fake media content have emerged as a serious threat to democracy, the economy and society. Recent advances in AI have enabled the creation of highly realistic synthetic content and its artificial amplification through AI-powered bot networks. As a result, it is extremely challenging for researchers and media professionals to assess the veracity and credibility of online content and to uncover highly complex disinformation campaigns.
vera.ai seeks to build professional, trustworthy AI solutions against advanced disinformation techniques, co-created with and for media professionals and researchers, and to lay the foundation for future research on AI against disinformation.
Key novel characteristics of the AI models will be fairness, transparency (including explainability), robustness against concept drift, continuous adaptation to the evolution of disinformation through a fact-checker-in-the-loop approach, and the ability to handle multimodal and multilingual content. Recognising the perils of AI-generated content, we will develop tools for deepfake detection in all formats (audio, video, image, text).
vera.ai adopts a multidisciplinary co-creation approach to AI technology design, coupled with open-source algorithms. A key unique proposition is the grounding of the AI models in continuously collected fact-checking data, gathered from the tens of thousands of instances of real-life content verified in the InVID-WeVerify plugin and the Truly Media/EDMO platform. Social media and web content will be analysed and contextualised to expose disinformation campaigns and to measure their impact.
Results will be validated by professional journalists and fact-checkers from project partners (DW, AFP, EUDL, EBU), by external participants (through our affiliation with EDMO and seven EDMO Hubs), by the community of more than 53,000 users of the InVID-WeVerify verification plugin, and by media literacy, human rights and emergency response organisations.
Funding Scheme
HORIZON-RIA - HORIZON Research and Innovation Actions
Coordinator
57001 Thermi, Thessaloniki
Greece