Project description
AI-based antidote to disinformation online
The spread of fake news (disinformation) on social media affects society at both individual and collective levels. Defined as the intentional spread of unreliable information, disinformation circulates far faster than humans can monitor and analyse it. In this context, the EU-funded AI4TRUST project will develop a hybrid system that combines machine-human cooperation with advanced AI solutions to support media professionals and policymakers in tackling disinformation. The system will monitor numerous online social platforms in near real time and, by analysing multimodal (text, audio, visual) and multilingual content with novel AI algorithms, flag content at high risk of being disinformation for expert review.
Objective
Increasing evidence shows that the spread of disinformation has a non-negligible impact on our society at both individual and collective levels. From public health to climate change, it is of paramount importance to promptly identify emerging disinformation signals, such as content from known unreliable sources and new narratives, especially on online social media, in order to provide media professionals and policy makers with trustworthy elements to extinguish disinformation outbreaks before they spin out of control. However, monitoring and analyzing large volumes of online content is well beyond human capacity alone. Regardless of the socio-psychological and behavioral reasons behind the forging of information, disinformation is produced at a rate far greater than the rate at which it can be analyzed and its effects adequately mitigated.
AI4TRUST will provide a hybrid system in which machines cooperate with humans, pitting advanced AI solutions against advanced disinformation techniques to support media professionals and policy makers. The system will monitor multiple online social platforms in near real time, filtering out social noise and analyzing multimodal (text, audio, visual) content in multiple languages (up to 70% coverage in the EU) with novel AI algorithms. It will cooperate in an automated way with an international network of human fact-checkers, who will be engaged periodically and will regularly provide validated data to update our algorithms. The resulting quantitative indicators, including infodemic risk, will be examined through the lens of the social and computational social sciences to build the trustworthy elements media professionals need to create customizable and reliable data reports.
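To make the hybrid workflow concrete, the sketch below illustrates, in very simplified form, the kind of human-in-the-loop triage loop described above: an automatic model scores incoming posts for disinformation risk, high-risk items are escalated to a human fact-checker, and the validated label is fed back to update the model incrementally. Everything here (the text-only features, the scikit-learn classifier, the RISK_THRESHOLD value and the fact_checker_review helper) is a hypothetical illustration under simplifying assumptions, not the AI4TRUST implementation, which targets multimodal, multilingual content with far richer models.

```python
# Minimal human-in-the-loop triage sketch (illustrative only; names and
# threshold are assumptions, not part of the AI4TRUST system).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

RISK_THRESHOLD = 0.8  # assumed cut-off for routing content to expert review

vectorizer = HashingVectorizer(n_features=2**18)        # language-agnostic text features
model = SGDClassifier(loss="log_loss", random_state=0)  # supports incremental updates

# Bootstrap the classifier with a small seed of already-labelled posts (1 = disinformation).
seed_texts = ["miracle cure suppressed by doctors", "city council approves new bus routes"]
seed_labels = [1, 0]
model.partial_fit(vectorizer.transform(seed_texts), seed_labels, classes=[0, 1])


def fact_checker_review(post: str) -> int:
    """Placeholder for the human step: an expert returns a validated label."""
    return int(input(f"Is this disinformation (1/0)? {post!r} ") or 0)


def triage(stream_of_posts):
    """Score incoming posts, escalate high-risk ones, and learn from validated labels."""
    for post in stream_of_posts:
        risk = model.predict_proba(vectorizer.transform([post]))[0, 1]
        if risk >= RISK_THRESHOLD:
            label = fact_checker_review(post)                         # human validation step
            model.partial_fit(vectorizer.transform([post]), [label])  # update the algorithm
```

The incremental partial_fit call mirrors the idea that fact-checkers "frequently provide validated data to update our algorithms"; in practice the validated labels would be batched and used in periodic retraining rather than applied one post at a time.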
We expect that the AI4TRUST system, based on a human-centred approach to technology development aligned with European social and ethical values, will be integrated into the standard toolbox of data analysts working on disinformation.
Funding Scheme
HORIZON-RIA - HORIZON Research and Innovation Actions
Coordinator
38122 Trento
Italy