CORDIS - EU research results

Unravelling the deepfake

A new white paper offers insight into AI technologies and their role in spreading disinformation, shedding light on the challenges and opportunities ahead.

With their ever-increasing ability to generate high-quality images, text and audio, generative AI models such as DALL-E and ChatGPT are rapidly transforming many industries worldwide. However, this transformation comes at a price. As generative AI technology becomes ever more fluent and affordable, its misuse in large-scale disinformation campaigns grows, undermining platforms' and fact-checkers' ability to tackle disinformation and making it very difficult to tell real from fake.

A new white paper delves into the connection between generative AI and disinformation. The result of a collaboration between the EU-funded vera.ai, TITAN, AI4Media and AI4TRUST projects, it discusses recent advances, challenges and opportunities in this critical field. “In our new dynamic world of AI and disinformation, this white paper serves as a cornerstone, shedding light on the intricate interplay between technological advancement, ethical considerations, and regulatory imperatives,” remarks contributing author Francesco Saverio Nucci of TITAN project coordinator Engineering Ingegneria Informatica, Italy, in a news item posted on the project’s website.

The paper explores the disinformation-generation capabilities of state-of-the-art AI and highlights the prevailing ethical and legal challenges, as well as opportunities for innovation. One key theme is the evolution of generative AI, with a focus on the different kinds of synthetically generated disinformation, their prevalence and their impact on elections. “The threat posed by generative AI to the democratic processes and election integrity is unfortunately no longer hypothetical … and cannot be dismissed as fear mongering,” states the report. The disinformation campaigns enabled by AI-generated content ultimately “undermine citizens’ trust in political leaders, elections, the media, and democratic governments.”

The challenges

The paper discusses the varied challenges of detecting and debunking disinformation and the recent advances made in this area, highlighting selected AI-powered tools that can help media professionals verify content and counter disinformation-related risks. It also describes AI-based services being developed by TITAN to stimulate critical thinking in citizens, coaching them to, among other things, recognise clickbait content and verify the authors and sources of online content.

The authors go on to analyse the ethical and legal issues surrounding the use of AI technologies to spread false information. These include data quality challenges, copyright concerns, and the “visible power imbalance between content creators, academics, and citizens on one hand and the large technology companies (e.g. OpenAI, Microsoft, Google, and Meta) developing and selling generative AI models on the other.” Emphasis is placed on the need for robust regulatory frameworks.

One major challenge outlined is that state-of-the-art large language models are not designed to tell the truth: they are trained to generate plausible statements based on statistical patterns, which makes them prone to producing misinformation. Other challenges include overcoming citizens’ ill-founded trust in AI, developing new tools capable of detecting AI-generated content, and finding the research funds to make this possible. There are, however, also opportunities for innovation, according to the report, including the development of state-of-the-art detection models and the enhancement of AI-driven counter-disinformation strategies.
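The point that language models optimise for statistical plausibility rather than truth can be illustrated with a deliberately tiny sketch (not from the white paper; the corpus and sentences are invented for illustration). A toy bigram model trained on a corpus containing both a true and a false statement will happily continue a prompt with either, because both are equally “plausible” to it:

```python
import random
from collections import defaultdict

# Invented toy corpus: contains one true and one false claim about France.
corpus = (
    "the capital of france is paris . "
    "the capital of france is lyon . "
    "the capital of italy is rome ."
).split()

# Count bigram statistics: which word tends to follow which.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Sample a continuation purely from co-occurrence statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# The model has no notion of truth: "paris" and "lyon" are both
# statistically valid continuations, so fluency here is not accuracy.
print(generate("capital"))
```

Real LLMs are vastly more sophisticated, but the underlying training objective (predict a plausible next token) is the same, which is why the report stresses detection tools rather than assuming models will self-correct.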
The white paper, prepared with support from the vera.ai (VERification Assisted by Artificial Intelligence), TITAN (AI for Citizen Intelligent Coaching against Disinformation), AI4Media (A European Excellence Centre for Media, Society and Democracy) and AI4TRUST (AI-based-technologies for trustworthy solutions against disinformation) projects, could help guide future research and policy formulation in this crucial domain. For more information, please see: the vera.ai project website, the TITAN project website, the AI4Media project website and the AI4TRUST project website.

Keywords

vera.ai, TITAN, AI4Media, AI4TRUST, disinformation, AI, generative AI