AI fact-checkers aid the battle against fake news
Launched in January 2018, the EU-funded FANDANGO project experienced first-hand the proliferation and evolution of media disinformation. The recent United States election, the aftermath of the United Kingdom’s Brexit referendum and, of course, the COVID-19 pandemic have all underlined the challenge that ‘fake news’ presents to our understanding of complex events. “Defining media disinformation is in itself incredibly complex,” explains FANDANGO project coordinator Francesco Saverio Nucci, application research director at Engineering R&D Labs, Italy. “Even the meaning of the term ‘fake news’ has changed, as it has been adopted for more political ends.” Another challenge is that one person’s interpretation of what constitutes media disinformation is not necessarily the same as another’s. And if it is difficult even for humans to agree on a baseline for identifying media disinformation, then applying artificial intelligence algorithms to identify ‘fake news’ is clearly not a straightforward exercise.
Tackling media disinformation
Nonetheless, this was the key aim of the FANDANGO project. “Our goal was to try to test and validate various AI tools that could be used to identify disinformation,” adds Nucci. Some of the issues examined included climate change, European policies and immigration. First, the project team applied machine learning tools to identify ‘fake’ images and so-called deepfake videos – videos that have been manipulated. AI and natural language processing were also applied to text, to help flag content that might be suspicious. “We made a number of findings,” says Nucci. “First, we found that it is not possible to eliminate the human in this context. AI can provide support, but a media professional still needs to be at the end of the line. AI is useful but cannot completely solve the problem of fake news.” Second, the team found that it is not enough for the software simply to tell the journalist that something is suspected to be ‘fake’. The journalist wants to know why an image or a text is suspicious. The project team also applied machine learning tools to better understand how misinformation spreads across networks. Nucci believes that another important element of the project has been the tight collaboration between technology researchers and those from the social sciences. “I have technical people on my team who are now experts in media literacy,” he remarks. “On the other side, we have seen how critical it is that journalists begin to understand how AI can help solve this challenge of disinformation.”
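FANDANGO’s actual tools are not described in detail here, but the point about explainability – that a journalist wants to know *why* a text was flagged, not just that it was – can be illustrated with a minimal sketch. The corpora, words and labels below are entirely hypothetical; the sketch uses a tiny Laplace-smoothed Naive Bayes model and reports each word’s log-odds contribution, so the output says which words drove the decision:

```python
import math
from collections import Counter

# Hypothetical toy corpora (not FANDANGO data): short texts labelled
# by hand as disinformation-like ("fake") or ordinary reporting ("real").
fake = ["miracle cure doctors hate", "shocking secret they hide", "miracle secret cure"]
real = ["officials report new policy", "study finds modest effect", "report on policy study"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

fc, ft = train(fake)
rc, rt = train(real)
V = len(set(fc) | set(rc))  # vocabulary size for Laplace smoothing

def word_logodds(w):
    # Positive values mean the word is more typical of the 'fake' corpus.
    return math.log((fc[w] + 1) / (ft + V)) - math.log((rc[w] + 1) / (rt + V))

def explain(text):
    contribs = [(w, word_logodds(w)) for w in text.split()]
    score = sum(s for _, s in contribs)
    label = "suspicious" if score > 0 else "ordinary"
    # Sort so the journalist sees the most incriminating words first.
    return label, sorted(contribs, key=lambda p: -p[1])

label, contribs = explain("shocking miracle cure")
```

A real system would of course use far richer features and models, but the design point survives scaling: whatever the model, surfacing per-feature contributions is what turns a bare “suspicious” verdict into something a media professional can act on.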
Developing media literacy
From this research, a modular platform has been developed, with machine learning tools including language processing for text and data investigation for sources. While the work is still in its early stages, Nucci envisages that this platform can be further developed and eventually marketed to media companies. “In order to improve these tools, we need more data,” he adds. “The more data you have, the better the algorithms will work.” The project has also underlined the need to train media professionals in data literacy, and in how to manage data better. Misinterpreting statistics relating to the percentage of COVID patients who have been vaccinated, for example, has helped to fuel vaccine scepticism. “In addition to improving machine learning, there are a number of research aspects that we will continue to look at,” says Nucci. “These include media literacy, and how ‘fake news’ is propagated in social networks.”
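The vaccination statistic Nucci alludes to is a classic base-rate effect, and a short calculation (with illustrative numbers, not figures from the project) shows why the raw percentage misleads. If most of the population is vaccinated, then even a highly effective vaccine leaves vaccinated people as the majority of patients:

```python
# Illustrative base-rate calculation: why "most patients are vaccinated"
# does not mean the vaccine fails. All numbers are hypothetical.
coverage = 0.90        # share of the population vaccinated
effectiveness = 0.80   # vaccine cuts the risk of illness by 80%

risk_unvax = 1.0                  # relative risk if unvaccinated
risk_vax = 1.0 - effectiveness    # relative risk if vaccinated

patients_vax = coverage * risk_vax          # 0.90 * 0.2 = 0.18
patients_unvax = (1 - coverage) * risk_unvax  # 0.10 * 1.0 = 0.10
share_vax = patients_vax / (patients_vax + patients_unvax)
# share_vax ≈ 0.643: about 64 % of patients are vaccinated, even though
# each vaccinated person is five times less likely to fall ill.
```

Reporting the 64 % figure without the 90 % coverage figure is exactly the kind of data-literacy gap the project identified among media professionals.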
Keywords
FANDANGO, media, disinformation, political, AI, algorithms, journalists, misinformation