Co-Creating Misinformation-Resilient Societies

Periodic Reporting for period 2 - Co-Inform (Co-Creating Misinformation-Resilient Societies)

Reporting period: 2019-04-01 to 2021-07-31

Mis- and disinformation is a problem for society, particularly during pandemics, because of the detrimental effects it has on health-related and political behaviour. These effects have been called an “infodemic”.
Co-Inform addressed mis- and disinformation on social media by applying co-creation with stakeholders to build digital tools to combat disinformation, as proposed in the OECD publication "Combating Misinformation, An Ecosystem in Co-Creation". Mis- and disinformation is information that, whether by design or not, is false and causes factually erroneous beliefs and, following from such beliefs, detrimental behaviours in society.
The 2000s have witnessed a rapid development of social media that has facilitated the spread of both information and misinformation on everything from local neighbourhoods to global issues. Studies analysing misinformation on social media platforms have found that mis- and disinformation travel faster than trustworthy information.
In an infodemic, society sees deliberate attempts to disseminate wrong information to undermine the public health response, e.g. by spreading mis- and disinformation on vaccines, masks, etc. This has a harmful effect on citizens' physical and mental health. Such information can increase stigmatization, threaten precious health gains, and lead to poor observance of public health measures, thus reducing their effectiveness and endangering countries’ ability to stop the pandemic. In the end, mis- and disinformation costs lives.

The general aims of Co-Inform were to co-create, with citizens, journalists, and policymakers, a system for (a) detecting and combating a variety of misinforming posts and articles on social media, (b) supporting, persuading, and nourishing misinformation-resilient behaviour, (c) understanding and predicting which misinforming news and content are likely to spread across which parts of the network and demographic sectors, and (d) providing policymakers with advanced misinformation analysis to support their policy-making process and validation.
The most tangible results of the project are the Co-Inform plug-in and the dashboard.
These are only the front end of the Co-Inform ecosystem, which also includes several back-end modules that find, define, and track mis- and disinformation.
Every important design choice was based on workshops and evaluations with stakeholders.
A major contribution of the Co-Inform project lies in activating the users themselves, in several ways.
First, it makes users aware that the Twitter content they are reading or viewing, and by extension other online content, could contain misinformation, and it tells them why that content could be misinforming. Second, it helps users think before sharing or spreading information that is considered to contain misinformation. Third, it involves users in the co-creation of an anti-misinformation model that helps others, through labelling misinformation or requesting reviews of content that might contain misinformation. This combination of qualities gives the Co-Inform tools a unique place in the market.
The Co-Inform project also managed to increase the efficiency of combating disinformation through the Co-Inform ecosystem's unique combination of machine learning and human fact-checking. Co-Inform also researched the correlation of specific characteristics, such as gender and psychographics, to study the impact of the tools on misinformation spread and user behaviour.
The Co-Inform team achieved this in several ways. One was in how the project reviewed the credibility of online content. The previous state of the art proposed incremental, individual solutions, similar to e.g. ClaimReview, which enables collaboration between fact-checkers and search engines. Co-Inform brought this to a more generic and wider level of collaboration for all, by making tools that allow for continuous, co-created reviewing of the credibility of online content, as sketched below. Such a more conceptual model of work was needed to enable integration within Co-Inform. As an added benefit, such a model enables and encourages explainability of the reviewing process.
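To make the starting point concrete, the sketch below shows what a minimal ClaimReview-style record (the schema.org markup mentioned above) looks like. The field names are standard schema.org/ClaimReview properties; all concrete values are illustrative placeholders, not data from the project.

```python
# A minimal ClaimReview-style record (schema.org/ClaimReview).
# Field names are standard; all concrete values are placeholders.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/reviews/123",  # hypothetical review page
    "claimReviewed": "The claim text as it appeared online",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "datePublished": "2021-01-15",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,                 # position on the scale below
        "worstRating": 1,
        "bestRating": 5,
        "alternateName": "Mostly false",  # human-readable verdict
    },
    "itemReviewed": {
        "@type": "Claim",
        "appearance": [{"@type": "CreativeWork",
                        "url": "https://example.com/misinforming-article"}],
    },
}
```

A record like this links one claim to one fact-checker's verdict; Co-Inform's contribution was to generalise from such isolated records to a continuous, co-created reviewing process involving all stakeholders.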
The Co-Inform project also learned about the limitations of current applications of AI as we pushed them to the limit, in particular transformer-based language models and the available datasets for evaluating misinformation. Most existing datasets that feed the AI oversimplify the problem as a binary (true/false) issue; Co-Inform learned that real-world content is much more complex and nuanced than that. We also learned about human-system interaction issues when trying to explain content credibility using a more nuanced credibility model.
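As an illustration of what "more nuanced than binary" can mean in practice, the sketch below maps a continuous credibility score plus a confidence value to graded labels. The score range, thresholds, and label names here are assumptions made for illustration, not Co-Inform's actual model.

```python
def credibility_label(score: float, confidence: float) -> str:
    """Map a credibility score in [-1, 1] and a confidence in [0, 1]
    to a graded label. Thresholds and label names are illustrative
    assumptions, not the project's actual model."""
    if confidence < 0.5:
        return "not verifiable"      # too little evidence to judge either way
    if score > 0.6:
        return "credible"
    if score > 0.2:
        return "mostly credible"
    if score >= -0.2:
        return "uncertain"
    if score >= -0.6:
        return "mostly not credible"
    return "not credible"

# A binary (true/false) dataset collapses all of this into two classes,
# discarding both the middle of the scale and the confidence signal.
print(credibility_label(0.1, 0.8))   # -> "uncertain"
```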
The Co-inform team performed one of the first studies on the spread of fact-checks on Twitter, alongside the misinformation they are correcting. This study highlighted the resistance of some misinformation to fact-checks, and the impact of the release of fact-checks on the spread of misinformation.
The project also created a large dataset of well over 120K misinforming URLs and their corresponding fact-checks. This dataset was used extensively in the project and integrated information published by dozens of registered fact-checkers worldwide.
Co-Inform also created the first tool that enables the assessment of a Twitter account with regard to its interaction with misinformation over time. The tool, MisinfoMe (https://misinfo.me/) collects the whole timeline of a given Twitter account, compares the URLs shared by that account with those assessed by fact-checkers, and highlights which were true, false, or mixed, and which belong to a credible or non-credible source.
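The core comparison step can be pictured as follows: canonicalise every URL the account has shared, then look each one up in an index built from fact-checkers' assessments. This is a minimal sketch; the function and field names are hypothetical, and MisinfoMe's actual implementation is not described in this summary.

```python
from urllib.parse import urlsplit

def canonical(url: str) -> str:
    """Normalise a URL so trivially different forms match
    (drop scheme, 'www.', query string, fragment, trailing slash)."""
    parts = urlsplit(url)
    return parts.netloc.lower().removeprefix("www.") + parts.path.rstrip("/")

def assess_account(shared_urls, fact_check_index):
    """Compare an account's shared URLs against a hypothetical index
    mapping canonical URLs to fact-checker verdicts."""
    counts = {"true": 0, "false": 0, "mixed": 0, "unknown": 0}
    for url in shared_urls:
        verdict = fact_check_index.get(canonical(url), "unknown")
        counts[verdict] += 1
    return counts

# Illustrative usage with made-up data:
index = {"example.com/hoax-article": "false"}
print(assess_account(["https://www.example.com/hoax-article/"], index))
# -> {'true': 0, 'false': 1, 'mixed': 0, 'unknown': 0}
```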
Co-Inform also developed a Twitter bot that automatically replies to misinforming tweets to draw their authors' attention to related fact-checks and the fact-checkers' assessment of the article linked in the tweet.
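A minimal sketch of such a reply bot is shown below, using the tweepy library's v2 client (tweepy.Client and create_tweet are real tweepy APIs). The credentials and the fact-check lookup function are placeholders; the project's actual bot logic is not detailed in this summary.

```python
import tweepy

# Placeholder credentials for illustration only.
client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

def find_fact_check(linked_url: str):
    """Hypothetical lookup: return (fact-check URL, verdict) for a
    link already assessed by fact-checkers, or None."""
    ...

def reply_with_fact_check(tweet_id: int, linked_url: str) -> None:
    match = find_fact_check(linked_url)
    if match is None:
        return  # nothing known about this link; stay silent
    fact_check_url, verdict = match
    client.create_tweet(
        text=(f"The article linked here was rated '{verdict}' "
              f"by fact-checkers: {fact_check_url}"),
        in_reply_to_tweet_id=tweet_id,  # threads the reply under the tweet
    )
```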
The project also completed a user-based assessment of existing tools for the detection of misinformation, and highlighted their strengths, weaknesses, and gaps.
The project results have facilitated the emergence of new, sustainable models and tools with which citizens, journalists, and policymakers can enact misinformation-resilient behaviour, grounded in evolved and contextualised co-creation methodologies.
This is done by strengthening stakeholders’ resilience to misinformation through promoting: (1) empowerment, raising individual and collective awareness of current misinformation content and sources; (2) engagement, fostering networking and cross-communication between all stakeholders; (3) access to reliable and verifiable information, informing stakeholders of advanced misinformation analysis results and predictions; and (4) encouragement of all stakeholders to play a role in detecting, in/validating, and combating misinformation. Co-Inform has built a platform for addressing this societal challenge: the plug-in raises citizens' awareness, and the co-creative aspect of offering corrections promotes networks of cross-communication between all stakeholders.
OECD Publication "Combating Misinformation, An Ecosystem in Co-Creation"