CORDIS - EU research results

Fake nEws Risk MItigator

Periodic Reporting for period 1 - FERMI (Fake nEws Risk MItigator)

Reporting period: 2022-10-01 to 2024-03-31

The FERMI project pursues five core objectives. Three of these are technical and concern the development of the FERMI platform and its components, first and foremost the “Key Technology Offerings.” Two further, non-technical objectives address validation planning (among other things, in the form of use cases) and outreach efforts.
In line with the objective on validation planning, the FERMI consortium has outlined three use cases covering law-enforcement agencies’ (LEAs’) responses to disinformation campaigns that involve illegal messaging shared on social media. The Key Technology Offerings will subsequently be validated in the context of these use cases.
The first use case (on violent right-wing extremism) covers crucial steps required to facilitate an investigation, such as identifying human-operated accounts (whose owners, unlike those behind bot-operated accounts, qualify as subjects of investigation) and collecting evidence. To this end, human-operated accounts and the spread of disinformation on social media must be identified. Carrying out these tasks manually is remarkably time-consuming and error-prone, so the Disinformation sources, spread and impact analyser should greatly facilitate the work of LEAs.
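The report does not disclose how the analyser distinguishes human- from bot-operated accounts. As a minimal sketch, assuming simple behavioural features (posting volume and the regularity of posting intervals, both commonly used signals in bot-detection literature), a heuristic flagger might look like this; all names and thresholds below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    mean_seconds_between_posts: float
    interval_stddev: float  # spread of inter-post intervals, in seconds

def likely_bot(account: Account,
               max_posts_per_day: float = 100.0,
               min_interval_stddev: float = 5.0) -> bool:
    """Flag an account as likely bot-operated.

    Illustrative heuristics only: bots tend to post at very high
    volume and at near-constant intervals, so a tiny standard
    deviation of inter-post intervals is treated as suspicious.
    """
    if account.posts_per_day > max_posts_per_day:
        return True
    if account.interval_stddev < min_interval_stddev and account.posts_per_day > 20:
        return True
    return False

# A high-volume, metronomic poster vs. an ordinary user.
bot = Account(posts_per_day=500, mean_seconds_between_posts=170, interval_stddev=1.2)
human = Account(posts_per_day=8, mean_seconds_between_posts=10800, interval_stddev=4200.0)
print(likely_bot(bot), likely_bot(human))  # → True False
```

A production system would of course combine many more features (account age, network structure, content similarity) in a trained classifier rather than fixed thresholds.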
The second use case (on violent Covid-related extremism) covers the need to conduct a proper threat assessment, including grasping the overall atmosphere surrounding the disinformation campaign and estimating the likely crime landscape. Threat assessments are likely to be required in the event of large-scale insecurity, and a reliable estimate of the future crime landscape can greatly facilitate such assessments and enable LEAs to take precautionary measures.
The third use case (on violent left-wing extremism) addresses the assessment of the ramifications of the criminal activities by estimating the crimes’ impact (in terms of cost) and identifying suitable counter-measures, which can greatly improve an end-user’s response to the disinformation messaging and might even help remove the root cause of the problem.
Thanks to the integration of all components providing the above services into a joint platform, each user can apply the tools to the whole range of disinformation-induced challenges affecting LEAs.
The validation efforts will be guided by the revised and amended experimentation protocol. In addition to the above-mentioned use cases, the experimentation protocol includes end-user requirements (identified through an end-user survey and interviews), KPIs (enabling the consortium to measure whether those requirements have been met), technical guidance, validation questionnaires, informed-consent procedures, etc.
Moreover, a communication, dissemination and exploitation plan and a stakeholder engagement plan have been presented. Some of the GA’s communication and dissemination KPIs regarding the website and social media have already been exceeded in RP1. An exploitation strategy has been drafted too (to be fine-tuned in RP2).
Insights from social sciences and humanities have guided the legal and ethics scope of the project, for example by delineating the role of LEAs in the fight against disinformation and the constraints the law places on them. All of FERMI’s research complies with an Ethics Protocol signed by all partners. Besides the contributions of the legal and ethics advisors (KU Leuven and VUB), BIGS has developed a model to calculate the costs of disinformation campaigns (see above), whereas CONVERGENCE has organised training activities for the general public.
In the technical realm the key achievement is the development and integration of the above-mentioned Key Technology Offerings, which are summarised below.
• Dynamic Flows Modeler: assesses the nexus between disinformation and the crime landscape. More specifically, datasets on the former are used to estimate the latter.
• Disinformation sources, spread and impact analyser: grasps the spread of disinformation on social media, including the messages’ influence and the distinction between human- and bot-operated accounts.
• Community Resilience Management Modeler: impact analysis based on the costs of criminal activities (with input from behaviour profiling and socioeconomic analysis), combined with the Socioeconomic Disinformation Watch, which suggests counter-measures to stem the tide of disinformation if its impact is deemed medium or high.
• Swarm Learning module: fine-tunes estimates of the crime landscape using data from the LEAs without the need to share such data between them.
• Sentiment Analysis module: analyses the underlying sentiments of social media messages and categorises them as positive, negative or neutral.
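The report does not describe how the Sentiment Analysis module works internally. As an illustrative sketch of the three-way categorisation it performs, here is a minimal lexicon-based scorer; the word lists and the `classify` function are hypothetical stand-ins, not the project’s actual model (which would typically be a trained classifier):

```python
# Toy sentiment lexicons -- illustrative only.
POSITIVE = {"good", "great", "safe", "trust", "help"}
NEGATIVE = {"bad", "danger", "fake", "threat", "lie"}

def classify(message: str) -> str:
    """Return 'positive', 'negative' or 'neutral' for a message,
    based on the balance of lexicon hits."""
    words = message.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("they lie and spread fake threat stories"))   # → negative
print(classify("volunteers help keep the community safe"))   # → positive
print(classify("the meeting is on tuesday"))                 # → neutral
```

Note that the neutral category arises naturally whenever positive and negative cues balance out or are absent; the report later mentions that removing this category is being explored to improve accuracy.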

A further technical milestone was the integration of all of these modules into the FERMI platform.
Further non-technical achievements include: the above-mentioned Behaviour Profiler & Socioeconomic Analyser, which calculates the likelihood and severity (in cost terms) of disinformation campaigns; the experimentation protocols, in particular the impact assessment methodology for analysing whether the aforementioned components meet end-user expectations; and the societal landscape analysis on striking a proper balance between LEAs’ need to fight disinformation that may cause unrest and the obligation to safeguard freedom of expression, privacy and data protection.
All technical components have been developed with LEA needs in view: facilitating investigations, conducting fast and comprehensive threat and impact assessments, and receiving counter-measure suggestions. Some fine-tuning is still underway, mostly concerning data-access issues such as obtaining further datasets on disinformation and violent left-wing extremism. Other outstanding research activities aim to further enhance the models’ accuracy, e.g. by removing the Sentiment Analysis module’s neutral category.
The pilot validation will lay the ground for fine-tuning of the FERMI tools.
The FERMI training methodology, tools and curricula remain to be developed (as required under T5.5, which only starts at the very beginning of RP2), but will cover a whole range of predictive-policing platforms.
The Behaviour Profiler & Socioeconomic Analyser can cast new light on the costs of disinformation campaigns and will incorporate further data on the politically motivated crime landscape (to specify cost measurements).
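The report does not specify the cost model behind the Behaviour Profiler & Socioeconomic Analyser. A common way to structure such an estimate is to multiply the expected number of campaign-attributable incidents per crime type by an average cost per incident and sum the results; the sketch below is purely illustrative, with hypothetical crime types, counts and unit costs:

```python
def expected_campaign_cost(crime_forecast: dict[str, float],
                           unit_costs: dict[str, float]) -> float:
    """Expected cost of a campaign: for each crime type, multiply the
    expected number of incidents by the average cost per incident,
    then sum across crime types."""
    return sum(count * unit_costs[crime] for crime, count in crime_forecast.items())

# Hypothetical inputs: incidents attributable to a campaign, and
# average direct cost per incident (in EUR).
forecast = {"property_damage": 12.0, "assault": 3.0}
costs = {"property_damage": 2500.0, "assault": 18000.0}
print(expected_campaign_cost(forecast, costs))  # → 84000.0
```

Incorporating further data on the politically motivated crime landscape, as the report mentions, would amount to refining both the forecast counts and the per-incident cost figures.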
Exploitation activities (required under T6.4, which only started in M13) will also be significantly strengthened in RP2, as the consortium can now take advantage of the fully integrated platform. That said, the consortium has already carefully selected the project’s KERs, evaluating their relevance, clarifying and assessing their status, scope and feasibility, conducting initial market and competition analyses, assessing risks, etc.