Periodic Reporting for period 4 - COMPROP (Computational Propaganda: Investigating the Impact of Algorithms and Bots on Political Discourse in Europe)
Reporting period: 2020-07-01 to 2020-12-31
Misinformation on social media has emerged as one of the most serious threats to democratic processes. Political actors with vested interests in interfering with such processes have been attempting to manipulate public opinion. Recent years have also seen the advent of algorithmically driven content systems and bots that deliberately amplify hate speech and polarising misinformation. Democracy itself is under assault from foreign governments and internal threats, and democratic institutions may not continue to flourish unless social data science is used to put our existing knowledge and theories about politics, public opinion, and political communication to work in their defence.
The project seeks to answer fundamental research questions: How are algorithms and automation used to manipulate public opinion during elections or political crises? What are the technological, social, and psychological mechanisms by which we can encourage political expression while discouraging opinion herding and the unnatural spread of extremist, sensationalist, or conspiratorial news? What new scholarly research systems can deliver real-time social science about political interference, algorithmic bias, or external threats to democracy?
In this context, the Computational Propaganda Project has been tracking important moments in public life, such as elections and referenda, and more recently the global response to Covid-19, to identify the proportions of misinformation circulating on social media. Defending the public sphere requires a better understanding of digital citizenship and modern civic engagement. Junk news and the deliberate spread of misinformation often generate profitable advertising revenues for technology firms and miscreants. Online hate speech, in particular misogyny and racism, is aimed at public figures by fake accounts. Personalised political advertising, as used in large-scale data-driven campaigns, delivers targeted interventions with hidden agendas. Political bots and highly automated social media accounts disrupt election campaigns and sow seeds of doubt in the minds of citizens making important decisions about their own health, such as whether to take vaccines. This project advances social data science, applies it to deepen our understanding of how contemporary civic engagement operates, and pioneers the social science of fake news production and consumption.
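As a simple illustration of how the proportion of misinformation circulating on a platform can be estimated, the sketch below matches the domains of shared links against a curated list of known junk news sources. This is a minimal sketch, not the project's own source-coding methodology; the domain list, function name, and data layout are hypothetical and chosen purely for illustration.

```python
from urllib.parse import urlparse

# Hypothetical set of domains previously coded as junk news sources.
JUNK_NEWS_DOMAINS = {"example-junk-news.com", "another-dubious-site.net"}

def junk_news_proportion(shared_urls: list[str]) -> float:
    """Return the share of links that point to known junk news domains."""
    if not shared_urls:
        return 0.0
    junk = sum(
        1 for url in shared_urls
        if urlparse(url).netloc.removeprefix("www.") in JUNK_NEWS_DOMAINS
    )
    return junk / len(shared_urls)

# Example: one of three shared links resolves to a junk news domain -> ~0.33
print(junk_news_proportion([
    "https://www.example-junk-news.com/story",
    "https://www.bbc.co.uk/news/article",
    "https://www.reuters.com/world/item",
]))
```

In practice, the quality of such an estimate rests entirely on how carefully the source list is curated and kept up to date, which is where the real methodological effort lies.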
In addition to our impact and outreach activities, which disseminate our research to a broad audience of policymakers, industry leaders, the public, and news organisations, we have published several academic articles that extend the state of the art in researching the use of computational methods for political benefit, contributing to the fields of political science and computational social science. The project involved early career researchers from diverse backgrounds in this scholarship, providing mentoring and career development opportunities to meet colleagues, present their own original research, and learn to present to specialised and public audiences.
Our outputs primarily took the form of scholarly papers in peer-reviewed journals and books with major academic presses such as Oxford University Press and Yale University Press. These scholarly outputs, along with conference and workshop activities and guest editing of academic journals, allowed us to engage in the scientific conversation with colleagues around the world. Our work has therefore been of fundamental importance in analysing the global phenomenon of computational propaganda and political misinformation on digital platforms.
This research therefore represents one of the most comprehensive studies of computational propaganda undertaken to date. Our effort has required the collaboration of a diverse group of researchers, including political scientists, sociologists, media scholars, and computer scientists, and constitutes a unique multi-disciplinary research effort to study this problem scientifically, using and extending state-of-the-art tools from different disciplines. The research effort has been very agile in adapting research methods to suit the different cultures of social media use that prevail in various countries.
Systematic analysis of large volumes of social media posts has required a unique combination of qualitative, comparative, quantitative, and computational methods, which we constantly updated as we studied a range of political events in diverse countries. For our computational analysis, the project worked with data sourced from social media platforms in the crucial last few weeks of political campaigning, making our reports definitive analyses of the sources of misinformation circulating on social media platforms. Moreover, the reports have analysed suspicious high-frequency trending patterns of politically relevant hashtags, as well as user accounts that become active only a few days prior to major turning points in public life.
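By way of illustration, the sketch below shows one simple way such high-frequency trending patterns could be flagged: it marks hashtag-hours whose posting volume rises several standard deviations above that hashtag's own baseline. This is a minimal example rather than the project's actual pipeline; the column names (`timestamp`, `hashtag`), the pandas dependency, and the three-sigma threshold are assumptions made for the sketch.

```python
import pandas as pd

def flag_bursting_hashtags(posts: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag hashtag-hours whose posting volume spikes far above that hashtag's baseline.

    Assumes `posts` has a datetime `timestamp` column and a `hashtag` column,
    with one row per hashtag occurrence; both names are illustrative.
    """
    # Count occurrences of each hashtag per hour.
    hourly = (
        posts
        .assign(hour=posts["timestamp"].dt.floor("h"))
        .groupby(["hashtag", "hour"])
        .size()
        .rename("count")
        .reset_index()
    )

    # Baseline mean and standard deviation per hashtag across all observed hours.
    stats = hourly.groupby("hashtag")["count"].agg(["mean", "std"]).reset_index()
    merged = hourly.merge(stats, on="hashtag")

    # A burst is an hour whose volume exceeds the baseline by z_threshold sigmas.
    merged["z_score"] = (merged["count"] - merged["mean"]) / merged["std"].replace(0, 1)
    return merged[merged["z_score"] > z_threshold].sort_values("z_score", ascending=False)
```

In practice, a baseline would be estimated from a pre-campaign window and the threshold calibrated against known organic trends before any hashtag or account is treated as suspicious.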
The Computational Propaganda Project's research has been foundational to the social science of misinformation, having produced the first wave of research on how authoritarian regimes interfere in the elections of democracies using social media. Moreover, the team has used the project's findings to inform and shape policy responses in Canada, the EU, the UK, the US, and other democracies, and has been recognised by policymakers on both sides of the Atlantic as pioneers in the field of online disinformation.