Fighting terrorism online: prevention is better than cure
To what extent do the internet and social media help spread terrorist propaganda and aid recruitment? EU officials seem to think they contribute a great deal. The European Commission has recently proposed new rules “on preventing the dissemination of terrorist content online”, under which platforms would have no more than one hour to delete terror-related videos and posts after receiving a removal order from national authorities. A service provider that systematically fails to remove such content could face financial penalties of up to 4% of its overall turnover for the last business year.

The reaction from internet giants to the announcement has been mixed. Google has responded positively: “We share the European Commission’s desire to react rapidly to terrorist content and keep violent extremism off our platforms. (…) We welcome the focus the Commission is bringing to this and we’ll continue to engage closely with them, member states and law enforcement on this crucial issue,” a YouTube spokesperson told us.

Mozilla, on the other hand, has publicly called the Commission’s proposal “a poor step” in fighting illegal content online: “It would undermine due process online; compel the use of ineffective content filters; strengthen the position of a few dominant platforms while hampering European competitors; and, ultimately, violate the EU’s commitment to protecting fundamental rights.”

Terrorist propaganda and recruitment on social media are among the subjects of an international study under the EU project PROTON, which aims to develop Agent-Based Modelling (ABM) simulations of how changes in society and in the environment affect organised crime and terrorist networks. The overall purpose is to offer new prevention tools for policymakers.

“We found that terrorists were significantly more likely to post about the attacks by friends or family members in the months prior to their own attack,” says Michael Wolfowicz, research fellow at the Hebrew University of Jerusalem, a partner in the project. These people “are at a heightened likelihood of engaging in subsequent attacks themselves, instead of other network members, perhaps radical themselves, who don’t make posts referring [to] the attacks [by people close to them]”.

“The study also highlighted that terrorists’ posts are shared much more than those by non-violent radical counterparts,” explains Wolfowicz. “Such individuals therefore may be exposed to peer pressure to act as opinion leaders by engaging in the types of behaviours that are expected of them; in the case of radicals this would mean radical violence.”

For Wolfowicz, the “one-hour rule” proposed by the Commission is far from being a solution: “One reason is that content removal may be used as evidence by radical groups to demonstrate legitimacy to their claims against the West as not really liberal or democratic, and that it specifically sets out to hamper the free speech of their particular group. This may lead sympathisers and fence sitters to further align themselves with the radical ideology or group, especially if their own content has been removed, leading them to personally identify with the group-based grievance.”

Add to that the errors that will inevitably occur in identifying and removing content, and the fact that radical groups may simply move underground, switching to encrypted applications such as Telegram to communicate.
This is why projects such as PROTON are crucial, concludes Wolfowicz: “One of the problems in research on radicalisation and recruitment to terrorism is that there is a dearth of empirical evidence. It is our hope that the results will help policymakers to develop evidence-based policies, which are both more effective and proportional.”

Read more: https://www.projectproton.eu/fighting-terrorism-online-prevention-better-cure/
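For readers unfamiliar with Agent-Based Modelling, which the article names only in passing, the following is a minimal toy sketch of what a simulation of this kind can look like. It is emphatically not PROTON’s model: every agent attribute, parameter value and update rule below is an assumption invented for illustration, loosely echoing the peer-exposure and grievance effects Wolfowicz describes.

```python
# Toy agent-based model (ABM) sketch -- purely illustrative, not the PROTON model.
# All numbers and dynamics are invented assumptions: it only shows, in the abstract,
# how peer exposure to radical posts and platform content removal might be simulated.

import random

class Agent:
    def __init__(self, radicalisation):
        self.radicalisation = radicalisation  # 0.0 (none) .. 1.0 (high)
        self.peers = []

    def maybe_post(self):
        # Assumption: more radicalised agents post radical content more often.
        return random.random() < self.radicalisation

def simulate(n_agents=200, steps=50, removal_prob=0.0, seed=42):
    random.seed(seed)
    agents = [Agent(random.uniform(0.0, 0.3)) for _ in range(n_agents)]
    for agent in agents:
        # Small random peer network for each agent.
        agent.peers = random.sample([a for a in agents if a is not agent], 5)

    for _ in range(steps):
        for agent in agents:
            if not agent.maybe_post():
                continue
            if random.random() < removal_prob:
                # Assumed "grievance" effect: having one's own post removed
                # slightly strengthens the poster's alignment.
                agent.radicalisation = min(1.0, agent.radicalisation + 0.01)
            else:
                # Post stays up: peers are exposed and drift slightly.
                for peer in agent.peers:
                    peer.radicalisation = min(1.0, peer.radicalisation + 0.005)

    return sum(a.radicalisation for a in agents) / n_agents

print("no removal    :", round(simulate(removal_prob=0.0), 3))
print("strict removal:", round(simulate(removal_prob=0.9), 3))
```

Comparing scenarios side by side, here with and without aggressive content removal, is the kind of “what if” question such simulations let policymakers explore; the real project models are, of course, far richer than this sketch.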
Keywords
terrorism, security, crime, psychology, propaganda, social media
Countries
Belgium, Israel, United States