Trending Science: Artificial intelligence, a curse or a blessing? Experts call for action to face security threats

Artificial intelligence (AI) could be exploited by rogue states, terrorists and criminals unless humanity is better prepared to defend itself against the technology's malicious use, experts warn.

Fears over machine intelligence have been a science fiction staple for decades, with countless depictions of mankind's self-destructive power. Yet the advent of real-world AI has produced opposing narratives in the public domain, emphasising either the promise or the threat, depending on your viewpoint. Although the concept has existed since the 1950s, AI and machine learning technologies have only recently gained momentum. Thanks to their rapid development, they are now used in applications ranging from automatic speech recognition, machine translation, search engines and digital assistants to drones and driverless vehicles.

With the potential for breakthrough advances in areas such as healthcare, agriculture, education and transportation, such systems are widely seen as beneficial, actively improving people's lives and creating positive change in the world. For example, AI already plays an important role in the everyday practice of medical image acquisition, processing and interpretation. Still, a growing chorus of experts, including physicist Stephen Hawking and entrepreneur Elon Musk, continues to emphasise the need to prepare for – and avoid – the potential risks posed by AI.

A new report, 'The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation', by a team of researchers from various universities, think tanks and non-profit organisations echoed similar concerns. It highlighted the potential security threats posed by AI in the wrong hands, focusing on three areas: digital, physical and political. It suggested that AI could enable bad actors to carry out large-scale, finely targeted and highly efficient attacks. Shahar Avin, from Cambridge University's Centre for the Study of Existential Risk (CSER), told the BBC that the report concentrated on areas of AI that are available now or likely to be available within five years, rather than looking to the distant future.

Referring to AI as a 'dual-use' technology, with potential military and civilian uses "toward beneficial and harmful ends," the report said: "As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the…expansion of existing threats…introduction of new threats…[and a] change to the typical character of threats." In a written statement, Dr Seán Ó hÉigeartaigh, CSER Executive Director and one of the co-authors, said: "We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe."

The authors expect novel cyberattacks such as automated hacking, speech synthesis for impersonation, finely targeted spam emails built from information scraped from social media, and attacks exploiting the vulnerabilities of AI systems themselves, for example through adversarial examples and data poisoning. The report also pointed to the possibility of deploying drones and cyber-physical systems for malevolent acts, such as crashing fleets of autonomous vehicles or weaponising 'drone swarms' to kill specific members of crowds using facial recognition technology. Fake videos manipulating public opinion and the use of automated surveillance platforms to suppress dissent in authoritarian states are among the threats to political security also listed in the report.
Although the report sounded the alarm about imminent threats from AI, it did not offer specific measures for preventing the misuse of these technologies, and it noted that many disagreements remain among the co-authors themselves and with other experts in the field. "We analyse, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defences are not developed," the authors concluded.

Countries

United Kingdom, United States
