Periodic Reporting for period 1 - ALIGNER (Artificial Intelligence Roadmap for Policing and Law Enforcement)
Reporting period: 2021-10-01 to 2023-03-31
To achieve this, ALIGNER will establish a forum for exchange between practitioners from law enforcement and policing, civil society, policymaking, research, and industry to design an AI research and policy roadmap meeting the operational, cooperative, and collaborative needs of police and law enforcement agencies.
To do so, ALIGNER will establish a series of regular workshops in which actors from policing and law enforcement, civil society, policymaking, research, and industry will exchange on topics related to the use of AI by law enforcement, covering emerging crime patterns, capability enhancement needs, and the ethical, legal, and societal implications of the use of AI by law enforcement. The workshops will be supported by an AI technology watch process as well as ethical and legal assessments. The results of the workshops will be published in the AI research and policy roadmap.
ALIGNER's specific objectives are:
SO1: Facilitate communication and cooperation between state representative bodies, including police, LEA investigators, and IT analysts, in exchanging information on the changing dynamics of crime patterns relevant to the use of AI
SO2: Identify the capability enhancement needs of European LEAs
SO3: Identify, assess, and validate AI technologies with potential for LEA capability enhancement in the short-, mid-, and long-term
SO4: Identify ethical, societal, and legal implications of the use of AI in law enforcement
SO5: Identify means and methods for preventing the criminal use of AI
SO6: Identify policy and research needs related to the use of AI in law enforcement
SO7: Employ the gathered insights in order to incrementally develop and maintain an AI research roadmap meeting the operational, cooperative, and collaborative needs of police and LEAs
Based on discussions during workshops, ALIGNER developed an archetypal "world scenario" as well as topics for specific future AI scenarios, which will be further investigated over the course of the project. These topics include: (i) AI-enabled disinformation and social manipulation; (ii) AI-enabled cybercrime against individuals; (iii) AI-enabled cybercrime against organisations; and (iv) AI-enabled cars, robots, and drones.
Major outputs of the project were the AI Technology Assessment method, the ALIGNER Fundamental Rights Impact Assessment, the first set of policy recommendations, and the publication of two iterations of the AI roadmap, which compiles all results of the project.
Initial steps by ALIGNER towards this approach that go beyond the state of the art are:
- The comprehensive identification and analysis of the ethical and legal framework relevant to AI
- The development of the ALIGNER Fundamental Rights Impact Assessment (AFRIA), a tool addressed to LEAs that aim to deploy AI systems for law enforcement purposes within the EU. The AFRIA is a reflective exercise that seeks to further enhance LEAs’ existing legal and ethical governance systems by assisting them in building and demonstrating compliance with ethical principles and fundamental rights when deploying AI systems. The AFRIA is freely available on the ALIGNER website.
- The adoption of a method for assessing emerging AI technologies and their potential for use by law enforcement agencies
- The development of four future scenarios exploring the potential criminal use of AI as well as how AI might be used by law enforcement agencies in the service of society in the future
- The development of a first set of policy recommendations
- The publication of two iterations of the AI roadmap, which give initial pointers on where the European Commission and other actors should take action.