Artificial Intelligence Roadmap for Policing and Law Enforcement

Periodic Reporting for period 2 - ALIGNER (Artificial Intelligence Roadmap for Policing and Law Enforcement)

Reporting period: 2023-04-01 to 2024-09-30

The world is changing at an unprecedented rate, and AI is at the forefront of this change. While it provides numerous benefits, many have raised concerns over the impact AI has, or will have, on matters such as security. The EU-funded ALIGNER project set out to bring together European actors concerned with AI, law enforcement and policing to jointly identify and discuss how to enhance Europe's security in a way that AI strengthens law enforcement agencies while also benefiting the public. The project's work will help pave the way for an AI research roadmap.

To achieve this, ALIGNER established a series of regular workshops in which actors from policing and law enforcement, civil society, policymaking, research, and industry exchanged views on topics related to the use of AI by law enforcement, covering emerging crime patterns, capability enhancement needs, and the ethical, legal, and societal implications of such use. The workshops were supported by an AI technology watch process as well as ethical and legal assessments. The results from the workshops were published in the AI research and policy roadmap as well as in dedicated policy recommendation documents. In addition, methods - e.g. for risk assessments of AI technologies or fundamental rights impact assessments - were published in dedicated public reports.

ALIGNER's specific objectives are:
SO1: Facilitate communication and cooperation between state representative bodies, including police, LEA investigators, and IT analysts, in exchanging information on the changing dynamics of crime patterns relevant to the use of AI
SO2: Identify the capability enhancement needs of European LEAs
SO3: Identify, assess, and validate AI technologies with potential for LEA capability enhancement in the short-, mid-, and long-term
SO4: Identify ethical, societal, and legal implications of the use of AI in law enforcement
SO5: Identify means and methods for preventing the criminal use of AI
SO6: Identify policy and research needs related to the use of AI in law enforcement
SO7: Employ the gathered insights in order to incrementally develop and maintain an AI research roadmap meeting the operational, cooperative, and collaborative needs of police and LEAs
ALIGNER started its work by establishing two advisory boards - the Law Enforcement Agency Advisory Board (LEAAB) and the Scientific, Industrial, and Ethical Advisory Board (SIEAB) - comprising over 60 experts, and by connecting with over 30 other research projects related to AI and law enforcement, including ALIGNER's sister projects popAI and STARLIGHT.
Based on these connections, ALIGNER conducted eight workshops covering a multitude of topics, among them:

1) Development of scenarios for the criminal use of AI, e.g. AI-enabled disinformation and social manipulation, AI-enabled cybercrime against individuals, AI-enabled cybercrime against organisations, and AI-enabled cars, robots, and drones;
2) Development of scenarios for AI supporting law enforcement, e.g. how AI can support municipal policing;
3) The ethical and legal implications when using AI in the law enforcement context, with specific focus on the EU AI Act;
4) Identifying capability enhancement needs for the use of AI technologies by law enforcement agencies;
5) Identifying and addressing technological and organisational risks - including cybersecurity risks - related to the use of AI in law enforcement;
6) Identifying future trajectories for the criminal misuse of AI technology;
7) Identifying policy recommendations to enable societally acceptable use of AI by law enforcement agencies; and
8) Identifying further research needs at the intersection of AI and law enforcement, as well as policy recommendations.

Supporting these activities was the development of an integrated assessment methodology for the technical, ethical, and legal assessment of emerging AI technologies, and an analysis of the existing ethical and legal framework relating to AI and law enforcement.
Based on discussions during the workshops, ALIGNER developed an archetypical "world scenario" as well as topics for specific future AI scenarios, which were further investigated over the course of the project. For each scenario, ALIGNER identified suitable AI technologies and assessed their risks. These findings were summarised in "scenario cards" that provide an overview of the most relevant information for each technology, e.g. current development activity, Technology Readiness Level, or associated risks. Around these scenarios and technologies, ALIGNER facilitated discussions with European experts to address topics 1) to 8) above.
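A scenario card can be pictured as a simple structured record per technology. The sketch below is a hypothetical illustration only: the field names, example values, and `summary` helper are assumptions for the sake of the example, not the project's actual card schema.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioCard:
    """Illustrative record with the kinds of fields a scenario card holds."""
    technology: str
    development_activity: str       # e.g. "active research", "commercial products"
    trl: int                        # Technology Readiness Level (1-9)
    risks: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # One-line overview, as a card might present it.
        return f"{self.technology} (TRL {self.trl}): {len(self.risks)} identified risk(s)"

# Hypothetical example card
card = ScenarioCard(
    technology="Automated image triage",
    development_activity="active research",
    trl=6,
    risks=["bias in training data", "fundamental-rights impact"],
)
print(card.summary())
```

Keeping each card in a uniform structure like this is what makes it possible to compare technologies across scenarios at a glance.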
Major outputs of the project were the AI Technology Assessment method, the ALIGNER Fundamental Rights Impact Assessment, the policy recommendations, and the AI policy & research roadmap, which compiles all results of the project.
The Fundamental Rights Impact Assessment was further exploited by providing input to European and international standardisation work via project partner CBRNE Ltd.
Other results, like the policy or research recommendations, were summarized in policy briefs and provided to relevant decision-makers during events.
The project's key results include:
- A comprehensive methodology to evaluate the impact of AI systems on LEAs’ operational capabilities, as well as their technical, ethical and legal risks, consisting of three consecutive assessments (AI Technology Watch, Risk Assessment, ALIGNER Fundamental Rights Impact Assessment). The methodology is further detailed in a chapter of the ‘Paradigms on Technology Development for Security Practitioners’ book, which will be published on October 31st by Springer in its Security Informatics and Law Enforcement book series;
- First (and, currently, only) template for conducting a fundamental rights impact assessment of law enforcement AI, also ensuring compliance with Article 27 of the AI Act. The methodology of the template is further detailed in a paper published by Springer’s AI and Ethics journal accessible via https://doi.org/10.1007/s43681-024-00560-0;
- A comprehensive and systematised technical, ethical and legal assessment of AI technologies related to the ALIGNER scenarios, including possible mitigation measures, suitable to assist LEAs’ decision-making on whether to procure certain AI systems and how to deploy them;
- A comprehensive taxonomy of AI-supported crime, empirically validated against the needs of end users;
- Integrated requirements structure for cybersecurity issues;
- A comprehensive set of policy recommendations with specific focus on the EU AI Act and law enforcement; and
- A roadmap document providing an overall picture of issues related to the use of AI by law enforcement agencies and the criminal misuse of AI technologies, culminating in research and policy recommendations.
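The three consecutive assessments named in the methodology above (AI Technology Watch, Risk Assessment, Fundamental Rights Impact Assessment) form a pipeline in which each stage builds on the previous one. The sketch below is purely illustrative: the stage functions, field names, and decision logic are assumptions, not the project's published method.

```python
# Hypothetical sketch of three consecutive assessment stages chained together.
def technology_watch(tech: dict) -> dict:
    # Stage 1: record what the technology is and how mature it is.
    tech["watch"] = {"trl": tech.get("trl", 1)}
    return tech

def risk_assessment(tech: dict) -> dict:
    # Stage 2: attach risk findings based on the watch results
    # (toy rule: mature technologies get a cybersecurity flag).
    tech["risks"] = ["cybersecurity"] if tech["watch"]["trl"] >= 6 else []
    return tech

def fundamental_rights_assessment(tech: dict) -> dict:
    # Stage 3: decide whether a full fundamental rights impact
    # assessment is needed, based on the risks found in stage 2.
    tech["frai_required"] = bool(tech["risks"])
    return tech

def assess(tech: dict) -> dict:
    # The assessments run consecutively, each consuming the previous output.
    for stage in (technology_watch, risk_assessment, fundamental_rights_assessment):
        tech = stage(tech)
    return tech

print(assess({"trl": 7})["frai_required"])
```

The point of the chained structure is that a later stage never re-derives earlier findings; it reads them from the shared record, which keeps the three assessments consistent for a given technology.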

The results of the project have already been taken up by several law enforcement agencies and other research projects to assess the compliance of AI technologies with the EU AI Act during their development and before their deployment. The findings from the Fundamental Rights Impact Assessment have also been taken up by European and international standardisation bodies.