
UNICRI Centre for Artificial Intelligence and Robotics

In response to rapid advancements in digital technologies, the United Nations Interregional Crime and Justice Research Institute (UNICRI) opened a specialized Centre for Artificial Intelligence (AI) and Robotics in September 2017. Located in The Hague, the Netherlands, the Centre was established with the support of the Municipality of The Hague and the Ministry of Foreign Affairs of the Kingdom of the Netherlands.

Hosting a high-level visit to the UNICRI Centre for AI and Robotics in The Hague, the Netherlands.

The Centre is dedicated to understanding and addressing both the opportunities and the challenges that AI and related new and emerging technologies present for crime prevention, criminal justice and the rule of law. In terms of opportunities, the Centre explores how to leverage AI’s potential responsibly to promote public safety and reduce crime. In terms of challenges, it examines how these same technologies may be misused by malicious actors or misapplied by legitimate actors operating without proper safeguards.


The Centre’s activities

Through research and awareness-raising, multi-stakeholder discussions and capacity-building activities, the Centre supports national authorities and relevant representatives within the criminal justice system (law enforcement, courts and corrections) through several programmes and projects. 

INTERPOL and UNICRI launch the Toolkit for Responsible AI Innovation in Law Enforcement in Singapore, a set of seven practical resources and guidance documents applicable across the AI lifecycle.

These projects employ several approaches and tools, including:

  • Action-oriented research
  • Knowledge development and dissemination
  • Training and technical workshops
  • Advocacy with policy- and decision-makers
  • Development and maintenance of online platforms
UNICRI presentation at c0c0n Hacking & Cyber Briefing Conference in Kochi, India.

Some priority areas for the Centre include:

  • Building knowledge on the possible malicious use of AI by criminals and terrorist groups, as well as potential counter-measures.
  • Enhancing awareness of the threats of AI-generated or manipulated voice or video content, such as deepfakes.
  • Fostering responsible AI innovation within the law enforcement community.
  • Promoting and supporting the development of policy frameworks for the deployment of facial recognition software.
  • Exploring the development of pilot AI applications in criminal investigations, in particular to combat the rise in online child sexual exploitation and abuse.
  • Enhancing cybersecurity through the use of AI to support the detection and investigation of, and protection against, cyberattacks.
  • Building knowledge on the use of AI in counter-terrorism, in particular in the context of terrorist use of the internet and social media.
  • Analysing the possible application of AI in the administration of criminal justice and corrections.
At one of the Centre's AI for Safer Children trainings in Singapore, where law enforcement officers learn to use AI and related technologies to combat child sexual exploitation and abuse.

Network building and the creation of strategic partnerships are also fundamental to the Centre’s modus operandi and integral to its success. In this regard, the Centre has built an extensive international network of partners that it engages in its activities and in convening expert-level meetings, training courses and workshops worldwide, as well as high-level visibility events.

The Centre organized a panel discussion at the UN’s flagship AI for Good summit in Geneva, Switzerland, uniting representatives from law enforcement, government, AI tool developers, and international and civil society organizations.

Future-proofing the criminal justice system

Crime prevention, criminal justice, and in particular law enforcement and national security, are areas where AI and related emerging technologies have the potential to complement or even greatly enhance traditional techniques. Given the increasingly data-heavy nature of criminal investigations and the evolving and complex nature of criminality, the criminal justice system is a domain that stands to derive substantial benefit from new and emerging technologies.


AI has already been used to help law enforcement identify and locate long-missing children, scan illicit sex advertisements and disrupt human trafficking rings, flag financial transactions that may indicate money laundering, and protect citizens’ privacy by automating the anonymization of surveillance footage. Such technologies may also find application in the courts, where they can support efficient research on jurisprudence to identify precedents and assist legal professionals with case management to ensure the timely delivery of justice.


Behind these benefits, however, lie a range of social, ethical and legal issues that have yet to be fully explored and analysed. These include concerns about data collection and violations of the right to privacy in AI development, algorithmic bias and the opacity of “black box” decision-making systems, and unforeseen outcomes such as those arising from the autonomous use of force. There is also the ever-present risk that criminals or terrorist organizations may misuse these technologies; indeed, every new technology brings with it vulnerability to new forms of crime and threats to security. With proper understanding and responsible development, however, the Centre continues to work to build trust and confidence in AI and robotics as agents for positive change.