The AI Now Institute’s Executive Director, Andrea Nill Sánchez, today testified before the European Parliament LIBE Committee Public Hearing on “Artificial Intelligence in Criminal Law and Its Use by the Police and Judicial Authorities in Criminal Matters.” Her message was simple: “Predictive policing systems will never be safe… until the criminal justice systems they’re built on are reformed.”
Sánchez argued that predictive policing systems are built on “dirty data” compiled over decades of police misconduct, and that there’s no current method by which technology can resolve this.
In a recent study, my colleagues at the AI Now Institute examined 13 US police jurisdictions that had engaged in illegal, corrupt, or biased practices and subsequently built or acquired predictive policing systems. Specifically, my colleagues found that in nine of those jurisdictions, there was a high risk that the system’s predictions reflected the biases embedded in the data.
During the hearing, Sánchez described predictive policing systems as little more than a way to automate corruption:
Left unchecked, the proliferation of predictive policing risks replicating and amplifying patterns of corrupt, illegal, and unethical conduct linked to legacies of discrimination that plague law enforcement agencies across the globe.
AI Now warned US regulators last year that predictive policing was a problem, and the message hasn’t changed much for the international audience. Per Sánchez today:
Ultimately, predictive policing systems and the data they process are the offspring of an unjust world. While the United States’ criminal justice system is a vestige of slavery and centuries of racism against Black and Brown people, discriminatory policing is endemic across the globe, including in Europe.
The reason these systems are so dangerous? Simply put, a long history of corrupt police practices has created a pool of untrustworthy data. For example, while researching the Chicago Police Department (CPD) – an agency that settles an average of one misconduct suit every other day – AI Now identified a pipeline between police corruption and biased AI predictions. As Sánchez explained:
Our researchers concluded that the CPD’s discriminatory practices generated “dirty data” that the city’s predictive policing system directly ingested, creating an unacceptably high risk that the technology was reinforcing and amplifying deeply ingrained biases and harms. By relying on such biased policing, predictive policing effectively put innocent people who were wrongfully stopped and arrested on a Strategic Subject List, thereby reflecting and—when acted upon—perpetuating the CPD’s harmful practices.
AI Now’s warnings have, so far, been largely ignored. A few jurisdictions in the US have put a stop to predictive policing, and there are murmurs from the UK and Europe about “pausing” its use in some areas. Yet the use of both predictive policing and facial recognition by law enforcement continues to rise globally.
Read the full transcript of Andrea Nill Sánchez’s remarks here.
Published February 20, 2020 — 20:05 UTC