This article was published on February 20, 2020

AI Now: Predictive policing systems are flawed because they replicate and amplify racism

The Institute's Executive Director recently testified in front of the European Parliament



The AI Now Institute’s Executive Director, Andrea Nill Sánchez, today testified before the European Parliament LIBE Committee Public Hearing on “Artificial Intelligence in Criminal Law and Its Use by the Police and Judicial Authorities in Criminal Matters.” Her message was simple: “Predictive policing systems will never be safe… until the criminal justice system they’re built on is reformed.”

Sánchez argued that predictive policing systems are built with “dirty data” compiled over decades of police misconduct, and that there is currently no technological fix for the problem.

Her testimony was based on a study the AI Now Institute conducted last year detailing how predictive policing systems are inherently biased. She told the committee:

In a recent study, my colleagues at the AI Now Institute examined 13 US police jurisdictions that had engaged in illegal, corrupt, or biased practices and subsequently built or acquired predictive policing systems. Specifically, my colleagues found that in nine of those jurisdictions, there was a high risk that the system’s predictions reflected the biases embedded in the data.

During the hearing, Sánchez described predictive policing systems as little more than a way to automate corruption:

Left unchecked, the proliferation of predictive policing risks replicating and amplifying patterns of corrupt, illegal, and unethical conduct linked to legacies of discrimination that plague law enforcement agencies across the globe.

Read: Predictive policing AI is a bigger scam than psychic detectives


AI Now warned US regulators last year that predictive policing was a problem, and the message hasn’t changed much for the international audience. Per Sánchez today:

Ultimately, predictive policing systems and the data they process are the offspring of an unjust world. While the United States’ criminal justice system is a vestige of slavery and centuries of racism against Black and Brown people, discriminatory policing is endemic across the globe, including in Europe.

The reason these systems are so dangerous? Simply put, a long history of corrupt police practices has created a pool of untrustworthy data. For example, while researching the Chicago Police Department (CPD) – an agency that settles an average of one misconduct suit every other day – AI Now identified a pipeline between police corruption and biased AI predictions. As Sánchez explained:

Our researchers concluded that the CPD’s discriminatory practices generated “dirty data” that the city’s predictive policing system directly ingested, creating an unacceptably high risk that the technology was reinforcing and amplifying deeply ingrained biases and harms. By relying on such biased policing, predictive policing effectively put innocent people who were wrongfully stopped and arrested on a Strategic Subject List, thereby reflecting and—when acted upon—perpetuating the CPD’s harmful practices.

AI Now’s warnings have, so far, been largely ignored. A few US jurisdictions have put a stop to predictive policing, and there are mutterings from the UK and Europe about “pausing” its use in some areas. Yet law enforcement’s use of both predictive policing and facial recognition continues to rise globally.

Read the full transcript of Andrea Nill Sánchez’s remarks here.


