This article was published on July 26, 2021

Lying, corrupt, anti-American cops are running amok with AI

Pandora doesn't go back into the box



Hundreds of thousands of law enforcement agents in the US have the authority to use blackbox AI to conduct unethical surveillance, generate evidence, and circumvent our Fourth Amendment protections. And there’s little reason to believe anyone’s going to do anything about it.

The problem is that blackbox AI systems are a goldmine for startups, big tech, and politicians. And, since the general public is ignorant about what they do or how they’re being used, law enforcement agencies have carte blanche to do whatever they want.

Let’s start with the individual officers.

Any cop, regardless of affiliation or status, has access to dozens (if not hundreds) of third-party AI systems.


When I mention an “AI system,” you may be imagining a server with a bunch of blinking lights or a well-dressed civilian leaning over a console with half a dozen monitors.

But I’m talking about an Android or iPhone app that officers and agents can use without their supervisors even knowing.

Here’s how it works

A cop installs software from a company such as Clearview AI on their personal smartphone. This allows them to take a picture of anyone and surface their identity. The cop then runs the identity through an app from a company such as Palantir, which surfaces a cornucopia of information on the individual.

[Screenshot: an app's databases. Credit: Vice / DOJ]
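
To make that workflow concrete, here is a minimal sketch of what the lookup chain amounts to. Everything in it is hypothetical: neither Clearview AI nor Palantir publishes a public API, so the function names and record fields below are invented purely to illustrate the two-call, photo-to-dossier pattern described above.

```python
# Hypothetical sketch only -- these are NOT real Clearview AI or Palantir APIs.
# The point is the shape of the workflow: one call turns a photo into an
# identity, a second call turns that identity into a pile of records,
# and no judge is involved at any step.

from dataclasses import dataclass, field


@dataclass
class Dossier:
    """Illustrative stand-in for the aggregated records described above."""
    name: str
    carrier_records: list = field(default_factory=list)
    court_records: list = field(default_factory=list)
    property_records: list = field(default_factory=list)


def identify_face(photo_bytes: bytes) -> str:
    """Stand-in for a face-recognition lookup: photo in, identity out."""
    raise NotImplementedError("illustrative only")


def pull_dossier(identity: str) -> Dossier:
    """Stand-in for a data-aggregation lookup: identity in, records out."""
    raise NotImplementedError("illustrative only")


def warrantless_lookup(photo_bytes: bytes) -> Dossier:
    # The entire "investigation" is two API calls made from a personal phone.
    identity = identify_face(photo_bytes)
    return pull_dossier(identity)
```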

So, without a warrant, Officer Friendly now has access to your phone carrier, ISP, and email records. They have access to your medical and mental health records, military service history, court records, legal records, travel history, and your property records. And it’s as easy to use as Netflix or Spotify.

Best of all, at least for the corrupt cops using these systems unethically, there’s absolutely no oversight whatsoever. Cops are often offered these systems directly by the vendors as “trials,” so they can try them before deciding whether to ask their departments to adopt them at scale.

Officers use these systems because they make their jobs much easier. They allow a police officer to skip the warrant process and act as judge themselves.

What about police departments and other agencies?

Law enforcement agencies around the country spend billions on AI services every year, many of which are scams or surveillance tools. These include facial recognition systems that don’t work for Black faces, predictive-policing systems that allow cops to blame the over-policing of poor minority communities on the algorithm, and niche services whose only purpose is generating evidence.

Predictive-policing systems are among the most common unethical AI tools used by law enforcement. These systems are basically snake oil scams that claim to use “data” to determine where crimes are going to happen. But, as we all know, you can’t predict when or where a crime is going to happen. All you can do is determine, historically, where police tend to arrest the most people.

What predictive policing systems actually do is give the police a scapegoat for over-policing minority and poor communities. The bottom line is that you cannot, mathematically speaking, draw inferences from data that doesn’t exist. And there is no data on future crime.
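
To see what “drawing inferences from data that doesn’t exist” collapses into in practice, here’s a toy sketch with invented numbers. A system trained only on historical arrest logs has nothing to rank except past police activity, so its “predictions” are just a leaderboard of where arrests already happened.

```python
# Toy illustration (all data invented): a "predictive policing" model trained
# on historical arrest records can only rank areas by past arrests, so its
# output reflects where police were active, not where future crime will occur.

from collections import Counter

# Hypothetical arrest log: one entry per arrest, keyed by neighborhood.
historical_arrests = [
    "Neighborhood A", "Neighborhood A", "Neighborhood A",
    "Neighborhood B", "Neighborhood B",
    "Neighborhood C",
]

def predict_hotspots(arrest_log, top_k=2):
    """What many 'predictive' systems reduce to: rank areas by past arrests."""
    counts = Counter(arrest_log)
    return [area for area, _ in counts.most_common(top_k)]

print(predict_hotspots(historical_arrests))
# ['Neighborhood A', 'Neighborhood B'] -- more patrols go there, producing
# more arrests there, which feeds back into the next round of "predictions".
```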

Anyone who says these systems can predict crime is obviously operating on faith alone, because nobody can explain why a blackbox system generates the output it does – not even the developers who created it.

What about other AI systems?

Simply put: any time an AI system used by law enforcement can, in any way, affect an outcome for a human, it’s probably harmful.

Vice published an article today detailing the Chicago Police Department’s use of ShotSpotter, an AI system that purportedly detects gunshots.

According to the company, it can detect gunshots across large areas with up to 95% accuracy. But in court, the company has argued that figure is just a marketing guarantee and that background noise can affect the accuracy of any reading.

Which means it’s a blackbox system that nobody can explain, and one whose accuracy claims even the company’s own lawyers won’t defend in court.
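
Even taking the company’s headline number at face value, a back-of-the-envelope Bayes calculation shows why “up to 95% accuracy” tells a jury very little about any individual alert. All the figures below are assumptions for illustration, not numbers from ShotSpotter or from Vice’s reporting.

```python
# Illustrative only (all numbers invented): if real gunshots are rare among
# loud impulsive sounds, even a detector that is right 95% of the time on
# both classes produces mostly false alerts.

sensitivity = 0.95      # assumed: flags 95% of real gunshots
specificity = 0.95      # assumed: correctly ignores 95% of non-gunshots
gunshot_rate = 0.01     # assumed: 1 in 100 loud impulses is actually a gunshot

p_alert_given_gunshot = sensitivity
p_alert_given_other = 1 - specificity

# Bayes' rule: probability that a given alert corresponds to a real gunshot.
p_alert = (p_alert_given_gunshot * gunshot_rate
           + p_alert_given_other * (1 - gunshot_rate))
p_gunshot_given_alert = p_alert_given_gunshot * gunshot_rate / p_alert

print(f"{p_gunshot_given_alert:.0%} of alerts are real gunshots under these assumptions")
# ~16% -- the other ~84% would be fireworks, backfires, and other noise.
```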

Vice reports that, in several cases, police instructed ShotSpotter employees to alter evidence to make it appear as though the system had detected gunshots it didn’t. In one case, police had an employee change the location of a detection to match the location of a crime. In another, they had an employee change the designation “fireworks” to “gunshot” in order to facilitate an arrest.

When challenged in court, prosecutors merely withdrew the evidence. That’s it. To the best of our knowledge, nobody was arrested or indicted.

The problem here isn’t that ShotSpotter doesn’t work (although, if you have to use the Tucker Carlson defense in court, it probably doesn’t). It’s that, even if it did work, it serves absolutely no purpose.

Have you ever heard a firearm discharge? Gunshots are loud. They don’t go undetected if there are people around, and a gun fired in any given area of Chicago would be heard by tens of thousands of people.

And people, unlike blackbox algorithms, can testify in court. They can describe what they heard, when they heard it, and explain to a jury why they thought what they heard was or was not a gunshot.

If we find out that prosecutors told witnesses to say they heard a gunshot when they didn’t, and those witnesses admit in court that they lied, that’s called perjury and it’s a crime. We can hold people accountable.

Ignorance-based capitalist apathy

There’s so much unethical cop AI because it’s incredibly profitable. The startups and big tech outfits selling the AI are getting paid billions by taxpayers who either don’t care or don’t understand what’s going on.

The politicians authorizing the payouts are raking in money from lobbyists. And the cops using it can ignore our Constitutional rights at their leisure with absolutely no fear of reprisal. It’s a perfect storm of ignorance, corruption, and capitalism. 

And it’s only going to get worse.

The US founding fathers, much like AI, could not predict the future. When they drafted the Second Amendment, for example, they had no way of knowing that hundreds of thousands of heavily armed government agents would one day patrol our communities around the clock — thus making our right to keep and bear arms a moot form of protection against tyranny.

And now the same has happened to our Fourth Amendment rights. When our private information was locked away in filing cabinets and the only way to access it was with a judge’s signature on a search warrant, our right to privacy was at least somewhat safeguarded against corruption.

Now those protections are gone. You don’t need a predictive algorithm to understand, historically speaking, what happens next.
