
This article was published on July 23, 2019

Why US public schools’ creepy use of surveillance AI should frighten you



Public schools across the US continue to spend millions implementing AI-powered surveillance solutions alleged to prevent or mitigate violence. The only problem: most of them don’t work. US schools now rival China’s when it comes to ubiquitous surveillance, yet our students remain at the highest risk for violence among developed nations. What gives?

The ideas seem sound. Adults can’t possibly see and hear everything that happens on a school campus, so startups are marketing automated surveillance solutions to cover the gaps.

One company says its facial recognition systems could have prevented the Parkland massacre. Another startup, which specializes in gunshot detection, says its ‘aggression detectors’ can alert staff to violence before it even happens. But politicians, public school administrators, and teachers might not be in the best position to determine the efficacy of these programs.

A recent report from ProPublica and Wired showed that aggression detectors are basically useless. After extensive testing and experimentation, the reporters determined that these systems were inexplicably prone both to false positives and to missing auditory signs of aggression altogether. According to their findings:

To test the algorithm, ProPublica purchased a microphone from Louroe Electronics and licensed the aggression detection software. We rewired the device so we could measure its output while testing pre-recorded audio clips. We then recorded high school students and examined which types of sounds set off the detector.

We found that higher-pitched, rough and strained vocalizations tended to trigger the algorithm. For example, it frequently triggered for sounds like laughing, coughing, cheering and loud discussions. While female high school students tended to trigger false positives when singing, laughing and speaking, their high-pitched shrieking often failed to do so.
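
Louroe’s detection algorithm is proprietary, so ProPublica could only probe it from the outside. Still, the behavior they observed is what you’d expect from a classifier keyed to gross spectral features. Purely as a hypothetical illustration – this is not the vendor’s algorithm – here’s a minimal Python sketch of a detector that scores audio on pitch and spectral roughness, and therefore fires on laughter and cheering just as readily as on shouting:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz; assumes mono PCM input

def aggression_score(signal: np.ndarray) -> float:
    """Hypothetical 'aggression' score built from gross spectral features.

    Not Louroe's (proprietary) algorithm -- just an illustration of why
    a pitch-and-roughness heuristic can't tell shouting from cheering.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

    # Spectral centroid: higher-pitched, strained sounds push this up.
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)

    # Spectral flatness: 'rough', noisy sounds (coughs, strained voices)
    # have flatter spectra than clean tones, so this term rises too.
    flatness = np.exp(np.mean(np.log(spectrum + 1e-9))) / (np.mean(spectrum) + 1e-9)

    return centroid / 4000.0 + flatness  # arbitrary weighting

def is_aggressive(signal: np.ndarray, threshold: float = 0.8) -> bool:
    # A fixed threshold over these features flags any loud, high, rough
    # sound -- a cheering crowd as easily as a fight.
    return aggression_score(signal) > threshold
```

Any detector built along these lines would reproduce ProPublica’s false positives, and that’s the point: with no access to the model, schools have no way of knowing what these devices actually respond to.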

And those facial recognition systems? They’re a logistical nightmare that relies on aggressors to follow a very specific protocol. Kevin Freiburger, director of identity solutions at Valid, told TNW:


Face recognition might be another application which can reduce violence. For example, if a person makes a threat to do harm to an institution or critical infrastructure, that face can be added to a blacklist from any photo of the subject. If a camera or sensor recognizes that face and matches it to a blacklist, it can produce a flag which may drive an action (locks a door, alerts security to go identify the person, etc.).

In the case of Nikolas Cruz, who was charged with killing 17 people at a Florida high school, he had already trespassed at the school previously and was escorted off the property and banned with standing orders to not allow him entrance to the building(s). Could a facial recognition-based access control system have produced a different outcome?
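
What Freiburger describes is a standard enrollment-and-match pipeline: compute a face embedding from any photo of the banned subject, then compare every camera-detected face against that list. Here’s a rough sketch of the core loop – the threshold, hooks, and names are hypothetical, not any vendor’s actual system:

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # cosine-similarity cutoff (hypothetical value)

# Enrollment list: subject name -> face embedding, computed upstream by
# a face-recognition model from any available photo of the person.
blacklist: dict[str, np.ndarray] = {}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_blacklist(embedding: np.ndarray) -> str | None:
    """Return the matched blacklist entry's name, or None."""
    for name, enrolled in blacklist.items():
        if cosine_similarity(embedding, enrolled) >= MATCH_THRESHOLD:
            return name
    return None

def lock_doors() -> None:               # hypothetical actuator hook
    print("access control: doors locked")

def alert_security(name: str) -> None:  # hypothetical notification hook
    print(f"security: blacklist match for {name}")

def on_camera_frame(embedding: np.ndarray) -> None:
    """Per-frame handler: a match 'drives an action', as Freiburger puts it."""
    name = match_blacklist(embedding)
    if name is not None:
        lock_doors()
        alert_security(name)
```

The threshold does all the work here: set it low and innocent students trip the locks; set it high and the system misses the one face it exists to catch.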

The problem here is that, for this to work, schools have to become like prisons. Locking down a campus is only effective if the entire compound is secure enough to prevent unauthorized entry. Worse, in order to deny a potential shooter entry, they need to be on a banned list and the AI has to recognize their face. While this sort of system may have prevented Cruz from entering because he was no longer a student, it wouldn’t have prevented Eric Harris and Dylan Klebold – who were enrolled students with every right to be on campus – from entering Columbine High School had this technology been available and in use 20 years ago.

In other words: audio monitoring and facial recognition may alert authorities once a shooting begins – and thus potentially save lives by reducing response time – but they probably won’t be able to stop or prevent catastrophic violence in schools.

TNW spoke to Sean McGrath, a digital privacy expert at ProPrivacy, who told us:

Increased surveillance is unlikely to stop someone wishing to cause harm to others. Even if AI-based surveillance software could somehow pre-warn security professionals of an escalating situation, it’s unlikely to change the outcome. After all, a bullet will always travel faster than those charged with responding to a surveillance alert.

The reality is that these technologies pose a much greater threat to society than the threat posed by mass violence itself. It’s perfectly understandable that academic institutions and other organizations would want to utilize any and all tools in order to safeguard the public, but listening technologies are nothing more than a form of function creep.

The bigger problem here is that AI-powered audio and video surveillance is becoming accepted as a way of life for US citizens, much as it is in China. The government claims it’s for our own good, but experts say it won’t solve the problems it’s being deployed to address. If our privacy is being eroded, we should get something tangible in return.

This isn’t to say AI shouldn’t be used to mitigate the violence problem in the US. AI, as a general technology, can certainly help in situations where violence has already occurred. For example, Freiburger also told us:

Take NYC’s license plate system as an example. The readers can be mounted to city vehicles, intersections and other city infrastructure. The data collected by the readers then feeds into a database, which allows authorized users to track the movement of a vehicle throughout the coverage area. If someone commits a violent act, and an eyewitness captures a license plate number, this system can provide real-time feedback on the movement of that vehicle, which can prevent future violent acts from occurring.
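
Architecturally, that amounts to an append-only log of plate sightings with an indexed lookup by plate and time. A minimal sketch of the data model – the schema is hypothetical, not NYC’s actual system – could look like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sightings (
        plate     TEXT NOT NULL,
        reader_id TEXT NOT NULL,  -- which camera/vehicle reported it
        lat       REAL NOT NULL,
        lon       REAL NOT NULL,
        seen_at   TEXT NOT NULL   -- ISO-8601 timestamp
    )
""")
conn.execute("CREATE INDEX idx_plate_time ON sightings (plate, seen_at)")

def record_sighting(plate: str, reader_id: str,
                    lat: float, lon: float, seen_at: str) -> None:
    """Called by each reader as plates pass; the system only appends."""
    conn.execute("INSERT INTO sightings VALUES (?, ?, ?, ?, ?)",
                 (plate, reader_id, lat, lon, seen_at))

def track(plate: str) -> list:
    """The investigator's query: one plate's movements, in time order,
    run only after an eyewitness report names that plate."""
    return conn.execute(
        "SELECT seen_at, reader_id, lat, lon FROM sightings "
        "WHERE plate = ? ORDER BY seen_at", (plate,)).fetchall()

record_sighting("ABC1234", "cam-042", 40.7527, -73.9772, "2019-07-23T10:15:00")
print(track("ABC1234"))
```

Nothing in this design scores anyone continuously; the database sits inert until an investigator asks about one specific plate.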

But these systems are reactive and targeted. They don’t record everything your children say and do in schools, and they don’t claim to actually stop or prevent mass violence. Audio and video surveillance, even when powered by AI, can’t intervene before a shooting occurs. Social media crawlers – targeted surveillance AI that searches student social media accounts for threats – on the other hand, use publicly available data to do just that.
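
The article doesn’t name a specific crawler, but the pattern it describes reduces to scanning already-public posts from a defined set of accounts and escalating explicit threats to a human. A toy sketch under those assumptions – the patterns and function names are illustrative, and real products use trained classifiers rather than keyword lists:

```python
import re

# Illustrative patterns only; production systems use trained text
# classifiers, not a fixed keyword list.
THREAT_PATTERNS = [
    re.compile(r"\bshoot up\b", re.IGNORECASE),
    re.compile(r"\bbring a gun to\b", re.IGNORECASE),
]

def flag_for_review(public_posts: list[str]) -> list[str]:
    """Flag public posts containing explicit threat language for human
    review. Scans only text that is already public, from accounts the
    school has a stated reason to watch -- targeted, not ambient."""
    return [post for post in public_posts
            if any(p.search(post) for p in THREAT_PATTERNS)]
```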

McGrath thinks we should focus on fighting the root causes of violence in society rather than adopting “just-in-time” mass-surveillance technologies. He continues:

Persistent surveillance affects human behavior at a fundamental level. Our schools and universities have always been environments that promote academic exploration and nurture inquisitive minds. By introducing overbearing surveillance technologies, we are threatening those principles.

But it might be too late for the US to extricate mass-surveillance systems from public life or to start regulating these so-called solutions to mass violence. They’re already in our public schools, libraries, and mass-transit systems. As David Carroll, the US professor who took on Cambridge Analytica, puts it: the US and China are both surveillance states; China just embraces it.
