
This article was published on September 3, 2021

Why are so many scummy, scammy AI companies thriving?

$urely, there'$ a $imple rea$on, let'$ $ee if we can figure it out.



After about the fifth or sixth time you read an article about a Black man being wrongfully arrested due to faulty facial recognition AI, you start to wonder how nobody seems to be doing anything to stop this from happening.

Sure, whenever something goes wrong, the company behind the software is always working to improve results and the law enforcement agency using the software is always reviewing procedures to ensure this doesn’t happen again.

But it does. It seems like a day doesn’t go by without a law enforcement agency being exposed for misusing facial recognition or predictive-policing systems.


And it’s not just the government. Big business, small business, and everything in between are caught up in the AI snake oil craze.

Hiring algorithms that judge human emotion, honesty, or sentiment are inherently unethical and biased. AI systems that claim to predict human behavior before it happens are almost always scams.

Yet hundreds – perhaps even thousands – of companies that specialize in BS AI are thriving. Why?

The short answer: money. It’s money. It’s always money.

Must be the money

Most AI companies and organizations are in pursuit of useful technology. But the ones we’re focused on in this article are those that know they’re pushing snake oil and rely on hyperbole, “human in the loop” BS, and fuzzy statistics to obscure what their products can actually do.

And this is mostly about startups; we’ll get to big tech and academia’s role in the crapshow that is the AI world in future articles.

But the reason companies use scammy hiring algorithms that clearly discriminate against Black applicants, and police departments don’t mind using patrol-scheduling software that makes mathematically impossible claims about predicting crime, is that everyone involved in the entire process gets paid.

Imagine you’re a businessperson who’s interested in AI and you come up with a really cool idea. We’ll use predictive-policing AI as an example:

Wouldn’t it be cool if we could predict where crime was going to happen?

You’re not an AI expert, but it seems like this should be possible using modern technology. After all, isn’t there AI that can tell if someone’s gay (no), determine if someone’s a terrorist by looking at their face (hell no), and fool humans into thinking the things it writes were written by humans (also, hell no)?

Luckily for you, there are plenty of AI developers who definitely think they can create algorithms capable of predicting, based on historical data, where police presence will be most needed in a given area. With enough data, you can predict anything, right? (No.)

Now you just need funding. VCs will fund anything so long as there’s a market for it, it’s not explicitly illegal, and there’s money to be made.

Once the product is funded, developed, and packaged, it’s up to the sales and marketing teams to figure out the rest.

If you’re the police officer responsible for purchasing new software for your department, and someone tells you they’ve got research demonstrating their system is better at predicting crime than your current method, it sounds like you might be getting a good deal.

At no point from inception to implementation is anyone involved obliged to wonder if it’s ethical to use this software, because anyone who actually believes a machine can predict crime is in no position to opine on its ethical implementation and everyone else involved is in on the scam.

Basically, the good apples believe systems such as hiring algorithms, predictive-policing, and facial recognition will take the human bias out of situations where it can be a problem, and the bad apples know it’ll do the opposite, but they don’t care so long as there’s money to be made.

The founders make money up front. The VCs get theirs later, and the organizations implementing scammy AI can typically replace several useful systems and humans with a so-called all-in-one package – or, in the case of government orgs, they can justify budget increases with the AI’s output.

Human in the loop

You’d think there’d be somebody somewhere with the power to say “Hey, I’m versed in computer science 101 and basic mathematics and your research papers are a joke. We shouldn’t create/sell/purchase/use this product.”

But you’d be wrong.

CEOs often don’t understand the finer details of their products. If your head of AI says they can build a system to predict crime, who are you to tell them they’re lying?

Often the AI person isn’t lying, they just have a myopic view of what “predicting crime” means because they’re in no position to understand the actual nature of crime – something criminology and sociology experts spend their entire lives studying.

Your average tech startup doesn’t tend to hire IT talent based on their sociology credits.

And when it comes to sales teams, marketers, and purchasing agents: it’s in everybody’s best interest to believe the hype and nobody’s qualified to dispute it.

The average PR representative or HR manager is not going to read an AI startup’s research papers and suddenly exclaim “Hey, wait, these statistics were run against a survey with no ground truth. This math doesn’t add up; we’re being scammed!”

Unfortunately, the media tends to make things worse. The sheer number of reporters who take press releases as gospel when reporting on these companies is staggering.

It’s another case where many journalists don’t know as much about the topic as the companies and developers they’re interviewing and quoting.

The Blue Fairy

And, finally, the main reason the BS AI ecosystem seems to thrive is that almost everybody wants to believe the products it produces are real.

Everybody but criminals wants to believe an AI could predict crime. Everybody should want to believe that an AI hiring algorithm could solve the problem of bigoted hiring practices.

A lot of people want to believe that facial recognition can accurately identify people, sentencing algorithms can be fair, and computer vision can determine if someone’s gay, a terrorist, or being sincere.

And there are a lot more marketing and PR agents in the world than there are journalists who know what they’re talking about or AI experts willing to publicly call out BS when they see it.

Until those things change, we’ll continue to be treated to a never-ending series of quiet reports demonstrating how flawed these AI systems are, followed by very loud articles detailing how the companies responsible are working diligently to “improve” their systems.
