
This article was published on October 29, 2021

Why flat-Earthers are a clear and present threat to an AI-powered society

Wait for the twist



“Fool me once, shame on…shame on you. Fool me – you can’t get fooled again.” – Former US President George W. Bush

It’s easy to laugh at someone who believes the Earth is flat. Dunking on pro-diseasers (AKA: antivaxxers) has become one of the internet’s favorite sports. And it’s downright fashionable by social media standards to ridicule anyone who questions whether the climate crisis is real.

There’s always going to be a small, vocal contingent of people who simply cannot be convinced of a ground truth.

Whether these people deserve ridicule is a question for our personal consciences, but one thing is certain: they need to be educated.


Unfortunately, we’ve been trying to do that for decades.

Boston University’s Lee McIntyre, an expert on “science denial,” gave an interview to Physics World last year in which he described the scope of the problem.

Per the article:

From McIntyre’s perspective, flat-Earth conspiracies are a danger and need confronting. “Maybe 10 or 20 years ago, I would have said, just laugh at them, how much traction are they going to get? I no longer feel that way.”

If these ideas are not challenged, he fears that as with supporters of “intelligent design,” proponents of a flat Earth will start running for US school boards, looking to push their ideas into the US education system.

Experts such as McIntyre have spent entire careers trying to understand and counter science denial. But the core of the problem is much more complex than just challenging bad science or faulty math.

The article goes on to say:

McIntyre, for example, recalls asking one flat-Earther why planes flying over Antarctica from, say, Chile to New Zealand don’t have to refuel, which they’d have to if the continent were (as they believe) an ice wall tens of thousands of kilometres long. He was simply told that planes can fly on one tank of fuel and refuelling (sic) planes could just be a giant hoax to stop us realizing that the Earth is flat.

If we use Physics 101 to clearly and unequivocally demonstrate how the curvature of the Earth determines what we can and can’t see on the horizon, they’ll just claim we’re using the wrong cameras or angles.
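
That Physics 101 is one line of geometry: an observer at height h on a sphere of radius R can see roughly √(2Rh) before the surface curves away, which is why ship hulls vanish before their masts do. Here’s a quick sketch (the Earth’s mean radius is the standard figure; the example heights are just illustrative):

```python
# Horizon distance on a sphere: d ≈ sqrt(2 * R * h) for an observer at height h.
import math

R_EARTH_M = 6_371_000  # mean radius of the Earth, in meters

def horizon_km(observer_height_m: float) -> float:
    """Approximate distance to the horizon, in kilometers."""
    return math.sqrt(2 * R_EARTH_M * observer_height_m) / 1000

print(f"{horizon_km(2):.1f} km")    # eye level on a beach: ~5.0 km
print(f"{horizon_km(100):.1f} km")  # a 100 m cliff: ~35.7 km
```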

Simply put: It’s easier for a large camel to pass through a skinny crevice than it is to convince someone the Earth is not flat when we all have the same information available to us.

But here’s the twist: The majority of people on this planet – whether it’s flat or round – believe ludicrous things about artificial intelligence that could be far more dangerous than any other science-denial belief save, perhaps, climate-crisis denial.

There are almost certainly people from every walk of life, in every industry, at every school, in every police precinct, and working for just about every news outlet who believe things about artificial intelligence that are simply not true.

Let’s make a short list of things that are demonstrably untrue that the general public tends to believe:

  1. Big tech is making progress mitigating racial bias in AI
  2. AI can predict crime
  3. AI can tell if you’re gay
  4. AI writing/images/paintings/videos/audio can fool humans
  5. AI is on the verge of becoming sentient
  6. Having a human in the loop mitigates bias
  7. AI can determine if a job candidate will be successful
  8. AI can determine gender
  9. AI can tell what songs/movies/videos/clothes you’ll like
  10. Human-level self-driving vehicles exist

And that list could go on and on. There are thousands of useless startups and corporations out there running basic algorithms and claiming their systems can do things that no AI can do.

Those that aren’t outright peddling snake oil often fudge statistics and percentages to mislead people concerning how efficacious their products are.

These range from startups claiming they’ll let you speak with your dead loved ones, by feeding an AI system their old texts and creating a chatbot that imitates them, all the way to well-funded outfits such as PredPol.

The people running these companies are either ignorant, or they’re disingenuous hucksters who know full well they’re selling the same technology that, for example, IBM’s Watson uses to power the chatbots that pop up when you visit your bank’s website. “How can we help?”

And it’s just as bad in academia. When researchers claim that a text generator or style imitation algorithm can “fool” people with its text or “paintings,” they’re not soliciting expert opinions; they’re asking Mechanical Turk workers what they think.

You can’t predict crime.

You can’t tell if someone’s queer using AI.

You can’t tell someone’s politics using AI.

You can’t tell if someone’s a terrorist by looking at their face.

Tesla’s automobiles cannot safely drive themselves, no matter what the videos you’ve watched on YouTube or Twitter appear to show.

Netflix doesn’t know what you want to watch, Spotify doesn’t know what you want to hear, and Twitter certainly doesn’t know what conversations you want to engage in.

Big tech is not making any demonstrable progress on mitigating racial bias. They’re simply tweaking their individual systems to show statistically insignificant “increases” in algorithmic bias detection.

And AI cannot tell if a job candidate is right for a job.

The idea of sentient, living, or conscious AI is fiction, for now.

So, why do entire governments endorse shitty products such as those peddled by the snake oil salespeople at PredPol and ShotSpotter?

Why does Stanford continue to support research from a team that claims it can use AI to tell if a person is gay or liberal?

Why are Tesla’s “Full Self-Driving” and Autopilot features so popular when they are clearly and demonstrably nowhere near safely self-driving or autopiloting a vehicle?

Why do so many people believe that GPT-3 can fool humans?

Because trillions upon trillions of dollars are at stake, and because these systems all provide a clear benefit to their users quite apart from their advertised use cases.

When police are accused of over-policing Black neighborhoods, the only logical defense would be to demonstrate that officers are deployed evenly across neighborhoods, regardless of their racial makeup.

But that would mean the police can’t over-police Black neighborhoods anymore. PredPol’s software allows the police to lie to the public by claiming that the number of historical arrests is a predictor of the number of future crimes.

And, sadly, it appears as though this simple trick of math is something the general public simply doesn’t get.
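
To see the trick, remember that arrests measure where police were looking, not where crime happened. If you then allocate patrols in proportion to past arrests, the loop simply preserves whatever disparity you started with. Here’s a deliberately minimal simulation; the numbers and the model are invented for illustration and are not PredPol’s actual algorithm:

```python
# Toy feedback loop: two neighborhoods with IDENTICAL true crime rates,
# but neighborhood A starts out more heavily patrolled. All numbers invented.
TRUE_CRIMES = 100              # actual crimes per period, the same in both places
TOTAL_PATROLS = 40
patrols = {"A": 30, "B": 10}   # A is historically over-policed
arrests = {"A": 0, "B": 0}

for period in range(10):
    # More patrols means more of the same underlying crime gets observed.
    for hood in arrests:
        arrests[hood] += TRUE_CRIMES * patrols[hood] // 100

    # The "predictive" step: next period's patrols are allocated in
    # proportion to historical arrests, which is exactly the claim at issue.
    total = sum(arrests.values())
    for hood in patrols:
        patrols[hood] = round(TOTAL_PATROLS * arrests[hood] / total)

print(arrests)  # {'A': 300, 'B': 100}: a 3:1 arrest gap from identical crime rates
```

The system “predicts” three times as much crime in neighborhood A forever, and every arrest it causes confirms the prediction.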

It’s the same with companies that use hiring software to determine who the best candidates are.

AI can’t tell you who the best candidate is. What it can do is reinforce your existing biases by taking your records on candidates who have traditionally done well and applying them as a filter over potential candidates. Thus, the ultimate goal of these systems is to empower a business to choose the candidates it wants and, if there are accusations of bias, to let HR blame the algorithm.
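
Here’s a stripped-down sketch of that filter; the data and the “model” are fabricated for illustration, and real hiring tools are fancier, but the failure mode is the same:

```python
# Toy "AI hiring" screen trained on biased historical decisions.
# Records are (went_to_favored_school, was_hired); all data fabricated.
history = [(1, 1)] * 80 + [(1, 0)] * 20 + [(0, 1)] * 10 + [(0, 0)] * 90

# "Training" is just estimating P(hired | school) from the old records.
def hire_rate(school_flag: int) -> float:
    outcomes = [hired for school, hired in history if school == school_flag]
    return sum(outcomes) / len(outcomes)

model = {1: hire_rate(1), 0: hire_rate(0)}

# "Screening" a new candidate reproduces the historical pattern verbatim.
def score(candidate_school: int) -> float:
    return model[candidate_school]

print(score(1), score(0))  # 0.8 0.1: yesterday's bias, today's "objective" ranking
```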

But there’s a pretty good chance that most people, even many who develop AI technologies themselves, believe at least one of these big lies about what AI can and can’t do.

And, to one degree or another, we’re all being exploited because of those misguided beliefs.

If we should be afraid of electing flat-Earthers, or having pro-diseasers in our schools, or cops on the force who believe the US election was rigged by Democrats who literally eat babies, then we should be absolutely terrified about what’s happening in the world of AI.

After all, if the flat-Earth and antivaxx movements are growing exponentially year over year, what possible hope could we have of halting the spread of mass-held beliefs that AI can do all sorts of things that are demonstrably impossible?

How much harm can ignorance cause?

Have a Happy Halloween.
