
This article was published on February 18, 2021

Why developing AI to defeat us may be humanity’s only hope



One glance at the state of things and it’s evident humanity’s evolved itself into a corner. On the one hand, we’re smart enough to create machines that learn. On the other, people are dying in Texas because elected officials want to keep the government out of Texas. Chew on that for a second.

What we need isn't a superhero, it's a better villain.

Independence Day

Humans fight. Whether you believe it's an inalienable part of our mammalian psyche or that we're capable of restraint but simply unwilling to exercise it, the fact that we're a violent species is inescapable.

And it doesn't appear that we're getting better as we evolve. In 2002, researchers from the University of Iowa conducted a review of the existing research on 'human aggression,' and their findings, as expected, painted a pretty nasty picture of our species:


In its most extreme forms, aggression is human tragedy unsurpassed. Hopes that the horrors of World War II and the Holocaust would produce a worldwide revulsion against killing have been dashed. Since World War II, homicide rates have actually increased rather than decreased in a number of industrialized countries, most notably the United States.

The rational end game for humanity is self-wrought extinction. Whether via climate change or mutually assured destruction through military means, we've entered a gridlock against our own progress.

Luckily for us, humans are highly adaptive creatures. There's always hope we'll find a way to live together in peace and harmony. Typically, these hopes are abstract – if we can just solve world hunger with a food replicator like the ones in Star Trek, then maybe, just maybe, we can achieve peace.

But the entire history of humanity is evidence against that ever happening. We are violent and competitive. After all, we have the resources to feed everyone on the planet right now. We’re just choosing not to.

That’s why we need a better enemy. Choosing ourselves as our greatest enemy is self-defeating and stupid, but nobody else has stepped up. We’re even starting to kick the coronavirus’ ass at this point.

Simply put: we need the aliens from the movie Independence Day to come down and just attack the crap out of us.

Or… killer robots

Just to be clear, we’re not advocating for extraterrestrials to come and exterminate us. We just need to focus all of our adaptive intelligence on an enemy other than ourselves.

In artificial intelligence terms, we need a real-world generative adversarial network where humans are the generator, learning to produce an output that satisfies the aliens acting as the discriminator. That's pretty much the plot of Independence Day, the 1996 film starring Will Smith (spoilers):

  • Humans are so war-like that two adorable men, Will Smith and Harry Connick Jr., are forced to become warfighters in the military
  • Aliens, probably having watched The Fresh Prince of Bel-Air and Little Man Tate, decide this is a travesty and come to destroy us
  • Humans band together in one united front and defeat the aliens

The aliens created a problem and challenged us to solve it. The only solution was optimization. We optimized and solved the problem, thus creating an acceptable output. Anything less than total cooperation and our species would have failed to pass the discriminator’s test and the aliens would have swatted our attempt away like a cosmic Dikembe Mutombo.
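For readers who want the analogy made concrete, here's a minimal sketch of that generator/discriminator loop, written in PyTorch (an assumption; the article names no framework or data). The generator keeps adapting until its output passes the discriminator's test – the same "improve or get swatted away" dynamic described above.

```python
# A minimal GAN sketch on toy data (an illustrative assumption, not anyone's
# production system): the generator learns to produce samples that the
# discriminator can no longer distinguish from the "real" distribution.
import torch
import torch.nn as nn

def real_samples(n):
    # The standard the discriminator enforces: samples from N(4, 1.25).
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(5000):
    # Discriminator: learn to tell real samples from generated ones.
    noise = torch.randn(64, 8)
    fake = generator(noise).detach()
    real = real_samples(64)
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: adapt until the discriminator can no longer reject its output.
    noise = torch.randn(64, 8)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should drift toward the real mean (~4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```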

The real problem

Humans are often evil, bigoted, and full of malice. But we’re still people. The real problem is that aliens won’t just cooperate and come attack us. It’d be hard to focus on something like Brexit, a US election, or whether Google’s latest plan to deal with ethics in AI is a good one if aliens were currently firing lasers at cities all over the world.

We can’t control aliens. In fact, it’s possible they don’t even exist. Aliens are not dependable enemies.

We do, however, have complete control over our computers and artificial intelligence systems. And we should definitely start teaching them to continuously challenge us.

The easy solution

With AI, we can dictate how powerful an opponent it becomes with smart, well-paced development. We could avoid the whole shooting lasers at cities part of the story and just slowly work our way towards the rallying part where we all work together to win.

The current paradigm for AI development is creating things that help us. And maybe doing so in a vacuum is what's hurting us. Our soldiers use AI to help them fight other humans. Our teachers and business leaders use AI to organize classrooms and workplaces. Some of this is good on its surface; some of it's supposedly for the greater good.

But how much easier can we possibly make it to be a human before we get a full-blown case of WALL-E Syndrome? We should be developing artificial intelligence that challenges every one of us in tandem with life-saving and life-affirming AI.

Take the domain of chess, for example. There's no longer any question of whether humans or AI dominate the chessboard. The greatest human chess players can be beaten by engines running on smartphone processors; ladies and gentlemen, it's a wrap. And that's a good thing.

For centuries we've used chess as an analogy for war strategy. Now, if we ever get into a fight with a future evil, sentient AI, we know it'll probably be able to best us tactically.

We need to focus on training our troops to fight an enemy that’s stronger and more capable than humans and, more importantly, on developing defense methods that don’t rely on killing any humans, but on protecting all of them. 

Humans obviously crave challenge. We compete for sport and for fun. The only reason billionaires and would-be trillionaires exist is that sheer human hubris makes being “the best” seem preferable to being “the best for us all.”

Perhaps a technological shift toward challenging humans in ways we can't challenge one another could turn the tide back in our favor.

Imagine a video game that continuously challenged you in deeply personal ways or a workplace evaluation system that adjusted to your unique personality and experiences, thus always prompting you to do your best work.
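To make that idea slightly less abstract, here's a hypothetical sketch of an adaptive-difficulty loop in Python. Everything in it (the AdaptiveChallenger class, the play_round stand-in, the numbers) is an illustrative assumption rather than a description of any real game or evaluation system: the opponent ratchets difficulty up when the player wins too easily and eases off when they're overwhelmed, keeping the challenge pinned near the edge of their ability.

```python
# A hypothetical adaptive challenger: tune difficulty so the player stays
# near a target win rate instead of coasting or getting crushed.
import random

class AdaptiveChallenger:
    """Keeps the player's recent win rate near a target by tuning difficulty."""

    def __init__(self, target_win_rate=0.5, step=0.05):
        self.difficulty = 0.5           # 0.0 = trivial, 1.0 = impossible
        self.target_win_rate = target_win_rate
        self.step = step
        self.results = []

    def record(self, player_won: bool):
        self.results.append(player_won)
        recent = self.results[-10:]     # look at the last ten rounds
        win_rate = sum(recent) / len(recent)
        if win_rate > self.target_win_rate:
            self.difficulty = min(1.0, self.difficulty + self.step)
        else:
            self.difficulty = max(0.0, self.difficulty - self.step)

def play_round(player_skill: float, difficulty: float) -> bool:
    # Stand-in for an actual game: the player wins when skill beats difficulty,
    # with a little luck mixed in.
    return player_skill + random.gauss(0, 0.1) > difficulty

challenger = AdaptiveChallenger()
player_skill = 0.7
for _ in range(100):
    won = play_round(player_skill, challenger.difficulty)
    challenger.record(won)

# Difficulty should settle near the player's skill level (~0.7).
print(f"Final difficulty: {challenger.difficulty:.2f}")
```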

The overall goal would be to create a system in which tribalism is replaced with cooperation. Our DNA itself seems laden with humanity's birth trauma, and we've spent the last 5,000 years (at least, per recorded history) coming up with ways to work around that. But, with concentrated redirection, maybe our appetite for adversity could become a strength for our species.

Maybe we need an AI adversary to be our “Huckleberry” when it comes to the urge for competition. If we can’t make most humans non-violent, then perhaps we could direct that violence toward a tangible, non-human opponent we can all feel good about defeating.

We don't need killer robots or aliens for that. All we need is for the AI community, and humanity at large, to stop making it ever easier to do the violent things we've always done to each other and to start giving us something else to do with all those harmful intentions.

Maybe it’s time we stopped fighting against the idea of robot overlords, and came up with some robot overlords to fight.

Will Smith could not be reached for comment.
