Oxford philosopher’s newest hypothesis predicts the rise of super villains


The latest research paper from Nick Bostrom, Oxford philosopher and founding director of the Future of Humanity Institute, suggests our species could be on a collision course with a technology-fueled super villain.

Will a deranged lunatic soon have the capabilities to take the entire world hostage? Can our nation’s leaders do anything to stop this inevitable tragedy? Will the caped crusader rescue his sidekick before the Joker’s sinister trap springs?

In the paper, titled “The Vulnerable World Hypothesis,” Bostrom posits that the whole of human technological achievement can be viewed as a giant urn filled with balls, one of which we draw each time we invent something. Some of the balls, says Bostrom, are white (beneficial), most are gray (neutral), but so far none have been black: a Pandora’s Box of a technology that destroys the civilization that pulls it out. Bostrom says:

What if there is a black ball in the urn? If scientific and technological research continues, we will eventually reach it and pull it out. Our civilization has a considerable ability to pick up balls, but no ability to put them back into the urn. We can invent but we cannot un-invent. Our strategy is to hope that there is no black ball.

That’s a terrible strategy. And that’s probably why Bostrom’s put his considerable mental faculties to work on the new paper, a work in progress that explores some “concepts that can help us think about the possibility of a technological black ball, and the different forms that such a phenomenon could take.”
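
The force of the urn metaphor is just the arithmetic of repeated trials: if each draw carries any fixed nonzero chance of being a black ball, the odds of avoiding one forever shrink toward zero as invention continues. Here’s a minimal Python sketch of that logic; the one-in-a-thousand per-draw probability is an arbitrary number chosen purely for illustration, not anything Bostrom assigns:

```python
import random

def draws_until_black(p_black, max_draws=1_000_000):
    """Pull balls from the urn until a black one turns up.

    p_black is the assumed chance that any single invention is a
    civilization-ending black ball. Returns the draw number on which
    the black ball appeared, or None if max_draws passed without one.
    """
    for draw in range(1, max_draws + 1):
        if random.random() < p_black:
            return draw
    return None

random.seed(0)  # make the example reproducible
# The chance of surviving n draws is (1 - p_black) ** n, which tends to 0,
# so each simulated civilization below eventually pulls the black ball.
print([draws_until_black(p_black=0.001) for _ in range(5)])
```

With a per-draw probability of 0.001, the expected wait is about a thousand draws; the only way to avoid the black ball entirely is for its probability to be exactly zero, i.e. to hope it isn’t in the urn at all.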

Put succinctly, Bostrom’s formulation of the Vulnerable World Hypothesis (VWH) is:

If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.


Anyone else catch the episode of the 1998 science fiction anthology TV show “The Outer Limits” called “Final Exam”? In it, a college student wreaks havoc on those who’ve wronged him after demonstrating that he’s discovered cold fusion and can build nukes with it. The plot centers on the inevitability of technology: even if we stop one evil genius from discovering and using something horrible, someone else will figure it out.

Bostrom makes the same point in his paper: if we assume there’s at least one “black ball” in the urn, then we must also assume someone is going to pull it out one day. He sketches a number of ways this could play out, including “easy nukes,” even worse climate change, and a “War Games”-style scenario in which the world’s super powers realize that whoever strikes first will be the sole survivor.

But the scariest part isn’t how we’ll all be destroyed; it’s what preventing it would require. Bostrom outlines four possibilities for achieving “stabilization,” that is, for ensuring we don’t wipe ourselves out with our own technology. They’re terrifying:

  1. Restrict technological development.
  2. Ensure that there does not exist a large population of actors representing a wide and recognizably human distribution of motives.
  3. Establish extremely effective preventive policing.
  4. Establish effective global governance.

In other words, all we need to do is stop Google, get everyone in agreement on our collective morals, create a ubiquitous surveillance state, and establish a one-world government.

It’s worth pointing out that Bostrom isn’t endorsing the view as correct – he’s a philosopher, and philosophers deal in possibilities and probabilities; nothing proves his hypothesis right. Though, as he puts it, “…it would seem to me unreasonable, given the available evidence, to be at all confident that VWH is false.”

And, as for the scary list above, Bostrom advises weighing the pros against the cons:

A threshold short of human extinction or existential catastrophe would appear sufficient. For instance, even those who are highly suspicious of government surveillance would presumably favour a large increase in such surveillance if it were truly necessary to prevent occasional region-wide destruction. Similarly, individuals who value living in a sovereign state may reasonably prefer to live under a world government given the assumption that the alternative would entail something as terrible as a nuclear holocaust.

If you’re in the mood to face our species’ mortality head-on, you can read the entire paper here on Bostrom’s website. It’s a work in progress, but it’s a fascinating portrayal of our imminent doom, and well worth the horrifying read.


