A group of physicists working on the nuclear weapons program at Los Alamos Scientific Laboratory in 1950 were having a discussion about aliens over lunch one day. The general consensus was that the reason humans hadn’t yet met any was that life is scarce in the universe and we simply hadn’t been lucky enough to encounter it.
One of those physicists, the Italian-born researcher Enrico Fermi, scoffed at this notion. He ran some back-of-the-envelope numbers to dispute his colleagues’ conjecture: in Fermi’s estimation, if aliens existed, they should have visited us already. This became known as the Fermi paradox.
At its most basic, it goes like this: if life isn’t unique to our planet, and the universe is as vast and old as it appears, the odds are great that it is flooded with life. And if it’s flooded with life, the odds are great that at least one of those life forms should have visited us by now. So the simplest explanation – the one Occam’s razor would favor – is that aliens don’t exist because life really is special.
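The “odds are great” hand-waving is usually made concrete with Drake-equation-style arithmetic: multiply a chain of estimated factors to guess how many detectable civilizations share our galaxy. Here’s a minimal Python sketch – every input value below is an assumed placeholder for illustration, not a measured quantity or a figure from Berezin’s paper:

```python
# Toy Drake-equation estimate. All values are illustrative assumptions;
# plug in your own guesses and watch the answer swing by orders of magnitude.
star_formation_rate = 1.5     # new stars per year in the Milky Way (assumed)
frac_with_planets   = 0.9     # fraction of stars hosting planets (assumed)
habitable_per_star  = 0.4     # habitable planets per planet-hosting star (assumed)
frac_life           = 0.1     # fraction of habitable planets where life arises (assumed)
frac_intelligent    = 0.01    # fraction of those that evolve intelligence (assumed)
frac_communicating  = 0.1     # fraction that develop detectable technology (assumed)
lifetime_years      = 10_000  # years a civilization stays detectable (assumed)

n_civilizations = (star_formation_rate * frac_with_planets
                   * habitable_per_star * frac_life
                   * frac_intelligent * frac_communicating
                   * lifetime_years)
print(f"{n_civilizations:.2f}")  # prints 0.54 with these particular guesses
```

The point isn’t the output number – it’s that optimistic inputs yield a galaxy teeming with civilizations, which is exactly what makes the observed silence a paradox.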
This doesn’t bother enthusiasts. An alien enthusiast will look you straight in the eye and declare no fewer than a dozen excellent reasons why aliens exist but either don’t want to visit us or can’t. And it’s incredibly hard to argue with the suggestion that they know we’re here but are pretending not to see us. We’re not exactly putting our best foot forward as the “intelligent” life forms of planet Earth right now.
None of these arguments are new. And Fermi wasn’t the first person to dispute the idea that aliens exist by pointing out there’s absolutely no evidence for them. But Alexander Berezin, a researcher from the National Research University of Electronic Technology in Russia, has a new spin that could put both camps in their place.
What if aliens do exist, but the reason we haven’t found them yet is because we’re doomed to destroy them? Berezin says the simplest way to resolve the Fermi paradox is a “first in, last out” solution:
I argue that the Paradox has a trivial solution, requiring no controversial assumptions, which is rarely suggested or discussed. However, that solution would be hard to accept, as it predicts a future for our own civilization that is even worse than extinction.
… what if the first life that reaches interstellar travel capability necessarily eradicates all competition to fuel its own expansion?
The crux of the situation, he argues, is that it doesn’t really matter whether aliens exist or not – only whether we can observe them.
If we assume the reason we don’t observe aliens is that they’re on one of the countless planets orbiting one of the countless stars that aren’t our sun, then the important bit is whether we – or any of those potential aliens – have developed interstellar travel yet. And whoever develops it first will probably end up destroying all the other lifeforms in the universe.
Berezin isn’t saying that Trump’s Space Force is going to go out and kill all the aliens for their resources (although he doesn’t explicitly rule it out). The point of the paper is that an unintended ripple effect could have devastating consequences the expanding civilization never even notices. He writes:
I am not suggesting that a highly developed civilization would consciously wipe out other lifeforms. Most likely, they simply won’t notice, the same way a construction crew demolishes an anthill to build real estate because they lack incentive to protect it. And even if the individuals themselves try their best to be cautious, their von Neumann probes probably don’t.
As far as we can tell, Berezin is basically saying that if we assume the most probable scenarios are true, it’s likely that aliens exist all over the universe – but the first civilization to achieve interstellar travel will inadvertently destroy all the others. Yikes!
And how will this happen? Artificial intelligence, probably. If it doesn’t kill all humans, apparently it’ll just kill all… everything else. Berezin writes:
This problem is similar to the infamous “Tragedy of the commons”. The incentive to grab all available resources is strong, and it only takes one bad actor to ruin the equilibrium, with no possibility to prevent them from appearing at interstellar scale. One rogue AI can potentially populate the entire supercluster with copies of itself, turning every solar system into a supercomputer, and there is no use asking why it would do that. All that matters is that it can.
The good news is that, based on the fact nobody’s accidentally deleted us yet, it’s a near certainty that we’re the ones who’ll be doing the destroying.
Read the whole paper here.
H/t: Peter Dockrill, Science Alert