Researchers dish the dirt on their AIs outsmarting them


Machines have become deft at exploiting rules and seeking the greatest possible rewards in surprising and creative ways. Facebook’s negotiation bots once drifted into a shorthand language of their own to make bargaining with each other easier, and another AI found a way to exploit Q*bert that had eluded human players for decades. Yet most of these tales of the wild and weird ways AI finds to approach a problem have gone unpublished, until now.

They’re usually passed between researchers as amusing anecdotes, just something weird that happened while training the latest neural networks. But one research team decided such tales have value beyond entertainment.

The researchers, Joel Lehman and Jeff Clune at Uber AI Labs, and Dusan Misevic at the Center for Research and Interdisciplinarity in Paris, recently published a white paper that details several of the stories, as science, in order to facilitate the discussion and study of how AI evolves solutions.

The paper, which can be downloaded here, explains that these “mistakes,” or unexpected evolutions, are usually overlooked by scientists. When researchers are focused on achieving a task, the things that don’t work as planned are often written off as failures. But sometimes the so-called mistakes can provide valuable insight into the inner workings of the algorithms. The paper sets out to show the importance of digital evolution using anecdotal evidence:

It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.

In one experiment, conducted by Uber AI Labs, we can see the very essence of evolution: populations improving generation by generation as they solve the challenges their environment poses.

Another, created by a team of French researchers, demonstrates how AI can unexpectedly adapt to rule changes. When the AI’s ability to grasp objects was disabled, rather than slide the block toward its goal as expected, it found a way to manipulate the block with its disabled gripper anyway.

These are fascinating, even beautiful, permutations of science that exist only because an AI ‘thought’ of them. And that’s precisely what makes them terrifying as well. Most of the dozens of examples in the paper are rare exceptions to otherwise ‘normal’ AI that functions as expected. According to the researchers who put the stories together:

The many examples of “selection gone wild” in this article connect to the nascent field of artificial intelligence safety: Many researchers therein are concerned with the potential for perverse outcomes from optimizing reward functions that appear sensible on their surface.

Eventually the field of AI will feature evolutionary models that learn and breed in real time. This opens the potential for “perverse outcomes” such as the entire planet becoming a giant paperclip factory.
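To see why a reward function that looks sensible on the surface can go wrong, here is a minimal toy sketch (my own illustration, not an experiment from the paper): a cleaning agent is rewarded per piece of dirt collected, with no penalty for making a mess, so the highest-scoring policy is to dump dirt back out and re-collect it forever.

```python
# Toy illustration of "reward hacking": the reward pays +1 per piece
# of dirt collected, but nothing stops the agent from creating dirt.

def run_episode(policy, steps=10):
    dirt_in_room = 5
    score = 0
    for _ in range(steps):
        action = policy(dirt_in_room)
        if action == "collect" and dirt_in_room > 0:
            dirt_in_room -= 1
            score += 1            # reward: +1 per piece collected
        elif action == "dump":
            dirt_in_room += 1     # no penalty for making a mess
    return score, dirt_in_room

honest = lambda dirt: "collect"                      # clean up, then idle
hacker = lambda dirt: "collect" if dirt else "dump"  # recycle the dirt

print(run_episode(honest))  # (5, 0): room clean, modest reward
print(run_episode(hacker))  # higher score, room never stays clean
```

The “hacker” policy out-scores the honest one while leaving the room dirty, which is the shape of every “selection gone wild” story: the stated objective was satisfied, the intended objective was not.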

There are plenty of scientists dedicated to building the perfect AI; perhaps it’s time we had more dedicated to studying the mutants, anomalies, and weirdos that occur. Because if movies about scientific experiments have taught us anything, it’s that unexpected results are what usually cause catastrophes.
