This article was published on February 23, 2018

Are Asimov’s Laws of Robotics still good enough in 2018?

It’s been 76 years since renowned science fiction author Isaac Asimov penned his Laws of Robotics. At the time, they must have seemed future-proof. But just how well do those rules hold up in a world where AI has permeated society so deeply we don’t even see it anymore?

Originally published in the short story “Runaround,” Asimov’s laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

For more than 75 years, Asimov’s Laws have seemed like a good place to start when it comes to regulating robots. Will Smith even made a movie about it. But according to the experts, they simply don’t apply to today’s AI.

In fairness to Mr. Asimov, nobody saw Google and Facebook coming back in the 1940s. Everyone was thinking about robots with arms and lasers, not social media advertising and search engine algorithms.

Yet, here we are on the verge of normalizing artificial intelligence to the point of making it seem dull — at least until the singularity. And this means stopping robots from murdering us is probably the least of our worries.

In lieu of sentience, the next stop on the artificial intelligence hype-train is regulation-ville. Politicians around the world are calling upon the world’s leading experts to advise them on the impending automation takeover.

So, what should rules for artificial intelligence look like in the non-fiction world?

According to a report published this week by Cambridge Consultants, titled “AI: Understanding And Harnessing The Potential,” there are five key areas that rules for AI should address:

Regardless of the way in which rules are set and who imposes them, we think the following principles identified by various groups above are the important ones to capture in law and working practices:

  1. Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
  2. Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is.
  3. Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed.
  4. Transparency: It needs to be possible to test, review (publicly or privately), criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluation should be available publicly and explained.
  5. Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded.
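
To make the first three principles a bit more concrete, here is a minimal sketch of what they could look like in working practice. It’s my own illustration, not code from the report, and every name in it (the system, the owner, the fields) is hypothetical: each decision an autonomous system makes gets logged with a named accountable owner, the exact model version, and a plain-language explanation, so outcomes can be audited and challenged later.

```python
# Hypothetical sketch: an audit record for automated decisions, covering the
# "responsibility", "explainability" and "transparency" principles above.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    system: str             # which autonomous system produced the decision
    model_version: str      # exact model version, so the result can be reproduced
    responsible_owner: str  # named person accountable for the system's behaviour
    inputs: dict            # the data the decision was based on
    outcome: str            # what the system decided
    explanation: str        # plain-language reason, aimed at laypeople
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_audit.log") -> None:
    """Append the decision to an audit log that reviewers can later inspect."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a (made-up) loan-screening system refers a case to a human reviewer.
log_decision(DecisionRecord(
    system="loan-screening",
    model_version="2.3.1",
    responsible_owner="jane.doe@example.com",
    inputs={"income": 42000, "credit_history_years": 7},
    outcome="referred for human review",
    explanation="Income is below the automatic-approval threshold for this loan size.",
))
```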

You’ll notice there’s no mention of AI refraining from the willful destruction of humans. This is likely because, at the time of this writing, machines aren’t capable of making those decisions for themselves.

Common-sense rules for the development of AI need to address real-world concerns. The chances of the algorithms powering Apple’s Face ID murdering you are slim, but an unethical programmer could certainly design AI that invades privacy using a smartphone camera.

This is why any set of rules for AI should focus on predicting harm, mitigating risk, and making safety a priority. Google, for example, has set out its own guidelines for dealing with machines that learn:

We’ve outlined five problems we think will be very important as we apply AI in more general circumstances. These are all forward thinking, long-term research questions — minor issues today, but important to address for future systems:

  1. Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
  2. Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
  3. Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
  4. Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
  5. Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
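
Google’s fifth problem, distributional shift, is easy to picture in a few lines of code. The toy sketch below is my own example (not Google’s code, and the numbers are invented): it simply records the statistics of the data a system was trained on and flags any new input that falls far outside them, the kind of check that would make a factory-trained system defer to a human before acting in an office.

```python
# Toy sketch of a crude distributional-shift check: flag inputs whose features
# sit far outside the statistics of the training data.
import numpy as np

rng = np.random.default_rng(0)

# Pretend training data: sensor readings from the "factory workfloor".
train = rng.normal(loc=20.0, scale=2.0, size=(1000, 3))
train_mean = train.mean(axis=0)
train_std = train.std(axis=0)

def out_of_distribution(x: np.ndarray, threshold: float = 4.0) -> bool:
    """Return True if any feature is more than `threshold` standard
    deviations away from the training mean."""
    z = np.abs((x - train_mean) / train_std)
    return bool(np.any(z > threshold))

factory_reading = np.array([20.5, 19.0, 21.2])  # looks like the training data
office_reading = np.array([55.0, 21.0, 19.5])   # an environment the model never saw

print(out_of_distribution(factory_reading))  # False: safe to act
print(out_of_distribution(office_reading))   # True: better to defer to a human
```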

The future of AI isn’t just a problem for companies like Google and Cambridge Consultants, though. As machine learning becomes part of more and more devices, including the majority of smartphones and computers, its effects will only be amplified. Unethical code could propagate in the wild, especially since we know that AI can be developed to create better algorithms than people can.

It’s clear that the regulatory and ethical problems in the AI space have little to do with killer robots, with the exception of purpose-built machines of war. Instead, governments should focus on the dangers AI could pose to individuals.

Of course, “don’t kill humans” is a good rule for all people and machines whether they’re intelligent or not.
