

How much autonomy is too much for AI?


AI has the power to make decisions on our behalf, and the world is getting excited. But there's always that nagging question: who will be the servant, and who will be the master? Should we approach AI from a completely different angle? Should we hand over control at all and let AI make its own decisions?

Well, in some circumstances, we're going to have to let go of the reins. AI with no decision-making power isn't really AI at all.

Machine learning, at times, must be input-free

There’s no way to reap the massive benefits of machine learning if it has to wait for human input every time there’s a decision to be made. A self-driving car, for instance, simply couldn’t happen if it needed to ask a human for input whenever it had to turn or slow down. Such a vehicle would be exactly like the cars we all know today.

We want to hand over some decisions to machines because we're forced to make so many ourselves. The Microsoft To-Do adverts claim we make 35,000 every single day. There's no study to back that figure up, but a Cornell University study found we make more than 200 decisions a day about food alone, so it's a reasonably educated guess.

But the interesting point is that most of them are instinctive. We don't even think about them because they're hardwired into our brains. When we drive ourselves and steer away from a potential hazard, or slow down because a situation is unfolding up ahead, we make a decision. When we turn our head toward a noise, we make another.


My belief is that if we want to automate tasks, we need to empower machines to make similar decisions. I see no issue with handing over many of these unconscious decisions to AI. Steering our car around a hazard is different from deciding where we're driving. We can draw a line in the sand here, and we must.

Fortunately, I’m not alone in my belief that a line has to be drawn. How and where to draw it, however, is subject to debate.

Where do we draw the line in the sand?

Stephen Hawking, Elon Musk, Bill Gates and just about everybody else in technology have warned that AI could take over the world and wipe us off the face of the Earth. Musk is a major benefactor of the Future of Life Institute, which is determined to keep AI under human control and is prepared to fight for legislation that prevents us from handing that control over. Legislative action would be one way of drawing the line in the sand.

One big issue is whether a machine designed for a specific task can become self-aware and influence other machines through the Internet and the Internet of Things.

It is quite a logical leap to suggest that a car or a robot could become self-aware, quietly hack into connected systems, and launch a revolution against us. But this is the loophole that concerns many critics of AI. If such a loophole exists, legislative action would have to be taken early on, because by the time we discovered that someone had created it, it would be too late to control. The question of whether this danger exists, then, can't be answered after the fact; it has to be settled in advance.

Another angle of the AI debate goes well beyond the issue of danger to the very nature of our place in this world. Why do we need AI at all, and why are we so keen to hand over control to machines? What, if not intelligent reasoning, is our own purpose here? If we pass our major decisions to machines, what's left for us?

Will we willingly hand over control to AI?

AI has recently made its debut in the legal system, and work is underway to create the perfect robot judge that could eventually make rulings instead of a human judge.

Another suggestion is to let such a robot judge support a human judge, who has the benefit of empathy, gut feeling and other human qualities that machines can't replicate. This sounds like a more sensible solution: the AI applies the letter of the law and offers a suggested range of sentences, from which the human judge can choose.

In a similar way, product designers could use the undoubted talents of machine learning to run through thousands of permutations when developing a new product, with a person at the end of the chain to apply common sense and ensure the computers indeed have our best interests at heart.

Surgeons are set to be replaced by robots that have a steadier hand, can effectively see through the patient at all times, and can complete operations faster. But should a qualified surgeon be in attendance?

This is where another problem arises. If mortality rates fall and AI surgeons prove safer and more successful than human doctors, which is perfectly possible, then it makes logical sense to give robot surgeons an even greater level of autonomy.

So, finally, even an AI guided by a human could become autonomous.

We have to consider human nature. If computers are always right, then, in the end, we will hand over control.

In the same way, designers who find that the computer is simply better will be reduced to rubber-stamping its work. And if the AI judge returns a perfect ruling every time, the human judge will eventually stop rendering a verdict of their own.

So perhaps we don't need a Terminator-style uprising at all. Perhaps that's merely the fictional analogy for what will really occur: we will willingly hand over our decision-making power to a superior computer-based intellect. A powerful AI could render human decision-making irrelevant within a single generation. Children growing up with perfect AI would very likely never develop the urge to make decisions of their own.

Through a combination of laziness and the simple knowledge that AI has more processing power than the human brain, we could hand over the world and every major decision in it to computers that never even sought power or control.

AI must be used to empower humankind

Instead of handing control of the world over to computers, should we not use AI to empower our own decisions?

This is the objective I have in mind whenever I think about how AI should work, and specifically as we continue to build our full-stack AI development platform in Germany.

Humans today are constantly distracted. Our ability to focus on the important decisions in our lives is limited by the information overload we live in, the constant pull of our smartphones and the pressure to do more than we did yesterday. We need to hand over our most mundane tasks to AI, but we should always make sure this is a conscious decision.

Unfortunately, though, it looks like most companies are moving in the opposite direction. We could, of course, reverse course, strip away all autonomy and reduce AI to a support role. But that's not going to happen as long as commercial interests drive us toward full AI autonomy. Why wouldn't a Fortune 50 company want to assume its customers' decision-making power and let AI make all the buying decisions?

Today, there’s no incentive for companies not to work on such AI use cases and there’s no legislation that would prohibit them from doing so either. Companies are already creating products that take unfair advantage of human nature by creating buying habits that are terribly hard to break for most people.

Just think about the Amazon Dash button. It's not much of a stretch to believe that, someday, the customer will hand control of pushing that button over to AI, and, with it, the decision of what to buy and how much.
