
This article was published on August 26, 2014

Machines are taking control of the world, so why stop them?

Wojtek Borowicz is a community evangelist at Estimote, a freelance writer, and a strong believer in the Internet of Things.


That might sound like a nihilistic newspaper headline from the day before Skynet woke up and bombed us, but it’s actually pretty far from that.

Leaving more control in the hands of machines is not only something we cannot stop, but something we shouldn’t stop. Soon we’ll have so many connected devices crunching such enormous amounts of data that effective and independent machine-to-machine communication will become essential to sustaining further growth and development of all things digital.

Stephen Hawking, a brilliant physicist and one of the brightest minds out there, isn’t the biggest fan of the concept of artificial intelligence. He warns that the invention of sentient machines would be the biggest event in the history of mankind… and possibly the last one as well.

In a recent interview with John Oliver, Hawking goes so far as to recall a short story by Fredric Brown about a scientist who builds an intelligent computer. It concludes like this:

“It shall be a question that no single cybernetics machine has been able to answer.”

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

That’s a rather unsettling vision. Fortunately for us, it’s unlikely to come true, at least not in the foreseeable future.

However, what is already real is the Internet of Things (as I tried to explain previously). And what it needs for further development is a strong backbone of machines able to transfer data to each other, then process it and act on it.

In other words: if we want our technology to become smarter (and I guess we do), we also need to grant it more freedom and control – as much as it’s possible to talk about freedom in regard to machines.

Why is that?

Because the model of a human micro-managing machines on an enterprise scale is no longer sustainable, or soon won’t be. Of course, we’ll still be at the top of the structure, making the most important decisions and steering things in a given direction, but below us there will be layers of machines, on top of layers of machines, on top of layers of machines… making independent decisions, because we won’t have the capacity to make them ourselves.

It might sound like an overstatement to those of you who are familiar with the concept of M2M communication. It’s not a new term or idea, and it hasn’t yet prompted us to build anything close to AI.

Keep in mind, though, that with the Internet of Things knocking on our door – with sensors, beacons and transmitters that can be deployed everywhere and connected to the global network – we’re reaching an unprecedented level of complexity. The entire digital infrastructure contains an immeasurable number of possible interactions.

Imagine your car communicating every second with your smartphone, your kid’s tablet, a diagnostics station, your insurance company, sensors in the road, transmitters in traffic lights, GPS satellites in orbit, and other cars in range – exchanging loads of data and taking relevant actions to make the ride smoother, safer and easier. And in the future, who knows, maybe even fully autonomous (after all, the latest Google Car prototype doesn’t even have a steering wheel).


Still think this vision is over the top?

Well, did you know that Santa Clarita, California, started using technology developed by HydroPoint to upgrade its irrigation system with smart sensors? Thanks to that, the city is saving millions of gallons of water despite the drought.

Meanwhile, GE is looking into putting multiple sensors and gigabytes of smart software onto freight trains. The expected result? More trains on the tracks simultaneously, with average velocity improved by about 20 percent. That’s huge!

Not enough? Then what about the fact that a number of insurance companies are already piloting usage-based insurance policies? Right now, they’re mostly tailored to driving behavior, but with the smart home becoming a thing – with all its humidity, air quality, temperature, and ambient light sensors – it’s not a long shot to predict insurance companies taking our daily habits into account as well.

Is there anything to be afraid of? Of course there is. After all, we’re still talking about giving control over loads of sensitive data to a third party that isn’t even a human being!

The problem is, however, not the machine being too smart (HAL 9000 kind of smart), but the exact opposite – the machine being too dumb (“Why won’t you boot up, goddammit!” kind of dumb).


A couple of weeks ago, a Twitter bot for Bank of America’s customer service dove headfirst into a conversation… with two other Twitter bots. Long story short: exactly as expected, it didn’t make any sense at all.

Of course, that’s pretty minor and had zero actual impact on anything, anywhere. But only because it happened in a context where almost no harm could be done. You don’t want machines to make even the tiniest mistake when it comes to your insurance. Or to hospital machinery, freight trains, road traffic, banking, waste disposal, or the hundreds upon hundreds of other things that we’re working on making machines not only capable of but, more importantly, responsible for.

Mistakes will happen. They’re bound to.

No system is ever going to be perfect, whether it’s controlled by humans or machines. In the case of the latter, though, there are three huge challenges lying ahead. What makes them exceptionally difficult is that they’re unlike anything we’ve faced to date.

The first one is maintenance. Right now, tracking and fixing bugs and errors in our electronic systems can be a pain, but we’re handling it quite well.

But it’s much easier when the interface connects a human and a machine. If something breaks down beneath several layers of strictly machine-oriented interfaces, finding out what went wrong and how to make it work again might be daunting.

The second challenge is blame, or to put it more lightly: responsibility. Part of why society works well (I know it’s a risky thing to say, but for the purposes of this piece let’s just assume I’m right) is this chain reaction: something fails, we find the person responsible, and he or she either does something to fix it or is punished accordingly.

It’s a good system because it scales across so many levels. It doesn’t matter whether you’re committing broken code and have to buy shots for the rest of the team, or you’re a corrupt politician who ends up in jail. The basic rule remains the same.

In the case of M2M communication and the Internet of Things, naming a culprit will become much more difficult in a number of cases. The lines showing where my responsibility ends and yours starts will be really blurry. Fast forward 20 years and imagine what a nightmare self-driving cars will be for the insurance industry.

No matter how smart the vehicles become, we will still have accidents, although most likely far fewer than today. But who is to blame for them?


Medical malpractice won’t disappear either. But by then it might be too late to start wondering whether it was the heart rate monitor, the hospital’s data center, the system relaying information between the two, or simply human negligence that caused someone to suffer.

The third challenge, maybe the biggest one, is ethics. We’re capable of taking actions based on a given set of values. You could argue that machines do that all the time too. But their values are strings of 0s and 1s. Nothing as abstract as empathy, mercy or justice.

And if the systems that will soon govern a huge part of our daily lives are to do so well, they need to account for our values. It’s a super-complex philosophical problem, so instead of going deeper, I’ll just leave you with a quote from an amazing article about this issue by Patrick Lin: “Can ethics be reduced to algorithms?”

One more interesting thing Hawking says in the interview I mentioned at the beginning is that intelligent machines could design improvements for themselves. Well, maybe they should start now, because if we want to make a leap forward with technology in the coming years, we will need them to become way smarter. Artificial intelligence may not cut it – we might need a real one.

Featured image credit: Flickr/i k o

