
This article was published on December 1, 2017

AI and the future of drones

For many, drones are simply a novel gadget, a fun toy to fly around the neighborhood, snapping aerial images or even spying on neighbors. But unmanned aerial vehicles (UAVs), rapidly growing in popularity, have already been put to work in a variety of scenarios far beyond their use as robotic toys.

In just a few years, drones have enhanced and redefined a variety of industries. They are used to quickly deliver goods, broadly study the environment and scan remote military bases. Drones have been employed in security monitoring, safety inspections, border surveillance and storm tracking. They have even been armed with missiles and bombs for military strikes, protecting the lives of armed-forces personnel who would otherwise be required to enter combat zones.

Entire companies now exist to provide drones for commercial use. The potential applications of these remote-controlled flying robots seem nearly limitless.


“Drone-captured data is an innovative solution for delivering sophisticated analytics to stakeholders and provides an affordable way to improve estimating, designing, progress tracking, and reporting for worksites,” Drone Base’s Patrick Perry wrote in a blog post.

Today’s drones are still limited by their human controllers, but the next generation will be powered by artificial intelligence. AI allows machines such as drones to make decisions and operate on behalf of their human controllers. But when a machine gains the capacity to make decisions and “learn” to function independently of humans, the potential benefits must be weighed against the possible harm that could befall entire societies.

When it comes to AI, we are entering unknown territory, and the only guide is our imagination. Some of the brightest minds of the past century have already forecast what might happen. Could we be facing a future in which an army of Terminator-like cyborgs plunges the world into nuclear holocaust?

For many, the threat of autonomous robots is nothing more than fiction, famously imagined by early American sci-fi writer Isaac Asimov. After all, I, Robot is more than a popular Will Smith action film: between 1940 and 1950, Asimov published a series of short stories depicting the future interactions of humans and robots.

It was in this collection that the author introduced us to the Three Laws of Robotics, the set of rules that dictated how AI could harmoniously co-exist with man. For those unfamiliar, the Three Laws state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
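
Rendered as code, the hierarchy is simply a strict priority ordering. The sketch below is a playful illustration only; the Action fields are invented for the example, and nothing here resembles a real robotics stack:

```python
# Toy illustration of the Three Laws as a strict priority ordering.
# All fields are hypothetical; this mirrors the list above, nothing more.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the First Law
    disobeys_order: bool  # would violate the Second Law
    endangers_self: bool  # would violate the Third Law

def choose(actions):
    """Pick the action that breaks the lowest-priority law possible:
    the First Law outranks the Second, which outranks the Third."""
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

options = [
    Action("carry out the strike", harms_human=True, disobeys_order=False, endangers_self=False),
    Action("refuse the order", harms_human=False, disobeys_order=True, endangers_self=False),
]
print(choose(options).name)  # "refuse the order": disobedience beats harming a human
```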

Sure, the Three Laws create compelling fiction, but Asimov introduced readers to a very real and dangerous concept. When a machine is able to function independently of humans, if it can learn and make choices based on its advancing knowledge, what prevents it from overtaking a mortal society?

As AI jumps from the pages of science fiction into reality, we are faced with real-life scenarios in which those Three Laws could come in handy. What happens when robotic military weapons are deployed with the potential to kill millions in a single raid? What if these autonomous killers evolve to the point of ignoring the orders of their creators? In 2013, Mother Jones examined the potential consequences of autonomous machines:

“We are not talking about things that will look like an army of Terminators,” Steve Goose, a spokesman for the Campaign to Stop Killer Robots, told the publication. “Stealth bombers and armored vehicles—not Terminators.”

And while the technology was forecast to be “a ways off” in 2013, AI weapons, specifically drones, are approaching much sooner than anticipated. Though the Pentagon issued a 2012 directive calling for the establishment of “guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapons systems,” unmanned combat drones have already been developed and even deployed along the South Korean border. The developments have led major figures in the tech industry – including well-known names such as Elon Musk – to call for a ban on “killer robots.”

“We do not have long to act,” Musk, Stephen Hawking, and 114 other specialists wrote. “Once this Pandora’s box is opened, it will be hard to close.”

The Future of Life Institute drove the point home with its recent release of Slaughterbots, a terrifying sci-fi short film that explores the consequences of a world with unregulated autonomous killing machines.

“I participated in the making of this film because it makes the issues clear,” Stuart Russell, an AI researcher at UC Berkeley and scientific advisor for the FLI, told Gizmodo. “While government ministers and military lawyers are stuck in the 1950s, arguing about whether machines can ever be ‘truly autonomous’ or are really ‘making decisions in the human sense’, the technology for creating scalable weapons of mass destruction is moving ahead. The philosophical distinctions are irrelevant; what matters is the catastrophic effect on humanity.”

The film, set in the near future, depicts the launch of an AI-powered killer drone that eventually falls into the wrong hands, becoming an assassination tool, targeting politicians and thousands of university students. The production supports FLI’s call for a ban on autonomous killing machines. That and similar movements were the focus of the recent United Nations Convention on Conventional Weapons, attended by representatives from more than 70 nations.

Are we too late to stop a future robotic apocalypse? The technology is already available, and Russell warns that the failure to act now could be disastrous. According to him, the window to prevent such global destruction is closing fast.

“This short film is more than just speculation; it shows the results of integrating and miniaturizing technologies that we already have,” Russell warns in the film’s conclusion. “[AI’s] potential to benefit humanity is enormous, even in defense. But allowing machines to choose to kill humans will be devastating to our security and freedom – thousands of my fellow researchers agree.”

Russell is correct in at least two ways. The technology is already available. Roboticists from Carnegie Mellon University published a paper earlier this year titled “Learn to Fly by Crashing.” In it, the team describes letting an AR Drone 2.0 teach itself to navigate 20 different indoor environments through trial and error. In just 40 hours of flying time, the drone mastered its aerial surroundings through 11,500 collisions and corrections.

“We build a drone whose sole purpose is to crash into objects,” the researchers wrote. “We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation.”
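
To make that recipe concrete, here is a minimal sketch of the idea in PyTorch: image crops from crash trajectories become negative examples, crops from safe stretches of the same flights become positives, and a small network learns to score which direction looks safe to fly. The tiny architecture and the three-way steering rule are simplifying assumptions for illustration, not the CMU team’s exact setup.

```python
# Hedged sketch of self-supervised "learn by crashing" navigation.
# Assumes crash/non-crash image crops have already been collected.
import torch
import torch.nn as nn

class SafetyNet(nn.Module):
    """Tiny CNN that scores an image crop: higher = safer to fly toward."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def train_step(model, optimizer, crops, labels):
    """One supervised step: crops sampled near collisions get label 0.0,
    crops from safe segments of the same flights get label 1.0."""
    optimizer.zero_grad()
    logits = model(crops).squeeze(1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

def steer(model, left, center, right):
    """Naive policy: move toward the crop the classifier rates safest."""
    with torch.no_grad():
        scores = [model(c.unsqueeze(0)).item() for c in (left, center, right)]
    return ("left", "forward", "right")[scores.index(max(scores))]
```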

Russell was also right about the potential and actual benefits of AI-powered drones. Intel recently employed the technology to gather video and other data on wildlife, helping scientists conduct important research more efficiently and less invasively.

“Artificial intelligence is poised to help us solve some of our most daunting challenges by accelerating large-scale problem-solving, including unleashing new scientific discovery,” Naveen Rao, vice president and general manager of Intel’s AI products group, said in a statement.

Likewise, GE subsidiary Avitas Systems has begun deploying drones to automate inspections of infrastructure, including pipelines, power lines and transportation systems. The AI-powered drones not only perform the surveillance more safely and efficiently, but their machine-learning technology can also instantly identify anomalies in the data.  
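
Avitas has not published its pipeline, but the general machine-learning pattern is straightforward: fit a model to footage of healthy infrastructure, then flag frames that deviate from it. Below is a minimal sketch under that assumption, using scikit-learn’s isolation forest and pretending each frame has already been reduced to a feature vector by an upstream vision model:

```python
# Hedged sketch of anomaly detection on drone inspection data.
# The feature vectors are random stand-ins for real per-frame features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 500 feature vectors from frames of healthy pipeline (stand-in data).
normal_frames = rng.normal(size=(500, 64))

# Fit on normal footage only, so unusual frames score as outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_frames)

# Frames from today's flight; -1 marks a frame worth a human look.
new_frames = rng.normal(size=(20, 64))
for i, flag in enumerate(detector.predict(new_frames)):
    if flag == -1:
        print(f"frame {i}: possible anomaly, route to an inspector")
```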

BNSF Railway has also utilized drones in its inspections.

“They can pre-program [the drone] to actually follow the tracks and while it’s following the tracks,” TE Connectivity’s Pete Smith told Aviation Today. “It’s collecting data. It has cameras on board taking pictures of the tracks. It’s taking huge amounts of data; these are high-resolution cameras. And what’s happening now is they’re using artificial intelligence to do analytics on the data.”
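
The “pre-program it to follow the tracks” step essentially turns a surveyed track centerline into a waypoint mission. Here is a toy sketch of that resampling with made-up coordinates; a real deployment would emit the autopilot vendor’s own mission format:

```python
# Toy sketch: resample a track centerline into evenly spaced waypoints.
import math

def waypoints_along(track, spacing_m):
    """Resample a polyline of (x, y) points (meters) into waypoints
    a fixed distance apart along the line, for an autonomous survey."""
    points, carry = [track[0]], 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing_m - carry               # distance to next waypoint
        while d <= seg:
            t = d / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing_m
        carry = (carry + seg) % spacing_m   # distance since last waypoint
    return points

# Example: a gently curving stretch of track, one photo every 25 m.
track = [(0.0, 0.0), (100.0, 5.0), (200.0, 15.0), (300.0, 30.0)]
for x, y in waypoints_along(track, 25.0):
    print(f"fly to x={x:.1f} m, y={y:.1f} m and capture an image")
```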

So are AI-powered drones more helpful or harmful? It all depends on what we do next. If we enter the realm of machine learning wisely, the potential benefits are too numerous to count, but the risks of failing to act are immense.

It’s no wonder that Musk, Hawking and their group of signatories are calling for a United Nations ban on autonomous weapons. If nothing else, international rules governing the use of AI in weapons are needed to protect mankind from its own creation. We already have an ideal guide; we need only take a page from Asimov.
