
This article was published on May 30, 2018

Opinion: There’s more to the Google military AI project than we’ve been told

Google, a company whose motto used to be “don’t be evil,” has had its ethics questioned lately over its insistence on developing AI for the Pentagon. If you’re among the many people who don’t understand why the Mountain View company would go against the grain and risk such damage to its reputation, you’re not alone.

It’s not the money. According to a report from Gizmodo, Google is getting around $9 million. Sure, for most of us that would set us up for life, but let’s not forget that Google is worth nearly a trillion dollars. It can afford to skip a project that doesn’t suit its ethical makeup.

And it certainly isn’t the prestige: you don’t hear many pundits calling on big tech companies to get more deeply involved with the military.

Google’s involvement in Project Maven, which it claims is little more than using TensorFlow to build AI to sort through some old declassified drone footage, remains an enigma.

Its defense, that the project is limited to “non-offensive” uses, smacks of the useless “guns don’t kill people” argument. Except the particular worry in this situation is that the military will develop AI that doesn’t need human guidance to kill people; the concern is precisely that AI can and will kill people. Whether it uses guns, bombs, lasers, or robot kung-fu doesn’t really matter.

TensorFlow is an open-source platform. It beggars belief that the Pentagon doesn’t have personnel capable of handling a simple image processing project without the assistance of Google engineers. Actually, I’ll just come right out and say it: it’s a bunch of malarkey.
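For context on what a “simple image processing project” in TensorFlow actually looks like, here is a minimal sketch: a toy classifier trained on a public handwritten-digit dataset. It’s purely illustrative; the dataset, model, and layer sizes are stand-ins of my own choosing and have nothing to do with whatever Maven’s actual pipeline involves.

```python
# Illustrative only: a toy TensorFlow image classifier on a public dataset (MNIST).
# This is a generic example of the kind of work being described, not Project Maven code.
import tensorflow as tf

# Load a public demo dataset of handwritten digits as a placeholder for "image data."
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A small feed-forward network; the layer sizes here are arbitrary choices.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # unroll each image into a vector
    tf.keras.layers.Dense(128, activation="relu"),    # one hidden layer
    tf.keras.layers.Dropout(0.2),                     # light regularization
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)      # train
model.evaluate(x_test, y_test)             # report accuracy on held-out images
```

That’s roughly the scale of effort involved in standing up a basic image classifier with the open-source tooling; the hard parts are data, labeling, and deployment, not access to Google engineers.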

As an information systems technician in the US Navy, I worked with both military and civilian computer specialists at the Pentagon, and with others who were assigned to the Pentagon but worked off-site. It’s my opinion that the US military is more than qualified to handle its own TensorFlow questions.

It feels like a situation where we don’t have all the facts.

The government works with private-sector companies all the time — that’s nothing new. It’s probably safe to say that most people, with the exception of some extremists, aren’t upset that Google has the audacity to work with the military. Instead, the outrage is over the fact that both entities choose to press on without addressing any of our fears.

Project Maven’s purpose has been somewhat obscured in the wave of coverage it’s received. It isn’t a project dedicated to sorting through drone footage; that was just its first mission. It was originally called the Algorithmic Warfare Cross-Functional Team. It’s not a one-shot deal relying on Google’s help; instead, the Mountain View company is part of early tests to determine how feasible it is for the government to adapt private-sector AI for military purposes.

Credit: The Department of Defense
Hi kids! Look! It’s the Algorithmic Warfare Fun Bunch! These are definitely NOT killer robots. Seriously, this is the actual logo for the project. In Latin: “Our job is to help.”

The extent of Google’s assistance remains unknown. While it continues to maintain it isn’t working on weapons, thousands of its own employees remain uncomfortable with the level of involvement they’re aware of. They’ve written a petition asking the company to end the contract, and at least a dozen have quit.

Building AI isn’t the same thing as making a knife. Both can be used for good or ill, but a knife can’t be programmed to kill specific types of people, and it can’t do anything at all unless a human wields it. There should be an ethical responsibility on the part of the US government and private-sector companies to regulate the development and use of AI, especially when it comes to warfare.

But rather than stop and address these issues, the government and Google are pressing on with a quasi-secretive mission tied to a larger project that nobody’s really discussing.

I call into question the motives of Google, a company that can explain exactly how DeepMind’s AI beat the world’s greatest Go players but suddenly can’t find the right words to make sense out of its involvement with a government project to build a basic neural network for image recognition.

And as a citizen and veteran, I call into question the motives of a federal government that won’t reveal the nature of an unclassified program it claims requires spending millions of taxpayer dollars on an open-source AI platform.

We reached out to Google and to the Pentagon’s public relations department; neither responded to requests for comment.
