This article was published on December 11, 2019

Report: Palantir took over Project Maven, the military AI program too unethical for Google



Palantir, the surveillance company founded by Peter Thiel, has unsurprisingly stepped up to fill the void left behind after Google abandoned Project Maven earlier this year over ethical concerns.

Project Maven, for those unfamiliar, is a Pentagon program to build an AI-powered surveillance platform for unmanned aerial vehicles. Basically, the job is to build a system for the US military to deploy and monitor autonomous drones.

This system would, supposedly, give the government real-time battlefield command and control and the ability to track, tag, and spy on targets without human involvement. The limited, unclassified information available makes it appear as though the project stops just short of functioning as an AI weapons system capable of firing on self-designated targets as they become available in the battle space.

Google previously held the contract but, allegedly due to employee pushback, the Mountain View company chose not to renew it when it expired in March. At least a dozen employees walked out in 2018 after Google Cloud CEO Diane Greene announced the company would renew the contract. Subsequent internal pressure led to Google ultimately declining to bid for the contract again.

The Pentagon didn’t have to look very far to find a company willing to pick up where Google’s ethics left off. Palantir, the company that powers ICE and CBP’s surveillance networks and builds software for police that circumvents the warrant process, is reportedly chugging away on Project Maven.

Business Insider broke the news today. Reporter Becky Peterson writes that a person familiar with the project said:

Palantir is working with the Defense Department to build artificial intelligence that can analyze video feeds from aerial drones … Internally at Palantir, where names of clients are kept close to the vest, the project is referred to as “Tron,” after the 1982 Steven Lisberger film.

Previously, Peter Thiel, the founder of Palantir and longtime Donald Trump associate, had referred to Google’s withdrawal from the project as tantamount to treason. Speaking at the National Conservatism Conference in Washington in July 2019, Thiel referenced Google’s decision to push forward with Project Dragonfly (a controversial Google program to build a censored version of Search for China) while abandoning the Pentagon’s Project Maven. He said the CIA should investigate Google:

Is it because they consider themselves to be so thoroughly infiltrated that they have engaged in the seemingly treasonous decision to work with the Chinese military and not with the US military… because they are making the sort of bad, short-term rationalistic [decision] that if the technology doesn’t go out the front door, it gets stolen out the backdoor anyway?

It’s worth pointing out that Google was allegedly working with the Chinese government, not necessarily the Chinese military (which would have no reason to develop a censored search engine with Google).

Thiel’s supposition, one echoed by Palantir CEO Alex Karp, is that it’s big tech’s patriotic duty to do whatever the US government tells it. Of course, Thiel hasn’t always acted that way: he’s invested in Chinese startups, bid for Chinese contracts, and worked with Chinese agencies before.

However, what Thiel fails to mention, time and again when he invokes this patriotism rhetoric, is that the US government has no official policy on the ethical use of AI, nor any publicly available guidance on how the military is authorized to use it.

We stand on the cusp of AI-powered warfare and there’s no Geneva Convention for keeping countries like Russia, the US, and China from deploying legions of autonomous killing machines designed to seek and destroy civilian populations. While, as Thiel often puts it, it is important to maintain an edge on our nation’s potential enemies, it’s also important to ensure we’re acting with the bare minimum of ethics.

There are plenty of technologies we don’t use for warfare because they’re unethical. We don’t use mustard gas or white phosphorus anymore, for example, because they’re inhumane. Autonomous killing machines should fall into the same category. And, though the Pentagon claims Project Maven is just about taking pictures, any developer can tell you that training a drone to aim at something works about the same whether it’s carrying a camera or a weapon. There’s probably more to Project Maven than meets the eye.

Until the US government and its allies come up with firm regulations for the military use of AI, it seems unethical for anyone to build dual-use AI for it.

