This article was published on February 25, 2020

Pentagon unveils toothless ethical principles for using AI in war

The guidelines won't allay concerns about military AI


Image by: The U.S. Army

The Pentagon has announced five ethical principles for the use of AI by the US military. Defense Secretary Mark Esper said the guidelines would accelerate the adoption of lawful and ethical uses of the technology in both combat and non-combat operations, but the hazy proposals contain little detail about how they'll be applied on the battlefield.

The first principle calls for servicepeople to “exercise appropriate levels of judgment and care” when using AI systems, a requirement that is open to numerous interpretations. It sets the tone for the ambiguous language that follows throughout the guidelines.

The Department of Defense (DoD) states that all AI capabilities “will have explicit, well-defined uses” subject to ongoing testing, but there are no restrictions on how the technology can be used in warfare.

Nor are there any details on specific applications of the technology by the military, despite growing calls for a ban on lethal autonomous weapons. The principles do promise “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior,” but this sounds like little more than adding an off switch.

[Read: Trump’s new budget pours billions into AI and quantum R&D]


The DoD also pledges to “take deliberate steps to minimize unintended bias in AI capabilities,” but provides no information on how this will be done.

“I worry that the principles are a bit of an ethics-washing project,” Lucy Suchman, an anthropologist who studies the role of AI in warfare, told The Associated Press.

Silicon Valley’s tricky ties with the military

The principles were recommended by the Defense Innovation Board, a group of private-sector technology executives chaired by former Google CEO Eric Schmidt.

Schmidt has defended the relationship between Silicon Valley and the military, arguing that the industry could reach an agreement on principles for working with the government.

Google employees have proven less enthusiastic. In 2018, their protests over the company’s involvement with Project Maven, a DoD program that brought private-sector expertise to military AI, led Google to announce it wouldn’t renew the contract.

Here’s the full list of principles Schmidt hopes will win over the critics:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

