This article was published on June 24, 2021

The future of US AI policy may hinge on a pretend war against a fictional China

Fake war. Real consequences.


War is coming. Later this year the US military will run its most advanced wargame campaign ever, facing off against a fictionalized version of China.

The battles will be fake, but the results should provide the government with everything it needs to justify the mass development of lethal autonomous weapons systems (LAWS).

The era of government-controlled killer robots is upon us.

Up front: US military leaders have increasingly come out in support of taking humans out of the loop when it comes to AI-controlled weapons. And there’s nothing in current US policy to stop that from happening.

Per the Federation of American Scientists:

Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS. Although the United States does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the United States may be compelled to develop LAWS in the future if potential U.S. adversaries choose to do so. At the same time, a growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.

The Army has a program called “Project Convergence.” Its mission is to tie the various military data, information, command, and control domains together to streamline the battlefield.

A deep-dive into modern military tactics is beyond the scope of this article – but a short explanation is in order.

Background: Modern command and control is dominated by something called “the OODA loop.” OODA stands for “observe, orient, decide, and act.”

The OODA loop stops commanders from following the enemy into traps, keeps us from firing on civilians, and is our strongest shield against friendly-fire incidents.

The big idea: US military leaders fear the traditional human decision-making process may become obsolete because humans can’t react as fast as an AI. The OODA loop, theoretically, can be automated.
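To make that concrete, here’s a minimal, purely illustrative sketch in Python of what the difference looks like: the same observe-orient-decide-act cycle, with and without a human gate on the final decision. Nothing here reflects any real military system; the function names and thresholds are invented for illustration only.

```python
from dataclasses import dataclass


@dataclass
class Observation:
    target_detected: bool
    confidence: float  # detection confidence, 0.0 to 1.0


def orient(obs: Observation) -> dict:
    # Orient: put the raw observation in context (threat assessment).
    return {"is_threat": obs.target_detected and obs.confidence > 0.9}


def decide(context: dict) -> str:
    # Decide: propose an action based on the oriented picture.
    return "engage" if context["is_threat"] else "hold"


def act(decision: str) -> None:
    print(f"Action taken: {decision}")


def human_approves(decision: str) -> bool:
    # Human in the loop: a person must sign off before any lethal action.
    answer = input(f"Proposed action '{decision}': approve? [y/N] ")
    return answer.strip().lower() == "y"


def ooda_cycle(obs: Observation, autonomous: bool) -> None:
    # Observe -> orient -> decide -> act, with or without a human gate.
    decision = decide(orient(obs))
    if decision == "engage" and not autonomous and not human_approves(decision):
        decision = "hold"
    act(decision)


# Human-in-the-loop: the cycle stalls at human reaction speed.
ooda_cycle(Observation(target_detected=True, confidence=0.95), autonomous=False)

# Fully automated: the same cycle closes at machine speed, with no one to say no.
ooda_cycle(Observation(target_detected=True, confidence=0.95), autonomous=True)
```

The only difference between the two runs is that single approval gate; removing it is what “taking humans out of the loop” means in practice.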

And that’s why Project Convergence will conduct a series of wargames this fall against a fictional country meant to represent China.

Some US military leaders fear China is developing LAWS technology and they assert that the People’s Republic won’t have the same ethical concerns as its potential adversaries.

In other words: The US military is planning to test our current forces and AI systems – which require a human in the loop – against forces whose AI systems don’t.

Quick take: Project Convergence is playing chess against itself here. The fictional country US forces will wargame against in the fall may resemble China, but it was developed and simulated by the Pentagon.

What’s most important here is that you don’t have to be a military genius to know that the country that skips OODA and simply sends out entire fleets, armies, and squadrons of hair-trigger LAWS is likely to dominate the battlespace.

This is exactly what every AI ethicist has been warning about. Taking humans out of the loop and allowing LAWS to make the kill decision is more than just a slippery slope. It’s the next atomic bomb.

But when we “lose” the fight against the fake China, it’ll certainly be easier to sell Congress on taking humans and OODA out of the loop.
