Human-centric AI news and analysis

US Army plans to bring human-AI interaction to the battlefield

The system will act as a "teammate" to soldiers

Killer robots may remain a dystopian vision of the future for now, but another military application of AI could arrive on the battlefield much sooner.

Known as the Aided Threat Recognition from Mobile Cooperative and Autonomous Sensors (ATR-MCAS), the system is being developed by the US Army to transform how the military plans and conducts operations.

It consists of a network of air and ground vehicles equipped with sensors that identify potential threats and autonomously notify soldiers. The information collected would then be analyzed by an AI-enabled decision support agent that can recommend responses, such as which threats to prioritize.
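The article gives no details of how ATR-MCAS actually scores threats, so the following is a purely illustrative sketch of what a decision-support agent that ranks sensor reports might look like. Every name, field, and weight here is a hypothetical stand-in, not anything from the Army's system:

```python
from dataclasses import dataclass

@dataclass
class ThreatReport:
    """A hypothetical sensor report; all fields are illustrative."""
    source: str        # e.g. "air-vehicle-2" or "ground-vehicle-1"
    kind: str          # e.g. "vehicle" or "personnel"
    distance_m: float  # estimated distance to the threat, in meters
    confidence: float  # classifier confidence in [0, 1]

def priority(report: ThreatReport) -> float:
    """Toy scoring rule: nearer, higher-confidence threats rank higher."""
    # Inverse-distance weighting, scaled by detection confidence.
    return report.confidence * (1000.0 / max(report.distance_m, 1.0))

def recommend(reports: list[ThreatReport]) -> list[ThreatReport]:
    """Return reports ordered from highest to lowest priority."""
    return sorted(reports, key=priority, reverse=True)

reports = [
    ThreatReport("air-vehicle-2", "vehicle", distance_m=800, confidence=0.9),
    ThreatReport("ground-vehicle-1", "personnel", distance_m=150, confidence=0.6),
]
ranked = recommend(reports)
print([r.source for r in ranked])  # nearest credible threat listed first
```

In this toy version the "recommendation" is just a sorted list handed back to a human, which matches the article's framing: the agent suggests which threats to address first, and the soldier decides.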

The system was developed by the Army’s Artificial Intelligence Task Force (AITF), which was activated last year to improve the Army’s connections with the broader AI community.

[Read: Everything you need to know about the drone used by the US to assassinate an Iranian general]

“ATR-MCAS is different than existing autonomous system efforts because it is not limited to specific-use cases,” Lieutenant Colonel Chris Lowrance, the AITF’s Autonomous Systems Lead, explained in a statement.

“It can be used to perform reconnaissance missions across the area of operations, or maintain a fixed position while performing area defense surveillance missions.”

The system could also be used for route reconnaissance, screening missions, and verification of high-value targets. The Army’s press release claims that this adaptable design “increases soldier lethality and survivability”.

Robot wars of the future

ATR-MCAS remains some way from operational deployment, however. Currently, the Army is still training the algorithms on test data to help them better identify and classify objects.

Lowrance told FedScoop that he ultimately envisions the system acting as a “teammate” to soldiers that helps reduce their “cognitive load”.

His language evokes the concept of augmented intelligence, which can offer a more human-centered vision of AI deployments. It may also somewhat allay concerns that such technology could be deployed in killer robots, a prospect that is attracting a growing range of opponents.

Last week, Democratic presidential candidate Andrew Yang added his name to that list when he called for a ban on lethal autonomous weapons.

Published February 4, 2020 — 18:35 UTC