The People’s Republic of China recently published a “position paper” detailing the nation’s views on military AI regulation. Having thoroughly perused it, we’ve come to the following conclusion: it’s gibberish.
Up front: The first thing you want to know when a global superpower releases official government documentation detailing its views on the use of artificial intelligence for military applications is whether the signatory intends to develop lethal autonomous weapons systems (LAWS).
China’s position paper makes absolutely no mention of restricting the use of machines capable of choosing and firing on targets autonomously. Instead, it dances around the topic with obscuring language.
Per the paper:
In terms of law and ethics, countries need to uphold the common values of humanity, put people’s well-being front and center, follow the principle of AI for good, and observe national or regional ethical norms in the development, deployment and use of relevant weapon systems.
Neither the US nor the PRC currently has any laws, rules, or regulations restricting the development or use of military LAWS.
Background: The paper’s rhetoric may be empty, but there’s still a lot we can glean from its contents.
Research analyst Megha Pardhi, writing for the Asia Times, recently opined it was intended to signal that China’s seeking to “be seen as a responsible state,” and that it may be concerned over its progress in the field relative to other superpowers.
According to Pardhi:
Beijing is likely talking about regulation out of fear either that it cannot catch up with others or that it is not confident of its capabilities. Meanwhile, formulating a few commonly agreeable rules on weaponization of AI would be prudent.
Our take: According to analysts of the Chinese military, the PRC’s AI ambitions on the battlefield aren’t necessarily focused on LAWS.
In fact, Colonel Yuan-Chou Jin, an associate professor at the Graduate Institute of China Military Affairs Studies and the former director of the Army Command Headquarters’ Intelligence Division, compares the PRC’s planned use of AI to the Third Reich’s Blitzkrieg tactics during World War II.
Per an article authored by Yuan-Chou Jin in The Diplomat:
Looking back in history, Wehrmacht highlighted the Blitzkrieg in its frontal attack to beat the rivals based upon its relative advantage of speed during WWII.
For Chinese familiar with martial arts, a relevant well-known phrase captures the same: “There is no impregnable defense, but for the swiftness.” Speed in history has been strongly featured as a critical factor that determines the outcome of war.
That is exactly the case of AI. One of the advantages of AI is to speed up military decision making. More specifically, AI is particularly fit for blitz tactics. In the scenario of the PLA waging a war against Taiwan, distance makes instant U.S. reinforcement difficult.
Blitzkrieg tactics worked during the Second World War because they overwhelmed the enemy. They involved striking opponents’ airbases to render their planes ineffective while still on the ground, then sending high-speed armor crashing through previously “impassable” terrain to surprise unprepared infantry.
The final act of every Blitzkrieg attack came when the lagging German infantry moved into the battlefield as the armor moved out, seeking out and suppressing any remaining bastions of resistance.
Speed and decisiveness were the essential factors driving the tactic’s success. And, arguably, the only thing that could have made Blitzkrieg faster would have been handing target acquisition and elimination over to an AI.
Though neither the colonel’s article nor the PRC’s position paper mentions LAWS directly, what they don’t say is what’s really at the heart of the issue.
The global community has every reason to believe, and fear, that both China and the US are actively developing LAWS.