
This article was published on November 8, 2018

The US Air Force is working on general artificial intelligence

The Air Force's recruitment slogan used to be "Aim High." Just saying.


Image credit: Neil Conway

The US Air Force has a penchant for developing officers, but the ‘general’ it’s working on right now doesn’t have any stars on its uniform: it’s general artificial intelligence (GAI).


The term GAI refers to an artificial intelligence with human-level or better cognition. Basically, when people argue that today’s AI isn’t “real AI,” what they have in mind is GAI: machines that actually think.

Deep within the cavernous expanses of the US Air Force research laboratories, a scientist named Paul Yaworsky toils away in a quest to make America’s aircraft intelligent beings of sheer destruction. Or maybe he’s trying to bring the office coffee pot to life; we really don’t know his endgame.

What we do know comes from a preprint research paper we found on arXiv that was just begging for a hyperbolic headline. Maybe “US Air Force developing robots that can think and commit murder,” or something like that.

In reality, Yaworsky’s work appears to lay the foundation for a future approach to general intelligence in machines. He proposes a framework by which the gap between common AI and GAI can be bridged.

According to the paper:

We address this gap by developing a model for general intelligence. To accomplish this, we focus on three basic aspects of intelligence. First, we must realize the general order and nature of intelligence at a high level. Second, we must come to know what these realizations mean with respect to the overall intelligence process. Third, we must describe these realizations as clearly as possible. We propose a hierarchical model to help capture and exploit the order within intelligence.

At the risk of spoiling the ending for you, this paper proposes a hierarchy for understanding intelligence – a roadmap for machine learning developers to pin above their desks, if you will – but it doesn’t have any algorithms buried in it that’ll turn your Google Assistant into Data from Star Trek.

What’s interesting about it is that no accepted or well-understood route to GAI currently exists. Yaworsky addresses this gap in his research:

Perhaps the right questions have not yet been asked. An underlying problem is that the intelligence process is not understood well enough to enable sufficient hardware or software models, to say the least.

In order to explain intelligence in a way beneficial to AI developers, Yaworsky breaks it down into a hierarchical view. His work is early, and it’s beyond the scope of this article to explain his research on high-level intelligence (for a deeper dive: here’s the white paper), but it’s as good a trajectory for the pursuit of GAI as we’ve seen.
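To make that hierarchical view concrete, here’s a minimal sketch, written by us purely as an illustration and not taken from the paper, of how a layered model of intelligence might be structured in code. The level names and processing functions are placeholders of our own invention; the paper proposes a conceptual hierarchy, not an implementation.

```python
# Illustrative sketch only -- the paper describes a conceptual hierarchy
# of intelligence, not code. Level names here are hypothetical placeholders.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Level:
    """One layer in a hypothetical intelligence hierarchy."""
    name: str
    process: Callable[[object], object]  # transforms input into a more abstract form


@dataclass
class Hierarchy:
    """Stacks levels so raw input flows upward into abstraction."""
    levels: List[Level] = field(default_factory=list)

    def perceive(self, signal: object) -> object:
        # Pass the signal up through each level in order,
        # abstracting a little more at each step.
        for level in self.levels:
            signal = level.process(signal)
        return signal


# Hypothetical example: raw signal -> features -> concepts
mind = Hierarchy([
    Level("sensation", lambda x: f"raw({x})"),
    Level("perception", lambda x: f"features({x})"),
    Level("cognition", lambda x: f"concepts({x})"),
])

print(mind.perceive("photons"))  # concepts(features(raw(photons)))
```

The design idea the sketch tries to capture is the one the paper gestures at: each level only talks to the one below it, trading raw detail for abstraction as information moves up the stack.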

Related: One machine to rule them all: A ‘Master Algorithm’ may emerge sooner than you think

If we can figure out how high-level human intelligence works, it’ll go a long way toward informing computer models for GAI.

And, in case you’re skimming this article to find out if the US military is on the verge of unwittingly loosing an army of killer robots in the near future, here’s a quote from the paper to dispel your worries:

What about the concerns of AI running amok and taking over human-kind? It is believed that AI will someday become a very powerful technology. But as with any new technology or capability, problems tend to crop up. Especially with respect to general AI, or artificial general intelligence (AGI), there is tremendous potential, for both good and bad.

We will not get into all the hype and speculation here, but suffice it to say that many of the problems we hear about today concerning AI are due to sketchy predictions involving intelligence. Not only is it difficult to make good scientific predictions in general, but when the science in question involves intelligence itself, as it does with AI, then the predictions are almost impossible to make correctly. Again, the main reason is because we do not understand intelligence well enough to enable accurate predictions. In any event, what we must do with AI is proceed with caution.
