The US Air Force has a penchant for developing officers, but the “general” it’s working on right now doesn’t have any stars on its uniform: it’s general artificial intelligence (GAI).
The term GAI refers to an artificial intelligence with human-level or better cognition. When people argue that today’s AI isn’t “real AI,” what they usually mean is that it isn’t GAI: machines that think.
Deep within the cavernous expanses of the US Air Force research laboratories, a scientist named Paul Yaworsky toils away endlessly in a quest to make America’s aircraft intelligent beings of sheer destruction. Or maybe he’s trying to bring the office coffee pot to life; we really don’t know his end-game.
What we do know comes from a pre-print research paper we found on arXiv that was just begging for a hyperbolic headline. Maybe “US Air Force developing robots that can think and commit murder,” or something like that.
In reality, Yaworsky’s work appears to lay the foundation for a future approach to general intelligence in machines. He proposes a framework by which the gap between common AI and GAI can be bridged.
According to the paper:
We address this gap by developing a model for general intelligence. To accomplish this, we focus on three basic aspects of intelligence. First, we must realize the general order and nature of intelligence at a high level. Second, we must come to know what these realizations mean with respect to the overall intelligence process. Third, we must describe these realizations as clearly as possible. We propose a hierarchical model to help capture and exploit the order within intelligence.
At the risk of spoiling the ending for you, this paper proposes a hierarchy for understanding intelligence (a roadmap for machine learning developers to pin above their desks, if you will), but it doesn’t have any algorithms buried in it that’ll turn your Google Assistant into Data from Star Trek.
Whatâs interesting about it is that there currently exists no accepted or understood route to GAI. Yaworsky addresses this dissonance in his research:
Perhaps the right questions have not yet been asked. An underlying problem is that the intelligence process is not understood well enough to enable sufficient hardware or software models, to say the least.
To explain intelligence in a way that’s useful to AI developers, Yaworsky breaks it down into a hierarchical view. His work is early, and it’s beyond the scope of this article to explain his research on high-level intelligence (for a deeper dive, here’s the white paper), but it’s as good a trajectory for the pursuit of GAI as we’ve seen.
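To make the idea of a layered view a little more concrete, here’s a minimal sketch of how such a hierarchy might be expressed as a data structure. To be clear, this is our illustration, not code from the paper: the layer names and their ordering are assumptions we’ve made for the example, not Yaworsky’s terminology.

```python
# Hypothetical sketch only. The paper describes a conceptual hierarchy in
# prose; the layer names and ordering below are illustrative guesses.
from dataclasses import dataclass


@dataclass
class Layer:
    """One level in a layered view of intelligence."""
    name: str
    description: str


# An assumed ordering from raw signal to abstract thought, lowest level first.
HIERARCHY = [
    Layer("sensation", "raw signals arriving from the environment"),
    Layer("perception", "patterns picked out of those signals"),
    Layer("representation", "internal symbols standing in for patterns"),
    Layer("reasoning", "manipulation of symbols in pursuit of goals"),
    Layer("abstraction", "general concepts that span many situations"),
]


def describe(hierarchy: list[Layer]) -> None:
    """Print each level bottom-up, indenting to show that higher layers
    build on the output of the layers beneath them."""
    for depth, layer in enumerate(hierarchy):
        print(f"{'  ' * depth}{layer.name}: {layer.description}")


if __name__ == "__main__":
    describe(HIERARCHY)
```

The point of the sketch is simply that, in a hierarchy like this, each level only has to make sense of the level directly below it, which is what makes the layered framing attractive as a roadmap.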
If we can figure out how high-level human intelligence works, it’ll go a long way toward informing computer models for GAI.
And, in case youâre skimming this article to find out if the US military is on the verge of unwittingly loosing an army of killer robots in the near-future, hereâs a quote from the paper to dispel your worries:
What about the concerns of AI running amok and taking over human-kind? It is believed that AI will someday become a very powerful technology. But as with any new technology or capability, problems tend to crop up. Especially with respect to general AI, or artificial general intelligence (AGI), there is tremendous potential, for both good and bad.
We will not get into all the hype and speculation here, but suffice it to say that many of the problems we hear about today concerning AI are due to sketchy predictions involving intelligence. Not only is it difficult to make good scientific predictions in general, but when the science in question involves intelligence itself, as it does with AI, then the predictions are almost impossible to make correctly. Again, the main reason is because we do not understand intelligence well enough to enable accurate predictions. In any event, what we must do with AI is proceed with caution.