
This article was published on January 15, 2020

What are neural-symbolic AI methods and why will they dominate 2020?

The commercial AI landscape will be changed forever.



The recent commercial AI revolution has been largely driven by deep neural networks. First invented in the 1960s, deep NNs came into their own once fueled by the combination of internet-scale datasets and distributed GPU farms.

But the field of AI is much richer than just this one type of algorithm. Symbolic reasoning algorithms such as artificial logic systems, also pioneered in the ’60s, may be poised to emerge into the spotlight — to some extent perhaps on their own, but also hybridized with neural networks in the form of so-called “neural-symbolic” systems.

Weaknesses of deep neural networks

Deep neural nets have done amazing things for certain tasks, such as image recognition and machine translation. However, for many more complex applications, traditional deep learning approaches cannot match the ability of hybrid architecture systems that additionally leverage other AI techniques such as probabilistic reasoning, seed ontologies, and self-reprogramming ability.

Deep neural networks, by themselves, lack strong generalization, i.e. the ability to discover new regularities and extrapolate beyond their training sets. Deep neural networks interpolate between and approximate what is already known, which is why they cannot truly be creative in the sense that humans can, though they can produce creative-looking works that are variations on the data they have ingested.
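
To make this concrete, here is a toy sketch (illustrative only, not from any production system) in which a small neural regressor is trained on y = x² over the interval [-1, 1]: it approximates well inside that range, but typically fails badly when asked to extrapolate to x = 3.

```python
# Toy illustration: a small neural net interpolates y = x^2 well inside its
# training range but extrapolates poorly outside it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=(2000, 1))
y_train = x_train.ravel() ** 2

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(x_train, y_train)

print("inside training range,  x=0.5:", net.predict([[0.5]])[0], "(true value 0.25)")
print("outside training range, x=3.0:", net.predict([[3.0]])[0], "(true value 9.0)")
```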

This is why large training sets are required to teach deep neural networks, and why data augmentation is such an important technique for deep learning: the augmentation itself relies on humans to specify transformations that are known to preserve labels. Even interpolation cannot be done perfectly without learning underlying regularities, as the well-known adversarial attacks on deep neural networks vividly demonstrate.
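
For illustration, here is a minimal augmentation pipeline sketch (assuming the torchvision library is available); note that it is the engineer, not the network, who decides which transformations are safe to apply.

```python
# Sketch of human-specified data augmentation: a person, not the network,
# encodes which transformations are assumed to preserve the label.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # assumed label-preserving (true for cats, not for text)
    transforms.RandomRotation(degrees=10),    # small rotations assumed label-preserving
    transforms.ColorJitter(brightness=0.2),   # lighting changes assumed label-preserving
])

image = Image.new("RGB", (224, 224))          # stand-in for a real training image
augmented = augment(image)                    # each epoch sees a slightly different copy
```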


The slavish adherence of deep neural nets to the particulars of their training data also makes them poorly interpretable. Humans cannot fully rely on or interpret their results, especially in novel situations.

Combining the strengths of neural and symbolic AI methods

What is interesting is that, for the most part, the disadvantages of deep neural nets are strengths of symbolic systems (and vice versa): symbolic systems inherently possess compositionality and interpretability, and can exhibit true generalization. Prior knowledge can also be incorporated into symbolic systems far more easily than into neural nets.
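
As a toy sketch of how prior knowledge enters a symbolic system, the snippet below (illustrative only, not a production reasoner) states two facts and lets a trivial inference rule derive a conclusion that was never written down explicitly.

```python
# Minimal sketch of prior knowledge in a symbolic system: facts and rules are
# stated directly, and new conclusions follow by simple inference.
facts = {("ambulance", "is_a", "emergency_vehicle"),
         ("emergency_vehicle", "is_a", "vehicle")}

def is_a(x, y, facts):
    """True if x is a y, following is_a links transitively."""
    if (x, "is_a", y) in facts:
        return True
    return any(is_a(mid, y, facts)
               for (s, rel, mid) in facts if s == x and rel == "is_a")

print(is_a("ambulance", "vehicle", facts))  # True, inferred from the two stated facts
```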

Neural net architectures are very powerful at certain types of learning, modeling, and action, but have limited capability for abstraction. That is why they have been compared to the Ptolemaic epicycle model of our solar system: they can be made more and more precise, but only by adding more and more parameters and data, and by themselves they cannot discover Kepler’s laws, incorporate them into a knowledge base, and go on to infer Newton’s laws from them.

Symbolic AI is powerful at manipulating and modeling abstractions, but deals poorly with massive empirical data streams.

This is why we believe that deep integration of neural and symbolic AI systems is the most viable path to human-level AGI on modern computer hardware.

It’s worth noting in this light that many recent “deep neural net” successes are actually hybrid architectures, e.g. the AlphaGo architecture from Google DeepMind integrates two neural nets with one game tree. Their recent MuZero architecture, which can master both board and Atari games, goes further along this path using deep neural nets together with planning with a learned model.
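
The schematic sketch below (an illustration of the general pattern, not DeepMind’s code) shows the basic hybrid idea: a learned value function stands in for expensive rollouts inside an explicit tree search over a toy game.

```python
# Schematic sketch of the hybrid pattern behind systems like AlphaGo and MuZero:
# a learned value estimate guides an explicit, symbolic-style tree search.
def value_net(state):
    """Stand-in for a trained neural network that scores a game state."""
    return -abs(state - 10)                     # toy: states closer to 10 are better

def legal_moves(state):
    return [state + 1, state - 1, state * 2]    # toy game dynamics

def search(state, depth):
    """Depth-limited lookahead; the neural value estimate replaces a full rollout."""
    if depth == 0:
        return value_net(state)
    return max(search(next_state, depth - 1) for next_state in legal_moves(state))

best = max(legal_moves(3), key=lambda s: search(s, depth=2))
print("chosen move leads to state", best)
```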

The highly successful ERNIE architecture for natural language processing question-answering from Tsinghua University integrates knowledge graphs into neural networks. The symbolic sides of these particular architectures are relatively simplistic, but they can be seen as pointing in the direction of more sophisticated neural-symbolic hybrid systems.
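
The general idea can be sketched in a few lines (a hedged illustration, not ERNIE’s actual implementation): embeddings of knowledge-graph entities are fused with the contextual token representations a language model already computes.

```python
# Sketch of fusing knowledge-graph entity embeddings into a language model's
# token representations; dimensions and the fusion layer are assumed for illustration.
import numpy as np

token_vec  = np.random.rand(768)   # contextual embedding for a token, e.g. "Tsinghua"
entity_vec = np.random.rand(100)   # knowledge-graph embedding for the linked entity

# assumed fusion layer: project the concatenation back to the model's hidden size
W = np.random.rand(768, 768 + 100)
fused = np.tanh(W @ np.concatenate([token_vec, entity_vec]))
print(fused.shape)                 # (768,): downstream layers now see graph knowledge too
```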

Cisco’s successes with neural-symbolic street scene analysis

The integration of neural and symbolic methods relies heavily on what has been the most profound revolution in AI in the last 20 years — the rise of probabilistic methods: e.g. neural generative models, Bayesian inference techniques, estimation of distribution algorithms, probabilistic programming.
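
At its simplest, the probabilistic glue is just Bayesian updating, as in the toy calculation below (all numbers are assumed for illustration): a symbolic hypothesis gets a prior, and noisy neural detections revise it as evidence streams in.

```python
# Minimal Bayesian-update sketch: a symbolic hypothesis is revised by a noisy
# neural detection. All probabilities here are assumed, illustrative values.
prior = 0.01                     # P(jaywalking event) before looking at the frame
p_detect_given_event    = 0.90   # detector sensitivity (assumed)
p_detect_given_no_event = 0.05   # detector false-positive rate (assumed)

evidence = p_detect_given_event * prior + p_detect_given_no_event * (1 - prior)
posterior = p_detect_given_event * prior / evidence
print(f"posterior after one positive detection: {posterior:.3f}")  # roughly 0.15
```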

As an example of the emerging practical applications of probabilistic neural-symbolic methods, at the Artificial General Intelligence (AGI) 2019 conference in Shenzhen last August, Hugo Latapie from Cisco Systems described work his team has done in collaboration with our AI team at SingularityNET Foundation, using the OpenCog AGI engine together with deep neural networks to analyze street scenes.

OpenCog provides a neural-symbolic framework that is especially rich on the symbolic side and that interoperates with popular deep neural net frameworks. It features a combination of Probabilistic Logic Networks (PLN), probabilistic evolutionary program learning (MOSES), and probabilistic generative neural networks.

The traffic analytics system demonstrated by Latapie deploys OpenCog-based symbolic reasoning on top of deep neural models for street scene cameras, enabling feats such as semantic anomaly detection (flagging collisions, jaywalking, and other deviations from expectation), unsupervised scene labeling for new cameras, and single-shot transfer learning (e.g. learning about new signals for bus stops with a single example).
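
A highly simplified sketch of that layering (illustrative only; neither Cisco’s nor OpenCog’s actual code) might look like this: per-camera neural detectors emit labeled objects, a shared ontology maps the labels to common concepts, and a handful of symbolic rules flag semantic anomalies such as jaywalking.

```python
# Illustrative sketch of a symbolic layer over per-camera neural detections:
# a shared ontology plus simple rules turns raw labels into semantic anomalies.
ontology = {"person": "agent", "cyclist": "agent", "car": "vehicle", "bus": "vehicle"}

def anomalies(detections, signal_state):
    """detections: list of (label, zone) pairs emitted by a camera's neural detector."""
    flagged = []
    for label, zone in detections:
        kind = ontology.get(label)
        if kind == "agent" and zone == "roadway" and signal_state == "walk_forbidden":
            flagged.append(("jaywalking", label, zone))
        if kind == "vehicle" and zone == "crosswalk" and signal_state == "walk":
            flagged.append(("vehicle_in_crosswalk", label, zone))
    return flagged

frame = [("person", "roadway"), ("car", "crosswalk"), ("bus", "roadway")]
print(anomalies(frame, signal_state="walk_forbidden"))
```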

The difference between a pure deep neural net approach and a neural-symbolic approach in this case is stark. With deep neural nets deployed in a straightforward way, each neural network models what is seen by a single camera. Forming a holistic view of what’s happening at a given intersection, let alone across a whole city, is much more of a challenge.

In the neural-symbolic architecture, the symbolic layer provides a shared ontology, so all cameras can be connected to an integrated traffic management system. If an ambulance needs to be routed in a way that will neither encounter nor cause significant traffic, this sort of whole-scenario symbolic understanding is exactly what one needs.

The same architecture can be applied to many other related use cases where one can use neural-symbolic AI to both enrich local intelligence and connect multiple sources/locations into a holistic view for reasoning and action. 

It may well be possible to crack this particular problem with a more complex deep neural net architecture, with multiple neural nets working together in subtle ways. However, it is an example of something that is easier and more straightforward to address using a neural-symbolic approach, and it sits quite close to machine vision, one of deep neural nets’ great strengths.

In other, more abstract application domains such as mathematical theorem-proving or biomedical discovery, the value added by the symbolic side of the neural-symbolic hybrid is even more dramatic.

2020: The year of neural-symbolic hybrid AI

Deep neural nets have done amazing things over the last few years, bringing applied AI to a whole new level. We’re betting that the next phase of incredible AI achievements is going to be delivered via hybrid AI architectures such as neural-symbolic systems. This trend already started, relatively quietly, in 2019, and we expect it to pick up speed dramatically in 2020.
