On May 19, I’m going to unveil the Semantic Bank to The Next Web audience, and I wrote a bit about it here the other day. I thought I would expand a little more on the background to the Semantic Bank by talking in more depth about artificial intelligence, machine learning and deep learning, which are the building blocks of the Semantic Bank. The difference between machine learning and deep learning is that deep learning was introduced to move machine learning closer to its original goal: enabling artificial intelligence.
As MIT Technology Review puts it:
Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.
In other words, these developments are trying to create a computer that is as smart as the human brain, if not smarter. This has been dreamed of for years, but it is only now, with the availability of almost unlimited computing power, that it is becoming a reality, led by the internet giants: Facebook, Amazon, Tencent, Baidu, Alibaba and Google (FATBAG, as I call them).
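To make the “layers of neurons” idea a little more concrete, here is a toy sketch in Python, purely my own illustration rather than anything Google has built, of a tiny network with one hidden layer that learns a simple pattern by repeatedly adjusting the strength of its connections:

```python
# A toy "deep learning" illustration: a network with one hidden layer of
# artificial neurons learns the XOR pattern purely by adjusting its weights.
# This is a hedged sketch for illustration only, not production code.
import numpy as np

rng = np.random.default_rng(42)

# Training data: the XOR pattern, which a single artificial neuron cannot
# represent but a network with a hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> 4 hidden "neurons"
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden layer -> 1 output neuron

lr = 0.5
for _ in range(20_000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every connection to reduce the prediction error.
    d_out = (y - output) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += lr * hidden.T @ d_out
    b2 += lr * d_out.sum(axis=0)
    W1 += lr * X.T @ d_hid
    b1 += lr * d_hid.sum(axis=0)

print(output.round(2))  # should end up close to [[0], [1], [1], [0]]
```

Scale that idea up to many more layers, millions of connections and vast amounts of data, and you have the kind of pattern-recognition systems the internet giants are building.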
The combined work of these giants is leading us rapidly into the second level of artificial intelligence, Artificial General Intelligence (AGI). In case you’re not aware, there are three defined levels of artificial intelligence:
ANI: Artificial Narrow Intelligence specializes in one area, such as IBM’s Deep Blue, which beat Garry Kasparov at chess by being good at one thing: playing chess;
AGI: Artificial General Intelligence, where machines pass the Turing Test and reach the intelligence levels of human beings, with the ability to apply both logic and abstract thinking to complex ideas, to learn quickly and to learn from experience; and
ASI: Artificial Super Intelligence when machines become smarter than all of humanity combined.
These developments sit at the core of the semantic web and the semantic bank, and the firm that appears to be furthest ahead in this space is Google. This is not to say that the other companies are behind, but the sheer number of public announcements Google has made over the last six years means it has certainly made the most noise about deep learning.
Google’s push started in 2011 with the launch of Google Brain.
The first results of that project were released in 2012, when Google announced that their machines had learnt to recognize what cats looked like:
When computer scientists at Google’s mysterious X lab built a neural network of 16,000 computer processors with one billion connections and let it browse YouTube, it did what many web users might do — it began to look for cats.
The “brain” simulation was exposed to 10 million randomly selected YouTube video thumbnails over the course of three days and, after being presented with a list of 20,000 different items, it began to recognize pictures of cats using a “deep learning” algorithm. This was despite being fed no information on distinguishing features that might help identify one.
Picking up on the most commonly occurring images featured on YouTube, the system achieved 81.7 percent accuracy in detecting human faces, 76.7 percent accuracy when identifying human body parts and 74.8 percent accuracy when identifying cats.
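To give a flavour of what that means in practice, here is a heavily scaled-down sketch of the same idea, unsupervised learning from unlabeled data, written in Python with scikit-learn (my own toy illustration using synthetic data, nothing to do with Google’s actual system):

```python
# A heavily scaled-down analogy to the experiment above, using synthetic data
# and scikit-learn (assumptions of this sketch, not part of Google's system).
# A small network is trained on unlabeled vectors and, with no labels at all,
# learns compact codes for the patterns that recur most often in the data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Unlabeled "images": 1,000 noisy copies of three recurring 16-pixel patterns,
# standing in for the faces, bodies and cats that keep appearing on YouTube.
prototypes = rng.random((3, 16))
X = prototypes[rng.integers(0, 3, size=1000)] + rng.normal(0, 0.05, size=(1000, 16))

# An autoencoder: the network reconstructs its own input through a narrow
# 3-unit hidden layer, so whatever it learns comes purely from the data.
autoencoder = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
autoencoder.fit(X, X)

# The hidden layer now summarizes each input as just 3 numbers; inputs built
# from the same recurring pattern end up with similar codes.
codes = np.maximum(0, X @ autoencoder.coefs_[0] + autoencoder.intercepts_[0])
print(codes[:5].round(2))
```

The network is never told that there are three patterns, let alone what they are; like the cat detector, it simply latches onto whatever keeps showing up in the data.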
In 2014, Google beat Facebook to the line by acquiring the UK start-up DeepMind for $600 million. Between Google Brain and DeepMind, the firm started really pushing the boundaries of AI, with machines that taught themselves to play video games at human levels and beyond.
Then the big headline last year was that Google’s machines were able to beat one of the world’s top Go masters, at a game so complex that we believed no machine could beat a leading human player, or not for a long time anyway.
That was followed soon after by the announcement that their machines were now so clever that they could, in effect, create their own language: in one experiment, computers developed their own form of encryption using machine learning, without being taught any specific cryptographic algorithms.
The latest news is that the machines can now retain the skills they learned on one task while learning new ones, a key requirement for reaching the second level of artificial intelligence, AGI.
Why Google is so committed to AI and deep learning is easy to see when you look at its impact on their services, such as Google Translate. In December 2016, the service was converted from the old translation system to an AI-based one, and what a radical difference it made. By way of example, the famous quotation by Jorge Luis Borges, “Uno no es lo que es por lo que escribe, sino por lo que ha leído”, would have come out of the old Google Translate as “One is not what is for what he writes, but for what he has read”, but in the AI version it reads much more clearly: “You are not what you write, but what you have read.”
Through the work of the internet giants, we are rapidly reaching the stage where machines are more intelligent than humans, and that is a dramatic tipping point in the progress of systems to automate everything. We are already near the point where a chatbot can serve you better than a human. What happens when that chatbot is put inside a human-looking robot or avatar? Welcome to the Semantic Web: it’s not an operating system, it’s a consciousness.
Chris Skinner is speaking about the intriguing concept of the Semantic Bank at TNW Conference today at 12:35; check out our other great speakers here.