This article was published on April 17, 2018

One machine to rule them all: A ‘Master Algorithm’ may emerge sooner than you think

It’s excusable if you didn’t notice when a scientist named Daniel J. Buehrer, a retired professor from National Chung Cheng University in Taiwan, published a white paper earlier this month proposing a new class of math that could lead to the birth of machine consciousness. Keeping up with all the breakthroughs in the field of AI can be exhausting, we know.

Robot consciousness is a touchy subject in artificial intelligence circles. In order to have a discussion around the idea of a computer that can ‘feel’ and ‘think,’ and has its own motivations, you first have to find two people who actually agree on the semantics of sentience. And if you manage that, you’ll then have to wade through a myriad of hypothetical objections to any theoretical living AI you can come up with.

We’re just not ready to accept the idea of a mechanical species of ‘beings’ that exist completely independently of humans, and for good reason: it’s the stuff of science fiction – just like spaceships and lasers once were.

Which brings us back to Buehrer’s white paper proposing a new class of calculus. If his theories are correct, his math could lead to the creation of an all-encompassing, all-learning algorithm.

The paper, titled “A Mathematical Framework for Superintelligent Machines,” proposes a new type of math, a class calculus that is “expressive enough to describe and improve its own learning process.”

Buehrer suggests a mathematical method for organizing the various tribes of AI-learning under a single ruling construct, such as the one suggested by Pedro Domingos in his book “The Master Algorithm.”

We asked Professor Buehrer when we should expect this “Master Algorithm” to emerge. He said:

If the class calculus theory is correct, that human and machine intelligence involve the same algorithm, then it is only less than a year for the theory to be testable in the OpenAI gym. The algorithm involves a hierarchy of classes, parts of physical objects, and subroutines. The loops of these graphs are eliminated by replacing each by a single “equivalence class” node. Independent subproblems are automatically identified to simplify the matrix operations that implement fuzzy logic inference. Properties are inherited to subclasses, locations and directions are inherited relative to the center points of physical objects, and planning graphs are used to combine subroutines.
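
Buehrer’s quote packs several graph operations into a few sentences, so here is a minimal, hypothetical sketch of just the first of them: collapsing the loops of a class/part/subroutine graph into single “equivalence class” nodes so the hierarchy becomes acyclic. This is not Buehrer’s code; the example graph, the class names, and the use of the networkx library are our own illustrative assumptions.

```python
# A minimal sketch (not Buehrer's actual algorithm) of one step his quote
# describes: each loop in a class/part/subroutine graph is replaced by a
# single "equivalence class" node, leaving a directed acyclic hierarchy.
# The graph and class names below are invented for illustration.
import networkx as nx

# Hypothetical class hierarchy containing a loop:
# Vehicle -> Machine -> Artifact -> Vehicle.
G = nx.DiGraph()
G.add_edges_from([
    ("Vehicle", "Machine"),
    ("Machine", "Artifact"),
    ("Artifact", "Vehicle"),   # the loop to be eliminated
    ("Car", "Vehicle"),
    ("Engine", "Machine"),
])

# Each strongly connected component (a set of mutually reachable classes)
# becomes one equivalence-class node; the condensation is guaranteed acyclic.
components = list(nx.strongly_connected_components(G))
dag = nx.condensation(G, scc=components)

for node, data in dag.nodes(data=True):
    print(node, sorted(data["members"]))
print("Acyclic:", nx.is_directed_acyclic_graph(dag))
```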

It’s a revolutionary idea, even in a field like artificial intelligence where breakthroughs are as regular as the sunrise. A self-teaching class calculus that could learn from (and control) any number of connected AI agents – basically a CEO for all artificially intelligent machines – would, in theory, grow more intelligent every time any of the learning systems it controls was updated.

Perhaps most interesting is the idea that this control and update system will provide a sort of feedback loop. And this feedback loop is, according to Buehrer, how machine consciousness will emerge:

Allowing machines to modify their own model of the world and themselves may create “conscious” machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus’s model of the world and the results of what its robots actually caused to happen in the world.
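
To make that measure concrete, here is a toy, hypothetical sketch of the loop Buehrer describes: a world model makes a prediction, a “robot” acts, and the mismatch between prediction and outcome is fed back into the model, with a counter tracking how often the loop is used. All names and numbers here are our own assumptions, not anything from the paper.

```python
# A toy sketch of the feedback loop the quote describes: a model predicts,
# the robot acts in the world, and the discrepancy is fed back to update the
# model. Buehrer's proposed measure is simply how many times this
# model-vs-world feedback loop gets used. Everything here is illustrative.
import random

class WorldModel:
    def __init__(self):
        self.estimate = 0.0           # the model's belief about some quantity
        self.feedback_loops_used = 0  # the proposed "consciousness" count

    def predict(self) -> float:
        return self.estimate

    def update(self, observed: float) -> None:
        # Nudge the belief toward what actually happened in the world.
        self.estimate += 0.5 * (observed - self.estimate)
        self.feedback_loops_used += 1

def act_in_world() -> float:
    # Stand-in for what the robot actually caused to happen.
    return 10.0 + random.gauss(0, 1)

model = WorldModel()
for _ in range(20):
    prediction = model.predict()
    outcome = act_in_world()
    model.update(outcome)             # one use of the feedback loop

print(f"Estimate: {model.estimate:.2f}, "
      f"feedback loops used: {model.feedback_loops_used}")
```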

Buehrer also states it may be necessary to develop these kinds of systems on read-only hardware, thus negating the potential for machines to write new code and become sentient. He goes on to warn, “However, turning off a conscious sim without its consent should be considered murder, and appropriate punishment should be administered in every country.”

Sophia the robot would be thrilled, but it isn’t, because it’s just a puppet.

Buehrer’s research further indicates AI may one day enter into a conflict with itself for supremacy, stating intelligent systems “will probably have to, like the humans before them, go through a long period of war and conflict before evolving a universal social conscience.”

It remains to be seen if this new math can spawn a mechanical species of beings with their own beliefs and motivations. But it’s becoming increasingly difficult to dismiss outright those machine learning theories that blur the line between science and fiction. And that feels like progress.
