
This article was published on January 2, 2019

AI is incredibly smart, but it will never match human creativity

One could be forgiven for thinking that machines are creative. Numerous artificial intelligence projects appear to demonstrate that machines are capable of creating intricate works of art that rival those created by their inferior human creators.

Just recently, IBM Watson created a movie trailer for the horror film Morgan (IBM). Google’s DeepDream AI fascinated the world with its eerie superimpositions of eyeballs, cats, birds, and iguanas onto everyday images in a seemingly creative way. The image below was transformed with this very net.

Neural nets can even restore color to black-and-white images the network has never seen before, much like a child filling in a coloring book; an example is below.

A black and white photo colorized by a neural net

Each of these demonstrations of the creative prowess of AI relies on recent advances in machine learning that allow computer programs to compute things in a manner loosely modeled on the human brain. The key to machines’ lack of true creativity, however, lies in the word compute.

Each example above utilizes a carefully constrained algorithm to achieve a very specific end goal. At their core, these algorithms simply manipulate symbols and then concatenate the results in a meaningful way. As John Searle argued in Minds, Brains, and Programs, this does not amount to understanding.


True machine creativity cannot emerge from a system that merely takes input, performs mathematical functions, and presents an output to the eager programmer who created it. As long as this is the case, the threat of machines completely displacing the human labor force is nonexistent.

This is not to say that machine intelligence won’t surpass, or hasn’t already surpassed, the intellectual power of the brain. Many try to draw a direct comparison between human computational power or storage capacity and that of computers. It is not necessarily a useful comparison, but for the time being we can use it to demonstrate the compartmentalized superiority of machines.

Accurate storage and retention of information is one area in which computers have unequivocally bested humans. Anyone who has been through the educational system knows the struggle of trying to memorize passages from a textbook or cram equations the night before a test. Human recall is imperfect, and it takes a while before information is cemented in the brain securely enough to survive more than a few minutes of distraction. To the envy of these frustrated students, give the same task to a computer and it will happily retain anything you tell it to keep. Computers are simply better with data.

Parallel computation is another area in which computers have the advantage. Human brains do “process” things in parallel, but it is clearly difficult to hold more than one train of thought at a time. Graphics processors, on the other hand, utilize hundreds or thousands of discrete processing units to do everything from sequencing primate genomes to mining cryptocurrency. This amazing video by Nvidia compares CPUs and GPUs; picture your brain as the single-threaded CPU:

The speed of the brain’s computation is also orders of magnitude slower than that of its electronic counterparts. Individual synaptic connections happen at most a few thousand times a second, whereas the transistors in your smartphone can switch on and off billions of times a second. Even the best mathematician cannot rival the sheer computational speed of a silicon-based system.

Computers have a distinct spec advantage on paper, and that advantage does carry over in some capacity to the labor market. Even before the era of computers, machines rapidly displaced human workers. Luddite rebellions against the mechanization of the textile industry were perhaps some of the first examples of human resistance to machines (Thompson). Now consider the modern labor market. CGP Grey summarizes this quite well in his video Humans Need Not Apply:

Grey argues that general-purpose robots are the real threat to humans seeking jobs, since replacing every single manufacturing job with a specialized machine would be a slow process.

Towards the end of his video, Grey begins to discuss the implications of artificial intelligence for creative work. He notes that creativity is a supposed safe haven that many retreat to in defense of the uniqueness of human labor. This particular argument is a form of a theory first put forward by Keynes in Economic Possibilities for our Grandchildren.

Keynes essentially says that by the year 2030 the market economy will satisfy all of humanity’s material desires, allowing the government and people to place an increased emphasis on the arts and improvement of the human condition. This will eventually result in an intellectual paradise where individuals can pursue knowledge and beauty.

Grey characterizes one who performs this sort of artistic work as a “special creative snowflake.” He goes on to describe why such a society inherently wouldn’t work, as many artists seek fame and recognition. This reliance on “popularity” isn’t sustainable in a society where everyone is a special creative snowflake. Grey also shows how robots can already perform many of these “creative” tasks, such as composing music, painting, or writing.

Everything discussed so far paints a grim picture for human labor in its current form. There is no place to run; the robots are coming. In large part, they are. There’s little debate that the labor landscape will be fundamentally reshaped in the coming years, and I’m not refuting this point. It’s the implications of this reshaping that many incorrectly characterize.

A natural conclusion to draw from the computational and physical superiority of machines is that humanity is doomed and we will all be replaced by robots sooner rather than later. This sense of desperation and doom, to varying degrees, appears to be standard across much of the literature on the subject. It is driven primarily by the assumption that machines will be able to do everything that humans can do, and it is this assumption that leads much of the public astray.

Clearly, humans are different from computers in their current incarnation. No computer has yet achieved consciousness, and, according to Searle, no computer of the current form ever will. Searle uses his “Chinese room” thought experiment to argue this point. In it, he describes a scenario in which an individual with no knowledge of Chinese sits in a room with a rule book. People outside feed Chinese characters to the person in the room. The person in the room takes the input, finds the proper output for that character or sequence of characters, and passes the output back to the people waiting outside.

To the people outside, it appears as if whoever, or whatever, is in the room truly understands Chinese. As we know, however, this is not the case. The same principle extends to all current forms of artificial intelligence: they may manipulate symbols in a clever way, but they are not conscious.
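To make the symbol-manipulation point concrete, here is a minimal sketch of a Chinese room as a computer program. The rule book, the phrases, and the function name are all invented for illustration: a lookup table can return fluent-looking replies while understanding nothing at all.

```python
# A toy "Chinese room": an invented, illustrative lookup table mapping input
# symbols to output symbols. The program returns convincing replies without
# any understanding of what the symbols mean.

RULE_BOOK = {
    "你好": "你好！",                        # "hello" -> "hello!"
    "你会说中文吗？": "会，我说得很流利。",   # "do you speak Chinese?" -> "yes, fluently."
}

def chinese_room(symbols: str) -> str:
    """Find the prescribed output for the incoming symbols and pass it back outside."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "please say that again."

if __name__ == "__main__":
    # To the people waiting outside, the room appears to understand Chinese.
    print(chinese_room("你会说中文吗？"))
```

The output is only as good as the rule book; nothing inside the room knows what any of the characters mean.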

Even so, consider the case in which a machine truly could be conscious. The “dancing qualia” thought experiment, proposed by David Chalmers, is designed to illustrate that machine consciousness is indeed possible. It goes as follows: a piece of your brain has been removed, but it remains connected to the rest of your brain by wires, so that you notice no interruption or anything out of the ordinary at all. Also connected to these same wires is a computer chip.

This portion of your brain is specifically responsible for giving you the conscious experience of seeing that an object is red, and the computer chip is designed to replicate that function. A researcher controls a toggle that switches the connection between the chip and your actual brain. The researcher places a red apple in front of you and flips the switch back and forth.

You experience no interruption in your perception of the apple. If the chip produced a different experience of red, your perception would shift each time the switch was flipped; since it does not, the conscious experience of red must be the same for you and your silicon counterpart. Therefore, Chalmers concludes, there is no functional difference between this portion of your brain and the computer chip, even if the chip may just be manipulating symbols to represent consciousness.

Even if such a conscious integrated circuit were possible, it would still lack the foundations for truly creative and spontaneous thought. Human brains can generate ideas in a genuinely creative fashion; this conscious IC cannot. In his TED talk, neuroscientist Henning Beck describes the remarkable characteristics of the brain that allow us to spontaneously generate thought:

As Beck shows, brains are imperfect, nondeterministic, and partially analog. These characteristics allow us to represent things as concepts rather than just pure data. When we think about some object we’re not recalling the actual object itself, but rather a conceptual idea of what the object is. This simple trait allows the brain to be incredibly adaptive, as it can observe completely new stimuli and utilize general conceptual understanding to immediately determine what these stimuli represent. Ideas are simply an association of concepts, linked in a new way by the synchronized firing of bunches of neurons.

Consider something as simple as numbers. Computers can easily represent numbers as a sequence of binary states, whereas the brain thinks about distinct numbers as concepts. Researchers tested this by showing subjects the idea of a number (let’s say three) using dots. Even if the presentation of this number varied, such as three dots on one page or three sequential dots on different pages, the same group of neurons responsible for the idea of “three” fired each time.

This is why humans have such a hard time estimating large quantities or conceptualizing large numbers. We use the idea of “three” numerous times a day, so we have a well-defined sense of what it is. Other familiar numbers such as two or four feel completely distinct from three. The numbers 61,967,278 and 89,595,540, however, feel about the same to the human brain. Even though the difference between them is immense, we just conceptualize them both as “large.” To a computer, 61,967,278 is just as distinct from 89,595,540 as three is from four.
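As a small sketch of that contrast, using the numbers from the paragraph above, the snippet below shows how a computer stores every integer as an exact bit pattern, so three and 61,967,278 are equally precise and equally distinct to it.

```python
# Every integer is just an exact, distinct bit pattern to the machine;
# magnitude does not blur anything together.
for n in (3, 4, 61_967_278, 89_595_540):
    print(f"{n:>12,} -> {n:b}")

# 61,967,278 and 89,595,540 are as cleanly distinguishable as 3 and 4:
print(3 == 4)                       # False
print(61_967_278 == 89_595_540)     # False
print(89_595_540 - 61_967_278)      # 27628262, computed exactly
```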

It’s easy to say, then, that we should just build computers that mimic this conceptual behavior. Despite the efforts of researchers and theorists, this is intrinsically not how computers operate. In Computing Machinery and Intelligence, Alan Turing uses the analogy of an onion to discuss human and machine consciousness. Turing argued that if one were to strip away the inner workings of a brain or a machine layer by layer and, at some layer, encountered consciousness, then that brain or machine really is conscious. If at no point does this curious individual encounter consciousness, then the item in question is merely a machine.

Individual neurons are not conscious, but at some point consciousness emerges. With sufficient research, scientists could visualize the localized firing of groups of neurons and analyze how those groups interact to form the conceptual understanding that underlies consciousness. Peel away the outer layer of a computer and there’s RAM, a CPU, a graphics processor, a crystal oscillator, and peripherals. Go further into the CPU and there’s cache, ALUs, timers, and controllers. Keep going, and there’s sequential logic. Peel another layer, and there are logic gates. Delve into the logic gates and you find MOSFETs. Go further still and you’re looking at individual atoms. The one thing you don’t find? Consciousness.

Humanity’s safe haven in the coming years will be exactly that: consciousness. Spontaneous thought, creative thinking, and a desire to challenge the world around us. As long as humans exist there will always be a need to innovate, to solve problems through brilliant ideas. Rather than a society in which all individuals are free to spend their days creating works of art, the machine revolution will instead lead to one in which anyone can make a living by dreaming and providing creative input to projects of all kinds. The currency of the future will be thought.
