This article was published on January 17, 2018

Why AI has to develop a personality to succeed


What’s so interesting about talking robots? From the Tin Man and C-3PO to HAL 9000, humans have long fantasized about bringing their lovable idiosyncrasies to metal and silicon.

Now, as intelligent assistants and chatbots become real presences in our lives thanks to impressive advances in machine learning and natural language understanding, a look at that history shows that personality is just as necessary for success as technology.

The chatbot story begins in the mid-sixties

In 1966, a German-American professor at MIT, Joseph Weizenbaum, unveiled ELIZA. ELIZA was a virtual psychotherapist, running on an IBM 7090, one of the first mainframes to use transistors instead of vacuum tubes.

Joseph Weizenbaum demonstrating remote modem access at MIT, around 1965. Photo via the German newspaper DIE ZEIT.

ELIZA began with: “I am the psychotherapist. Please, describe your problems.”

As Weizenbaum’s coworkers in the Artificial Intelligence Lab sat down to chat with ELIZA, they began to unload their burdens, as if it were a real therapist.

Weizenbaum’s own secretary, who remains nameless in the literature but is described as someone who “surely knew it to be merely a computer program,” even once asked him to leave the room so she could have some privacy with ELIZA.

This behavior shocked Weizenbaum. He wrote:

What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

ELIZA did not actually have much intelligence at all, Weizenbaum noted. Instead, it cleverly decomposed its users’ statements and reflected them back as open-ended questions, simulating a non-directive conversation. For example:

Hello, I am Eliza. 
I’m writing a blog post about you.
Do you believe it is normal to be writing a blog post about me? 
Yes.
I see.
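Weizenbaum’s decompose-and-reflect trick can be sketched in a few lines of Python. This is an illustrative toy, not Weizenbaum’s original script: the rules and the reflection table below are invented just to reproduce the transcript above.

```python
import re

# A reflection table swaps first-person words for second-person ones
# (and vice versa), so "my job" comes back as "your job".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "i'm": "you're", "you": "me", "your": "my",
}

# Each rule pairs a regex that "decomposes" the user's statement with a
# template that re-purposes the captured fragment as an open-ended question.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Do you believe it is normal to be {0}?"),
    (re.compile(r"i'm writing (.*)", re.I),
     "Do you believe it is normal to be writing {0}?"),
    (re.compile(r"i (?:feel|felt) (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"yes\.?", re.I), "I see."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns and verbs word by word using the reflection table."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(statement: str) -> str:
    """Return the first matching rule's question, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

With these rules, `respond("I'm writing a blog post about you")` yields the exchange from the transcript. The point of the sketch is how little machinery is involved: no understanding, just pattern matching and pronoun swapping.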

Driven by his experiences, Weizenbaum went on to write a book, Computer Power and Human Reason, that examined the limitations of computers and argued that they could not empathize or truly reason outside of their instructions.

As a therapist, ELIZA could not tell a schizophrenic from a kleptomaniac, but it was there to listen, at least superficially, and many users found that thrilling.

Just two years later, in 1968, 2001: A Space Odyssey introduced the concept of artificial intelligence to millions. HAL begins the film friendly and helpful. Later, he morphs into the ominous HAL we all know, a robot that follows the unintended consequences of its programming at the expense of all else, including its astronauts’ lives.

Computing power hadn’t progressed much since ELIZA, but dreamers like Arthur C. Clarke extended the concept far beyond what was possible in software then, or even now. In turn, moviegoers proved receptive.

Despite being a brand new concept, HAL seemed to require little explanation and instantly became a pop culture icon.

In the early 1990s, Stanford professor Clifford Nass began studying how humans react to computers. In 1994, he published Computers Are Social Actors. Nass discovered that his subjects:

  • Were polite to computers when they didn’t have to be;
  • Treated identical functions differently when different electronic voices were used;
  • Responded to computers in ways that reflected gender stereotypes and the expectations set up by the study; and
  • Naturally attributed actions to the computer’s own agency, rather than to some unknown programmer.

 

The experimental setup used by Clifford Nass.

Nass’s rigorous work proved what Weizenbaum had discovered through his officemates’ reactions to ELIZA, and what had since become culturally clear: people are inclined to treat computers as something more than the inanimate groupings of code that they are.

A few years later, a new phenomenon arose in the hearts of American teenagers, fueled by the growth of AOL Instant Messenger and 56K modems: SmarterChild.

With the emergence of the new channels of instant messaging and SMS, entrepreneurs were looking for ways to take advantage. SmarterChild launched on AOL Instant Messenger in June 2001, originally answering questions regarding sports, the stock market, and the news.

 

Source: Botwiki.org

The creators of SmarterChild began to notice that its usage spiked every day at 3 P.M. and 6 P.M. Eastern. Kids, home from school first on the East Coast and then on the West, were hurling vicious insults at SmarterChild as a fun after-school activity.

SmarterChild’s creators could have decided this was not the kind of product they wanted to build; instead, they responded to the ‘positive’ feedback by leaning in and giving SmarterChild a distinctly snarky personality.

This proved to be just what users wanted, and SmarterChild spread like wildfire, eventually accounting for 5% of global instant messaging traffic.

SmarterChild undoubtedly had great technology, but, echoing the lessons of ELIZA and of Clifford Nass’s research, we don’t remember it for scaling to handle hundreds of millions of daily messages; we remember it for its humor and personality.

In October 2011, conversational interfaces took a huge step forward when Apple introduced Siri to the world.

In its marketing, Apple emphasized Siri’s intelligence, but in the product itself it emphasized personality. When the iPhone 4S was released, millions suddenly had access to their first intelligent virtual assistant. HAL had seemingly become real, and thousands entertained themselves by asking Siri silly questions and being surprised by her snarky responses.

Siri, especially back then, had limited capabilities, but it shifted popular culture. The movie Her, released in 2013, extended the long tradition of AI futurism with a modern perspective.

Her’s AI, Samantha, was compassionate and fully intelligent. Her name is not an acronym, and her abilities are not purely functional; she was a friend and lover to Theodore, the movie’s protagonist.

 

Theodore and Samantha

Practical lessons from the history of chatbots

If you’re implementing a chatbot as a product or interface, the importance of personality doesn’t mean it’s necessary to pretend to be human or make cheesy jokes with every interaction. It does mean that your interaction design needs to be respectful of the social manner in which your users will assess it.

Your chatbot should be purposeful, reflective of your product’s voice, and simpatico with your users. One helpful design exercise is to produce an assistant persona and personality:

  • Don’t pretend to be a human! Personality doesn’t mean person.
  • Should the script’s tone be familiar or professional?
  • Should the bot have a name?
  • Should the bot be polite and conversational or entirely focused on the task at hand?
  • Would jokes be appropriate? If yes, be mindful of repetitive interactions, and repetitive punch lines.
  • Does your bot platform allow the easy A/B testing of different messages and tones?
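One way to make this exercise concrete is to capture the answers in a small persona sheet that your bot’s copy is generated from, so tone decisions are made once rather than message by message. The sketch below is purely hypothetical: the fields, the `greeting` helper, and the name “Ada” are invented for illustration and do not belong to any particular bot framework.

```python
# Hypothetical persona sheet: one place to record the design decisions
# from the checklist above (name, tone, small talk, jokes).
PERSONA = {
    "name": "Ada",                                 # or None if the bot stays unnamed
    "tone": "familiar",                            # "familiar" or "professional"
    "small_talk": True,                            # conversational vs. task-only
    "jokes": {"enabled": True, "max_repeats": 1},  # guard against repeated punch lines
}

def greeting(persona: dict) -> str:
    """Render an opening line consistent with the persona's tone and name."""
    name = persona.get("name")
    if persona["tone"] == "familiar":
        if name:
            return f"Hi! I'm {name}. What can I help with?"
        return "Hi! What can I help with?"
    if name:
        return f"Hello. This is {name}. How may I assist you?"
    return "Hello. How may I assist you?"
```

Centralizing the persona this way also makes the last bullet practical: A/B testing a tone becomes swapping one dictionary rather than rewriting every message.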

Given that users innately treat computers as social beings, there’s no need to pretend to be a human. Instead, a bot is a potential opportunity to expand the corporate voice to the familiar. For example, Apple’s brand is generally serious, but in the early days of Siri, Apple was not afraid of jokes and snarky responses. Over time, the jokes were toned down, but Siri still retains a distinct personality.

Chatbot technology is making exciting progress, driven by innovations in machine learning, deep learning, and natural language understanding. Looking back, though, the most successful examples of conversational agents succeeded with little technology, but a lot of personality.

As we move forward as an industry, building conversational products and chatbots for real-life uses, designers will need to incorporate personality into their conversational interfaces, and successful brands will allow and encourage it.
