This article was published on January 18, 2013

More than human: Why does Apple need writers for Siri?



At this point you’ve probably seen the job listing that Apple posted asking for writers for Siri. Though it was posted early in January and noticed by MIT Technology Review a few days back, the Internet picked yesterday to post dozens of stories about it.

I blame CES for the delay in the glut of ‘Apple hires people to fix Siri!’ stories now filling your feeds. But, in fact, that’s not what this is at all. In reality, where we live, Siri still has one of the best chatterbots of any of the major ‘voice assistant’ systems out there. The ‘lack of banter’ cited by some stories makes me laugh because, of all of Siri’s flaws, its ability to project humanity isn’t one of them.

Chatterbots are just a slice of what makes a system like Siri work, but it’s the slice responsible for those witty replies Siri gives when you ask things like ‘do you love me?’. These are queries that aren’t really commands, but that would produce nothing better than an error message without a handler in place to give a humanized response. That handler system is likely what Apple is working to expand, both to provide better dialog options and, more importantly, to help us be more understanding when Siri fails.
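For a sense of how that handler layer works, here’s a minimal sketch in Python. It illustrates the general chatterbot pattern, not Apple’s implementation; the patterns, canned replies and the parse_command stub are all hypothetical.

```python
import re

# Scripted replies for phrases that aren't commands (all hypothetical copy).
SMALL_TALK = [
    (re.compile(r"do you love me", re.I), "I respect you."),
    (re.compile(r"tell me a joke", re.I), "Two iPhones walk into a bar... I forget the rest."),
]

def parse_command(utterance):
    """Stub: a real assistant would run speech/intent recognition here.
    Returns None when the utterance isn't an actionable command."""
    return None

def respond(utterance):
    command = parse_command(utterance)
    if command is not None:
        return command.execute()           # the normal, useful path
    for pattern, reply in SMALL_TALK:      # the chatterbot layer
        if pattern.search(utterance):
            return reply
    return "Sorry, I can't help you with that."  # flat error, last resort

print(respond("do you love me?"))  # -> "I respect you."
```

The point is the ordering: the scripted personality only fires after command parsing fails, which is exactly where a bare error message would otherwise land.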

Because a chatterbot isn’t just there to wrap hard queries in human speech; it’s also there to make failure states more bearable. When Siri can’t give you what you need, or has trouble parsing a question, these responses are often vital in making us willing to ‘be ok with that’. As Siri’s logic and recognition engines expand, it’s going to need exponentially more dialog options: not only to deliver correct responses, but to ask follow-up questions for more information, or to ‘apologize’ when it can’t do what you ask of it.
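Extending the same hypothetical sketch, the failure path itself can draw from a pool of scripted apologies and clarifying questions rather than a single flat string, so repeated failures feel less robotic. Again, every line of copy here is invented for illustration.

```python
import random

# Hypothetical scripted copy; a real system would have far more variants.
APOLOGIES = [
    "I'm sorry, I didn't quite catch that.",
    "I can't do that yet, but I'm still learning.",
]
CLARIFICATIONS = [
    "Could you phrase that another way?",
    "Can you give me a little more detail?",
]

def failure_response(needs_more_info=False):
    # Pick a clarifying question when more input would help,
    # otherwise pick a varied apology instead of one fixed error string.
    pool = CLARIFICATIONS if needs_more_info else APOLOGIES
    return random.choice(pool)

print(failure_response())      # e.g. "I'm sorry, I didn't quite catch that."
print(failure_response(True))  # e.g. "Could you phrase that another way?"
```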

If you’re familiar at all with chatterbots (or chatbots), then you’ll probably be nodding and thinking to yourself that there are better examples out there. And that’s true. But they use the same principles, and the general consensus with chatbots is that personality, rather than deep logical understanding, is the key to making a great one.

The 2012 Loebner Prize (essentially a Turing test that tries to discover the most human chatbot) was awarded to Mohan Embar’s chatbot Chip Vivant. In a comment about ‘Chip’ after the contest, a judge had this to say about its success (emphasis mine):

Chip was, I think, the only Chatbot that really seemed to engage with me. ‘He’ apologised for not understanding a question. At one point Chip also suggested I might phrase a question differently so it would be more understandable to ‘him’. Chip didn’t try too hard pretending to be human but instead explained that it hoped to learn more so as to be able to answer my questions better in [the] future.

Chip made me realise that I really don’t care whether I’m talking to a human or a computer as long as the conversation is in some way rewarding or meaningful to me.

My hairdresser, for instance, often asks Siri questions that it has no ability to answer. Yet when I asked how she was liking her new iPhone, the first thing she praised was Siri. She didn’t mention that Siri seemed limited; instead, she told me that ‘Siri is such a crazy b**ch!’

Why did she attribute a personality first and foremost, rather than commenting that it seemed to be limited in capability? There are a couple of reasons.

First, Siri does do a lot of the things she asks of it, and does them well. As a hairdresser, she schedules a lot of appointments, and she has to book them all herself. Instead of writing them down in a book, she now has Siri make every one of them for her. That provides her with a very real benefit, which makes her more likely to ‘forgive’ Siri when it stumbles.

Then, when she asks Siri to do something that it can’t, it taps into the chatbot to give her a ‘sassy’ response. Ask Siri ‘am I fat’ and she says ‘I prefer not to say’. Ask ‘what should I wear’ and you get ‘what’s wrong with what you’re wearing now?’ In either case, a simple ‘I can’t help you with that’ would have sufficed as an error message.

But that wouldn’t provide that sliver of delight, the chuckle that allows us to overlook the fact that the system just errored out. Unfortunately, there are still many cases where Siri delivers a flat error message, and that is only going to get worse as its capabilities expand.

As Apple adds more verticals to Siri, it’s going to end up with more of those gaps. More places where even a human assistant might cock their head and ask you for more information. That’s why it’s key to expand Siri’s vocabulary and scripting. This isn’t about jokes, it’s about caulking over the seams with humor and personality.

Because Apple will be busy building out the learning aspects of Siri, and expanding what it’s capable of doing. But one thing it can’t do is control the humans using it. And that’s why making Siri a better conversationalist is just good for business.

If you’re interested in reading a bit more about Siri and how it will evolve, check out this article; it’s from a couple of years ago, but I think it’s still relevant: Siri, Mickey Mouse and Apple’s cult of personality.

Image Credit: Oli Scarff/Getty Images
