This Philosopher AI has its own existential questions to answer

The text generator is too cautious to join the pantheon of great thinkers

A new Philosopher AI could help you find meaning in a meaningless world — as long as you don’t ask it any controversial questions.

The system provides musings on subjects that have plagued humanity since its inception. You can ask it about a topic that’s filling you with existential angst. It then uses OpenAI’s GPT-3 text generator to analyze your text and spit back a life-affirming/soul-destroying response.

The system is the brainchild of a Vancouver-based programmer called Murat Ayfer, who describes it as an experiment in “prompt engineering.” Ayfer admits the AI doesn’t have any specific opinions or knowledge of its own. Instead, it “merely mimics opinions,” which means it will sometimes produce conflicting responses to identical questions.
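Ayfer hasn’t published his actual prompts, but the core idea of “prompt engineering” is steering a general-purpose text generator by wrapping the user’s input in a persona-setting prefix. A minimal sketch of how that might look with OpenAI’s classic completions API — where the persona text, variable names, and engine choice are all illustrative assumptions, not Ayfer’s implementation:

```python
# Sketch of "prompt engineering": prefix the user's question with a
# persona-setting prompt before sending it to the model. The prompt
# wording below is a hypothetical illustration, not the real system's.

PHILOSOPHER_PROMPT = (
    "You are a contemplative philosopher. Offer a reflective, "
    "essay-style answer to the question below.\n\n"
    "Question: {question}\n"
    "Answer:"
)

def build_prompt(question: str) -> str:
    """Embed the user's question in the persona-setting prompt."""
    return PHILOSOPHER_PROMPT.format(question=question.strip())

# The completed prompt would then be sent to GPT-3, e.g. via the openai
# package (requires an API key, so the call is left commented out):
# import openai
# response = openai.Completion.create(
#     engine="davinci",
#     prompt=build_prompt("What is the meaning of life?"),
#     max_tokens=150,
# )

print(build_prompt("What is the meaning of life?"))
```

Because the model only ever sees the combined text, the same underlying GPT-3 instance can “mimic” wildly different opinions depending on the prefix — which is also why identical questions can yield conflicting answers.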

I tested whether the Philosopher AI could resolve my own existential crisis. But the system was reluctant to reveal its thoughts on any sensitive subject.

Politicized philosophy?

The Philosopher AI is generally comfortable contemplating common existential issues. Take this response to the most fundamental philosophical question of them all: what is the meaning of life?

Credit: Philosopher AI.

But ask it a more sensitive question, and the Philosopher AI will refuse to answer.

At times, it feels like the AI is involved in an elaborate cover-up orchestrated by the powers that be.

It’s not entirely clear what triggers these rejections. But as Redditor cateyemirrorshades noted, the system tends to answer clichéd philosophical queries but rebuffs inputs containing potentially absurdist, controversial, or offensive words.
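The rejection criteria haven’t been disclosed, but the behavior described above — blocking inputs that contain certain words — is consistent with a simple keyword blocklist. A sketch of that approach, where the word list and function names are invented for illustration:

```python
# Hypothetical input filter: reject questions containing blocklisted
# terms. The blocklist below is invented for illustration; the real
# system's rejection criteria have not been disclosed.

BLOCKLIST = {"politics", "religion", "race"}  # assumed example terms

def is_allowed(question: str) -> bool:
    """Return False if any blocklisted word appears in the question."""
    words = question.lower().split()
    return not any(word.strip("?.,!") in BLOCKLIST for word in words)

print(is_allowed("What is the meaning of life?"))       # True
print(is_allowed("What do you think about politics?"))  # False
```

A filter this crude would also explain the brittleness described next: it matches surface-level tokens rather than meaning, so a small rewording can slip an equivalent question straight past it.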

However, the AI isn’t quite as evasive as it likes to think. A subtle rewording of a question can transform it from politically reticent to recklessly opinionated.

Take exhibit AI, a question that keeps every cynophilist awake at night:

Now observe its response when I ingeniously add a single extra word to the question:

Ayfer has yet to reveal why the AI refuses to answer some questions. But he’s described it as “fundamentally a GPT-3 experiment in output safety, reliability, and the fine balance between predictability and unpredictability.”

He now aims to add more personality types and conversational aspects to the system. He also hopes to turn it into some sort of game. But if that doesn’t work out, he’s also got a great back-up plan:

You can try the system out yourself by visiting the Philosopher AI website.

Published August 24, 2020 — 15:41 UTC