
This article was published on April 7, 2022

AI sucks at telling jokes — but it’s great at analyzing them

Google's PaLM is an impressive beast


Image by: Monica Silvestre (edited)

Have you heard the one about the algorithm that tells hilarious jokes? Me neither — but I have seen AI gags bomb like US presidents.

Stand-up robots, improvised joke generators, Q&A pun systems, and android bartenders have all failed to make me laugh.

“You can get a rum and coke anywhere,” Brian Connors, a hospitality professor, told the Miami Herald, “but how often are you going to get it from a robot that tells bad jokes?”

More often than you may think, unfortunately.

All of these comedians have something in common: they’re not funny.

I don’t blame their creators. Computational humor is so hard it’s been described as the holy grail of AI.

While algorithms are excellent at following formulas, they lack the reasoning, linguistic abilities, and cultural references to make effective jokes.

Google must have read the reviews, because the firm has moved algorithms from generating jokes to analyzing them.

The company this week unveiled an AI system with a vast range of linguistic skills — including explaining gags.

Dubbed PaLM (Pathways Language Model), the text generator has a gob-smacking 540 billion parameters — more than three times as many as GPT-3.

Unlike the OpenAI system, PaLM was trained on a mix of English and multilingual datasets drawn from “high-quality” (their words, not mine) sources such as Wikipedia, books, web discussions, and GitHub code.

PaLM outperformed average human performance on the reasoning-focused BIG-bench benchmark and has proved adept at linguistic tasks and code generation.

“Remarkably, PaLM can even generate explicit explanations for scenarios that require a complex combination of multi-step logical inference, world knowledge, and deep language understanding,” the model’s creators explained in a blog post.

“For example, it can provide high-quality explanations for novel jokes not found on the web.”

Google showed off this skill in a new research paper:

“All evaluated jokes were written by the authors. Of course, these jokes do share abstract premises with existing jokes (wordplay, reliability, humorous analogies, reversal-of-expectations).”

PaLM uses a Transformer model architecture. Credit: Google

To ensure that they were novel and unique, all the jokes were created by Google scientists — which explains why they’re not funny.

The researchers then entered the prompt “Explain this joke” alongside an indication of when the joke starts. This proved sufficient for the PaLM model to provide pretty impressive explanations.

PaLM explains an original joke with two-shot prompts. Credit: Google
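For readers curious what a “two-shot” setup actually looks like, here is a rough sketch of the prompting pattern: two worked joke-and-explanation pairs, followed by the new joke with its explanation left blank for the model to complete. The helper function, the example jokes, and their explanations below are illustrative stand-ins rather than material from Google’s paper, and PaLM itself is not a publicly callable API.

```python
# A minimal sketch of a two-shot "Explain this joke" prompt, assuming a
# typical few-shot layout: worked examples first, the target joke last.
# The jokes and explanations here are hypothetical placeholders.

def build_joke_prompt(joke, examples):
    """Assemble a few-shot prompt ending where the model should continue."""
    parts = ["Explain this joke."]
    for example_joke, explanation in examples:
        parts.append(f"Joke: {example_joke}")
        parts.append(f"Explanation: {explanation}")
    # Mark where the new joke starts and leave its explanation blank.
    parts.append(f"Joke: {joke}")
    parts.append("Explanation:")
    return "\n\n".join(parts)


# Two worked examples play the role of the "two shots".
EXAMPLES = [
    ("I tried to catch fog yesterday. Mist.",
     "A pun: 'Mist' sounds like 'missed', so the word for fog doubles as "
     "an admission of failure."),
    ("Why don't scientists trust atoms? They make up everything.",
     "'Make up' has two meanings: atoms compose all matter, but making "
     "something up also means lying."),
]

if __name__ == "__main__":
    prompt = build_joke_prompt(
        "I told my computer a joke about UDP, but I'm not sure it got it.",
        EXAMPLES,
    )
    print(prompt)  # This text would then be sent to the language model.
```

The worked examples simply show the model the format it should imitate; according to the researchers, a couple of demonstrations were enough for PaLM to produce plausible explanations of jokes it had never seen.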

This ability to understand something as linguistically complex as humor is another step toward human-level intelligence. If PaLM learns to create jokes as well, we might create AGI sooner than expected.
