Journalism is hard. Hours of research and writing culminate in publishing a piece with little more than blind faith that it'll be well received. Most aren't. And for those that flop, the headline is typically where we start when trying to diagnose the cause of our failure.
Until recently, journalists scoffed at the idea of an AI capable of doing their work. AI could gather facts, write simple reports, and publish with a dizzying frequency human writers could never touch, but it wasn't as creative. It was, in fact, a complete failure at writing human-centric copy, the sort of prose that dances off the page, keeping readers engaged from beginning to end.
But that could be changing.
Primer, an AI company, recently built a tool capable of writing headlines that look like those a human would produce. It's not perfect, but in examples shared with Axios, the headlines it came up with were often quite good (those that didn't miss the mark entirely, that is).
Axios' Kaveh Waddell put the robot to work, asking Primer to generate headlines based on past work. The source headlines all came from previous Axios pieces, except the last one, from a piece Waddell penned for The Atlantic.
- Uncovering secret government AI
- The AI acquisitions war
- The AI sharecroppers
- The desperate search for Lebanon’s mass graves
From these, the AI came up with a few headline ideas of its own. And while the first two are pretty awful, it stuck the landing on the last two, producing headlines most of us wouldn't hesitate to publish.
- AI and surveillance
- The AI companies since 2010, carving out another front in the nonstop war
- The new “sharecroppers”
- The missing memories of Beirut
The first headline is dull and not at all clickworthy, in my opinion. That’s not to say I’m the best headline writer myself — it’s something all journalists struggle with — but most junior writers are capable of far better. Senior writers and above wouldn’t even suggest a headline like this in a brainstorming session.
The second one missed the mark entirely. It's long, confusing, and not especially compelling.
The next two, however, are legitimately good headlines. In an industry where so much is riding on the headline — it has to be compelling enough to entice a click, yet the article must deliver on the premise it implies — it’s chilling that robots are now capable of producing them in much the same way humans would, only better and faster.
To get here, Primer's AI read more than a million headlines and news articles. From there, it strung together the most promising sequences of words based on that training data. Humans then judged these AI-generated examples against the originals (those written by humans) in what Primer's director of science called a "headline Turing test."
Primer managed to beat humans more than half the time.
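Primer hasn't published the details of its model, but the basic idea of learning which words tend to follow which from a large corpus of headlines, then sampling new word sequences, can be sketched with a toy bigram Markov model. Everything below is illustrative, not Primer's actual system; the miniature "corpus" is just the four source headlines from above.

```python
import random
from collections import defaultdict

# Toy training corpus standing in for the million-plus headlines Primer used.
headlines = [
    "the ai acquisitions war",
    "the ai sharecroppers",
    "uncovering secret government ai",
    "the desperate search for lebanon's mass graves",
]

# Build bigram counts: for each word, record which words follow it,
# with <s> and </s> marking the start and end of a headline.
follows = defaultdict(list)
for h in headlines:
    words = ["<s>"] + h.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(max_len=10, seed=0):
    """Sample a new headline by chaining likely next words."""
    rng = random.Random(seed)
    word, out = "<s>", []
    while len(out) < max_len:
        word = rng.choice(follows[word])
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

print(generate())
```

A real system would use a neural language model trained on far more data, but the principle is the same: generation is a chain of "what word plausibly comes next" decisions, which is also why the output can veer from eerily human to nonsensical.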
For now, understanding and generating natural, human-centric language remains difficult for AI. Human journalists are safe, even as robot writers quickly narrow the gap. Maybe we should learn to code, after all.