Fiona J McEvoy
Midway through a podcast, a high-energy commercial chirps out all the advantages of using a particular learning system for languages. They are familiar: Babbel can get you conversing in just three weeks, it teaches you phrases you’ll actually use in the real world, lessons are designed to help you remember.
Then a less familiar selling point:
“Other learning apps use AI for their lesson plans, but Babbel lessons were created by over 100 language experts.”
The company’s magazine website explains further:
“Babbel’s lessons aren’t the result of an algorithm or computer program; they’re designed by real humans. Babbel’s Didactics Team, made up of more than 100 linguists and language experts, puts a lot of time and care into creating lessons that will actually work for you.”
This framing of human input as a kind of quality standard isn’t new — in this case, the online article was posted in 2017 — but it is becoming increasingly prevalent.
A quick sweep surfaces a recruitment startup that heavily promotes the fact that it doesn’t use AI to make predictions, a community of therapists that promises it won’t use algorithms to match clients with professionals, and a social network that treats its rejection of algorithms and ads as its USP.
Additionally, there are too-many-to-mention dating websites, subscription services, ad sales firms, financial managers, and staffing agencies that state outright on their websites that they do not use AI or algorithms to do their work. The message is very clear: these companies want to emphasize that they don’t palm off the most artful part of their business to unthinking systems. Instead, they employ experts who carefully consider the task at hand based on their experience and, perhaps, intuition.
“Made by humans” denotes a quality product.
In a world full of sparkle-toothed AI salesmen peddling their wares at (virtual) conferences, shouting about data being “the new oil” and offering to open the magic gates for businesses looking to dig themselves out of a pandemic pit, this is an interesting development. There is room in this world — perhaps ample room — for artisanal humans and their semantic knowledge.
Is this indicative of a kind of backlash? A shift toward privileging connoisseurs and subject-matter experts? Well, the short answer is “unlikely.” Artificial intelligence is still a solid way for businesses in all industries to find efficiencies, streamline procedures, and generally speed things up. Indeed, it’s mostly excellent at this. What it may demonstrate is that there really is space for both organic and silicon brains, and that — far from being outmoded — the stock of soft skills and experience is actually on the rise.
Data scientists and computer engineers may not inherit the earth after all.
We’re often told that for all the work AI takes from humans, it will also create a vacuum to be filled by new roles and services predicated on human talent. That these AI-less companies are shouting from the rooftops about their human crews suggests that we biological workers have good reason to be optimistic.
This article was originally published on You The Data by Fiona J McEvoy. She’s a tech ethics researcher and the founder of YouTheData.com.