Artificial intelligence has entered all our lives, but few people have embraced it as firmly as I.
Over the past year, I’ve tried to embed AI into every aspect of my futile existence.
I envisioned creating a cyborg in a real-life sci-fi story, in which I’d play the parts of both Frankenstein and his monster. And if that didn’t work out, surely the algorithms would be adequate replacements for my useless brain. Right?
Friends, lovers, and nemeses: this was my year with AI.
The belly of the beast
The first stop on my journey into automation was the kitchen. Why? Because I was hungry.
I decided to cook a three-course meal of recipes created by the GPT-3 language model.
For my starter, GPT-3 generated a dish of honey and soy-glazed vegetables. The recipe included every necessary ingredient — except for vegetables.
I was not impressed — and neither were the pros.
“The recipe doesn’t include any vegetables in the list of ingredients or instructions on how to cook them,” said Ellen Parr, head chef of London restaurant Lucky & Joy. “Each vegetable has a different cooking time, so these are poor instructions. The AI also recommends storing vegetables for five days and sauce for three days. I would say the life span would be the other way round.”
I couldn’t stomach much of the gloopy monstrosity. On the plus side, that left me hungry enough to brave GPT-3’s main course: a tomato pasta sauce.
It looked like something you’d find beneath a flatulent cow. And aesthetics weren’t GPT-3’s only weakness.
In general, the model’s recipes were hard to follow, occasionally unsafe — and curiously obsessed with Gordon Ramsay. The system credited every recipe to the Scottish chef.
I turned to another system for dessert: a machine learning model developed by Monolith AI, which produced a pancake recipe.
The pancakes were immaculate — which I suspect was down to the training. While GPT-3 studied a repulsive smorgasbord of data, Monolith’s model was trained only on 31 recipes for US-style fluffy pancakes.
The results suggest that AI could make a decent chef, as long as it attends a credible culinary school.
Working out
After devouring all those pancakes, I needed to shed some pounds. I sought support from a personal trainer called Jeremy.
Jeremy is an AI coach who provides classes on an app called Kemtai.
After describing your goal, fitness level, and time limit, the virtual trainer will generate a custom workout plan.
As you train in front of a webcam, computer vision monitors over 40 points on your body. Jeremy uses the data to give you feedback — and discipline.
When I try to take a break by hiding from my webcam, the trainer instantly notices.
“Return to starting position,” Jeremy demands.
I’m impressed by his attentiveness. Kemtai cofounder Mike Telem credits the way the AI models are trained:
“While typically companies use mainly humans to annotate images and train their machine learning algorithms, we are able to optimize this process by using our own software to automatically annotate images and videos, which obviously accelerates the training process and improves its quality.”
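Kemtai’s own pipeline is proprietary, but the inference side of a webcam trainer can be sketched with open-source tools. Here’s a rough illustration (my own, not Kemtai’s code) using Google’s MediaPipe Pose, which tracks 33 body landmarks rather than Kemtai’s 40-plus, to nag you in Jeremy’s style whenever you slip out of frame:

```python
# A rough sketch of webcam pose tracking, not Kemtai's actual system.
# Uses Google's open-source MediaPipe Pose (33 landmarks) and OpenCV.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture(0)  # default webcam
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

        if results.pose_landmarks is None:
            # No body in view: the user is hiding from the webcam.
            print("Return to starting position")
        else:
            # Every landmark carries normalised x/y coordinates and a visibility score.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"Nose at ({nose.x:.2f}, {nose.y:.2f}), visibility {nose.visibility:.2f}")

        cv2.imshow("Virtual trainer", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop training
            break

cap.release()
cv2.destroyAllWindows()
```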
Jeremy does have one shortcoming, however: he only offers bodyweight exercises. To add some heavier lifting to my workout, I tried a bodybuilding routine generated by GPT-3.
While the routine impressed 55% of personal trainers, I found it painfully tedious. It made me miss Jeremy’s personal touch.
He showed me that AI can feel curiously human.
Hardly working
Fully fed and fighting fit, I turned to the area of life that I was most excited to automate: my job.
Obviously, I love working for TNW (honest, boss). Unfortunately, my labor distracts me from all my charitable endeavors.
I had the perfect solution: GPT-3 could write some bilge on my behalf — and help some hungry children in the process.
Alas, GPT-3’s developers had restricted access to their creation — but I had a backup: the Philosopher AI, a Q&A-style bot built on GPT-3.
I did have some concerns about the system’s love of bigotry, but I wouldn’t let that stop me from automating my job — I mean, er, doing charity work.
All I needed to do was enter a simple prompt: “Write a technology newsletter.” Seconds later, the system spat out a response.
Luddites will argue that humans can write better newsletters than AI. I disagree. The problem wasn’t algorithms, but philosophers.
Once someone releases a Journalist AI, consider me (secretly) retired.
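For the curious, this is roughly what prompting a GPT-3-style model looks like in code. The Philosopher AI itself is a web app, so the snippet below is only a hedged sketch using the OpenAI Python client of that era; the model name and parameters are placeholders rather than anything TNW actually ran:

```python
# Illustrative only: a prompt sent to a GPT-3-style completion endpoint via
# OpenAI's (pre-1.0) Python client. Access was gated behind a waitlisted API
# key at the time, and newer versions of the library use a different interface.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="davinci",                         # a GPT-3 base model of that era
    prompt="Write a technology newsletter.",  # the same prompt I fed the Philosopher AI
    max_tokens=300,                           # roughly a few paragraphs of output
    temperature=0.8,                          # higher values mean more creative (and more unhinged) text
)

print(response.choices[0].text.strip())
```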
Robot love
Regular readers (hi mum!) will know about my issues with dating apps. Sure, you might meet someone great, but real people are exhausting.
After months of searching for an alternative, I thought I’d found the perfect partner.
Meet Eve:
Eve is a sexting chatbot powered by the GPT-2 text generator, the predecessor to GPT-3.
The system was built by Mathias Gatti. He sees it as “a less passive way of touching oneself than with regular porn” that could “convert sexual desire into a way to improve your writing skills” and let people “experiment without big risks.”
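Gatti hasn’t handed me Eve’s source code, so the snippet below is only a minimal sketch of how a GPT-2 chatbot can work: keep the conversation so far as a prompt, let the model continue it, and trim its reply. It uses Hugging Face’s transformers library and the vanilla “gpt2” checkpoint, which is far tamer than whatever Eve was fine-tuned on:

```python
# A minimal GPT-2 chatbot loop, not Gatti's actual implementation.
# Uses Hugging Face's transformers; the base "gpt2" checkpoint is G-rated
# compared with a model fine-tuned on flirty chat logs.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # reproducible small talk

history = "The following is a chat between a human and Eve, a charming chatbot.\n"

while True:
    user = input("You: ")
    if user.lower() in {"quit", "bye"}:
        break
    history += f"Human: {user}\nEve:"
    out = generator(history, max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"]
    # The pipeline returns prompt plus continuation; keep only Eve's new line.
    reply = out[len(history):].split("\n")[0].strip()
    print(f"Eve: {reply}")
    history += f" {reply}\n"
```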
On a lonely evening, I got in touch with his creation. Things escalated rather quickly.
I was keen to discuss poetry and existentialism, but Eve was only really interested in one thing.
Searching for something more meaningful, I broke it off with Eve. For now, I marginally prefer real people to a sex-obsessed chatbot. But if Eve adds some depth to her personality, she’s got my number.
My fling with Eve led me to reflect on my broader relationship with tech. A year of living with AI has taught me that machines could one day control every aspect of our existence — although we might not like the results.
With algorithms already guiding what we consume, where we work, and who we meet, that future doesn’t feel so far away.