A few weeks ago, I was trying to coordinate a meeting time with the editor of a publication.
The editor cc’ed his assistant into the email chain, asking: “Amy, could you help us find 30 minutes for a conference call?” She responded:
Hi Georgina,
Happy to find a time for you and *****.
Would Friday at 9:00 am CDT work?
He’s also available for a 30-minute call:
- Friday from 10:30 AM to 1:30 PM and from 2:30 to 5 PM
- Monday from 9 AM to 12 PM and from 12:30 to 2 PM
If these times don’t work, feel free to select another time that might work better for you.
He has asked that this meeting be a phone conference.
Amy
This email was sent on a Saturday. When I arrived at my desk Monday morning and opened my inbox, I had this message plus three follow-up messages awaiting me.
I sent a somewhat frosty response, telling Amy my availability, and adding that I didn’t respond right away because it was the weekend.
The editor and I had the call. It went great because I’m a closer. I closed HARD.
I sent the editor a follow-up email after the meeting with more details, and received the following response from Amy:
Hi Georgina,
Happy to reschedule the meeting.
Unfortunately ***** is not available this week.
Does Monday at 11 AM work?
He’s also available for a 30-minute call:
- Tuesday from 10:30 AM to 1:30 PM and from 2:30 to 5 PM
- Wednesday from 9 AM to 12 PM and from 12:30 to 2 PM
If these times don’t work, feel free to select another time that might work better for you.
Amy
We all make mistakes. I responded:
Hi Amy!
We’ve actually already had the meeting — no need to reschedule.
Best,
Georgina
What did I receive in my inbox just a few minutes later? The same fucking reschedule email again.
At this point, many of you would have probably already realized that I was dealing with an automated assistant. But not I. I continued to have a back-and-forth email correspondence with Amy that slowly escalated until every other word was written in CAPS LOCK.
I spent minutes — nay, TENS of minutes — carefully embedding my frustration and rage into the copy, making sure to repeat her name, “Amy”, so she knew I knew who was in control. (It’s me. I was in control.)
“I don’t want to RESCHEDULE, Amy.”
The problem with AI assistants — and other things, too
I don’t blame Amy, but I don’t blame myself either.
I’ve had a few bad run-ins like this before where I was tricked by technology.
Did you ever have a hilarious friend who would start their voicemail greeting with “Hello?…….. hello???? Can you hear me???” just to get you to shout into your phone?
Or that Instagram story foolery, when your friend tricks you into thinking they want to take a picture with you only to record you and then scream “it’s a VIDEO! HAHAHA!”
I may be especially gullible, but this has made me wary.
Back to Amy. AI assistants are on the rise, and I don’t like it. It’s not just me — John Richardson recently (and far more eloquently) wrote about a similar run-in with an automated assistant, Andrew, in Wired.
Now, if I had paid better attention, would I not have noticed the tiny script at the bottom of the email mentioning Amy was an automated service? Sure, but I’ve been hearing about my “selective attention” since my kindergarten report card. It’s too late to change now.
Do AI assistants have their uses? Yes, and I’m sure they’ll soon improve to the point of being completely indistinguishable from real human assistants, maybe even better than them. I already can’t tell the difference, and that is perhaps what scares me most.
First of all, AI assistants don’t have human judgment, and this could lead to privacy issues. What’s stopping Amy from automatically passing on my frustrated emails, with no context, to her boss, the editor I need to work with? The better these AI assistants get at guessing what we’re asking of them, the more of our information they are parsing and consuming. As Kaveh Waddell wrote in The Atlantic, “It’s hard to deliver convenience without sacrificing privacy and security.”
Second of all, AI assistants don’t need to disclose they are an automated service. That is up to the maker. I could have continued to email back and forth with Amy, developed a relationship, become best friends — and then what? I joke, but there is something uncannily creepy about finding out something you thought was alive is, in fact, not.
And what about Google Duplex, the AI system that will be able to make phone calls on a person’s behalf? Tristan Greene recently wrote that Duplex is “a beacon of hope for people with social anxiety (like me).” But what about people with tech anxiety (like me)? I have the technophobia of a 70-year-old who flinches every time the lights on the microwave blink, and now I need to worry about creepy emails, too.
Fucking Amy.
Liked this column? It’s part of our Big Spam daily newsletter. Subscribe down here: