This article was published on May 11, 2018

Voice assistants could be fooled by commands you can’t even hear


Image by: Alexa Developers / YouTube

Many people already consider voice assistants too invasive to let them listen in on conversations in their homes — but that’s not the only thing to worry about. Researchers from the University of California, Berkeley, want you to know that the assistants might also be vulnerable to attacks you’ll never hear coming.

In a new paper (PDF), Nicholas Carlini and David Wagner describe a method to imperceptibly modify an audio file so as to deliver a secret command; the embedded instruction is inaudible to the human ear, so there’s no easy way of telling when Alexa might be asked by a hacker to add an item to your Amazon shopping cart, or worse.

To demonstrate this, Carlini hid the message, “OK Google, browse to evil.com,” in a seemingly innocuous sentence, as well as in a short clip of Verdi’s ‘Requiem,’ which fooled Mozilla’s open-source DeepSpeech transcription software.
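At a high level, attacks like this are optimization problems: search for a perturbation that is small enough to go unnoticed by a listener but still pushes the transcription system toward the attacker’s chosen output. Here’s a minimal, deterministic sketch of that idea in Python — a toy two-label scorer stands in for a real speech model, and is not Carlini and Wagner’s actual method, which targets DeepSpeech’s full transcription loss:

```python
import numpy as np

# Toy "transcriber": linear scores for two labels, 0 = "benign", 1 = "command".
W = np.eye(2)

def predict(x):
    return int(np.argmax(W @ x))

audio = np.array([1.0, 0.0])  # toy waveform; the model labels it 0 ("benign")
target = 1                    # the attacker's desired label ("command")

# Gradient ascent on (score[target] - score[other]) w.r.t. the input, while
# clipping the perturbation so it stays small relative to the signal.
delta = np.zeros(2)
for _ in range(20):
    grad = W[target] - W[1 - target]   # direction that favors the target label
    delta += 0.1 * grad                # step toward the target label
    delta = np.clip(delta, -0.6, 0.6)  # bound: keep the change "quiet"

assert predict(audio) == 0             # original audio is labeled benign
assert predict(audio + delta) == target  # perturbed audio flips the label
```

The real attack replaces the toy scorer with a neural speech-to-text model and the clipping bound with a measure of audibility, but the shape of the search — nudge the input along the gradient toward a target output, under a smallness constraint — is the same.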

Speaking to The New York Times, Carlini – who, in 2016, demonstrated how he and his team could embed commands in white noise played along with other audio to get voice-activated devices to do things like turn on airplane mode – said that while such attacks haven’t yet been reported, it’s possible that “malicious people already employ people to do what I do.”


Thanks for the cheerful thought, Nicholas.

There have been other (unfortunately successful) attempts to fool voice assistants, and there aren’t many ways to stop such audio from being broadcast at people’s ‘smart’ devices. One method, called DolphinAttack, even muted the target phone before issuing inaudible commands, so the owner wouldn’t hear the device’s responses.

We need hardware makers and AI developers to tackle such subliminal messages, particularly for devices that don’t have screens to give users visual feedback and warnings about having received secret commands. In demonstrating what’s possible with this method, Carlini’s goal is to encourage companies to secure their products and services so users are protected from inaudible attacks.

Let’s hope Google, Amazon, Apple, and Microsoft are listening.

