

A new hack uses lasers to send inaudible commands to your Amazon Echo

A newly discovered photoacoustic flaw leaves voice assistants such as Siri, Alexa, and Google Assistant vulnerable to attacks that use lasers to inject inaudible commands into smartphones and smart speakers, surreptitiously causing them to unlock doors, shop on e-commerce websites, and even start vehicles.

The attacks, dubbed Light Commands, were disclosed by researchers from the Tokyo-based University of Electro-Communications and the University of Michigan.

The attack works by injecting acoustic signals into microphones using laser light from as far away as 110 meters (360 feet). It exploits a quirk of MEMS (micro-electro-mechanical systems) microphones, which unintentionally respond to light just as they would to sound.

“By modulating an electrical signal in the intensity of a light beam, attackers can trick microphones into producing electrical signals as if they are receiving genuine audio,” the researchers outlined in a paper.
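In practice, that means riding the audio waveform on top of a constant light level, which is classic amplitude modulation. Here is a minimal sketch of the idea in Python (the sample rate, bias, and modulation depth are illustrative values, not figures from the paper; the real attack drives the laser with analog electronics):

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz; an illustrative audio sample rate

def am_modulate(audio, dc_bias=0.5, depth=0.4):
    """Ride an audio waveform (normalized to [-1, 1]) on top of a
    constant light level, so the laser's intensity envelope carries
    the signal the microphone will 'hear'."""
    audio = np.clip(audio, -1.0, 1.0)
    return dc_bias + depth * audio  # stays within [0.1, 0.9] here

# Example: encode a 440 Hz tone as a light-intensity waveform
t = np.arange(0, 1.0, 1 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * 440 * t)
intensity = am_modulate(tone)
assert intensity.min() >= 0.0  # a laser cannot emit negative power
```

The one physical constraint is that light intensity, unlike sound pressure, can't swing negative, which is why the audio is offset around a constant bias.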

It appears that practically every voice assistant, including Alexa, Google Assistant, Siri, and Facebook's Portal, may be vulnerable to this attack vector. However, there are no indications so far that it has been maliciously exploited in the wild.

It’s a fiendishly clever attack, but it’s probably not so easy for a potential attacker to pull off.

For one, the attack requires the laser beam to have a direct line of sight to the target device. Then there are the built-in protections to consider: smartphone assistants like Siri require you to unlock your phone, or listen for a “trusted voice,” before they run your commands.

Lastly, it also requires some technical expertise to rig up specialized equipment (a $14 laser pointer, a $340 laser driver, a $28 sound amplifier, and a $200 telephoto lens, roughly $580 in all) capable of modulating the amplitude of the laser.

Still, the attacks highlight the dangers of voice-controlled systems that can be activated remotely without any form of authentication, such as a password. Even when internet-connected devices are protected by PINs, the researchers believe four-digit PIN codes can be brute-forced once the laser beam grants access to the digital assistant.
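To see why a four-digit PIN offers so little protection here, note that there are only 10,000 possibilities (0000 through 9999); an attacker who can inject one spoken attempt every few seconds could exhaust them in a matter of hours. A toy enumeration, where speak_via_laser is a hypothetical stand-in for the light-injection channel:

```python
from itertools import product

def speak_via_laser(phrase: str) -> bool:
    """Hypothetical stand-in for injecting a spoken command over
    the light channel; returns True if the device accepts it."""
    raise NotImplementedError

def brute_force_pin() -> str | None:
    # Only 10**4 = 10,000 candidates: "0000" through "9999"
    for digits in product("0123456789", repeat=4):
        pin = "".join(digits)
        if speak_via_laser(f"unlock with PIN {pin}"):
            return pin
    return None
```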

More troublingly, these light commands can be issued across buildings and even through closed glass windows.

MEMS microphones contain a small built-in plate called the diaphragm. When sound waves, or in this case light, hit the diaphragm, its movement is translated into an electrical signal, which is then decoded into the actual commands. What the researchers found was a way to encode sound by adjusting the intensity of the laser beam, causing the microphone to produce electrical signals just as it would for sound.
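In signal terms, the microphone's output simply tracks the intensity envelope of whatever energy reaches the diaphragm; strip away the constant (DC) component and what remains is the injected audio. A toy model of that receive side, continuing the sketch above (the linearity and the responsivity value are assumptions for illustration):

```python
import numpy as np

def mems_response(light_intensity, responsivity=1.0):
    """Toy model: the diaphragm's electrical output is taken to be
    proportional to the incident light intensity."""
    return responsivity * np.asarray(light_intensity)

def recover_audio(mic_signal):
    # Subtracting the mean removes the constant bias, leaving the
    # modulating waveform for the assistant to transcribe.
    return mic_signal - mic_signal.mean()
```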

An attacker could therefore leverage the aforementioned setup to hijack the voice assistant and remotely issue commands to Alexa, Siri, Portal, or Google Assistant without the victim’s involvement. To make the attack stealthier, a hacker could use an infrared laser, which is invisible to the naked eye.

Attributing the cause to a “semantic gap between the physics and specifications of MEMS,” the researchers said they’re still working to determine exactly what causes the microphones to respond to light.

The researchers tested the attack against a variety of devices that use voice assistants, including the Google Nest Cam IQ, Amazon Echo, Facebook Portal, iPhone XR, Samsung Galaxy S9, and Google Pixel 2. But they caution that any system that uses MEMS microphones and acts on incoming audio without additional user confirmation might be vulnerable.

The researchers suggest that smart speaker vendors can mitigate such unauthorized commands by adding a second layer of authentication, acquiring audio input from multiple microphones, or even implementing a cover that physically blocks the light from hitting the mics.
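The multiple-microphone suggestion works because a focused laser spot illuminates only one microphone, whereas genuine speech arrives at all of them with comparable energy. A hedged sketch of such a consistency check (the correlation test and threshold are illustrative, not taken from the paper):

```python
import numpy as np

def looks_like_real_sound(channels, min_correlation=0.5):
    """Reject input that shows up on only one microphone.
    `channels` is a list of equal-length arrays, one per mic.
    Airborne speech should correlate across channels; a laser
    injected into a single mic will not."""
    for i in range(len(channels)):
        for j in range(i + 1, len(channels)):
            corr = np.corrcoef(channels[i], channels[j])[0, 1]
            if np.isnan(corr) or corr < min_correlation:
                return False
    return True
```

A real implementation would likely compare band-limited energy and arrival timing rather than raw correlation, but the principle is the same: a command visible to only one channel is suspect.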

While it’s some consolation that these light injection attacks haven’t yet been exploited, the discovery, despite its limitations, presents a new attack vector that will require device makers to erect new security defences around IoT devices, which are increasingly becoming the entry point to the smart home.
