Published: Sat, May 12, 2018
Science | By Joan Schultz

As voice assistants go mainstream, researchers warn of vulnerabilities

All of that takes things a step beyond what we saw a year ago, when researchers in China showed that inaudible, ultrasonic transmissions could successfully trigger popular voice assistants like Siri, Alexa, Cortana and the Google Assistant. The problem, according to researchers, is that these smart assistants will follow commands even when those commands are too high-pitched for you to hear. Researchers at the University of Illinois demonstrated that ultrasound attacks were possible from 25 feet away.

"My assumption is that the malicious people already employ people to do what I do", Nicholas Carlini, a Ph.D. student in computer security at UC Berkeley and co-author of the study, tells WRAL Tech Wire. For example, an attacker can easily point a transmitter in the general direction of a Smart Speaker and as it to unlock the door or your pesky neighbour who you hate could add a few hundred too many of something to your Amazon shopping list and the list goes on.

The microphones and software that run assistants such as Alexa and Google Now can pick up frequencies above 20kHz, which is the upper limit of the audible range for human ears. Google said that security is an ongoing focus and that its Assistant has features to mitigate undetectable audio commands.
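
The article does not describe the mechanics, but the published ultrasound research generally works by shifting a normal voice command above the audible range, for example by amplitude-modulating it onto a carrier around 25-30 kHz, which nonlinearities in microphone hardware then demodulate back into the audible band. A minimal sketch of that modulation step is below; the file names, sample rate and carrier frequency are hypothetical, and this is an illustration of the concept rather than the researchers' exact method.

```python
# Sketch: shift a recorded voice command above the audible range by
# amplitude-modulating it onto an ultrasonic carrier (~30 kHz).
# File names, sample rate and carrier frequency are hypothetical;
# assumes a mono recording.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 30_000          # above the ~20 kHz limit of human hearing
SAMPLE_RATE = 96_000         # output rate must be well above 2 * carrier frequency

rate, command = wavfile.read("ok_google_unlock.wav")    # hypothetical recording
command = command.astype(np.float64)
command /= np.max(np.abs(command))                       # normalise to [-1, 1]

# Resample the baseband command to the high output rate (simple interpolation).
t_in = np.arange(len(command)) / rate
t_out = np.arange(0, t_in[-1], 1 / SAMPLE_RATE)
baseband = np.interp(t_out, t_in, command)

# Standard AM: carrier * (1 + m * signal). A nonlinear microphone front end
# effectively squares the incoming signal, recovering the audible baseband.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
modulated = carrier * (1.0 + 0.5 * baseband)

wavfile.write("ultrasonic_command.wav", SAMPLE_RATE,
              (modulated / np.max(np.abs(modulated)) * 32767).astype(np.int16))
```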

The study says that while these commands cannot be heard by humans, they can be detected by Google Assistant, Siri and Alexa.

Apple said its HomePod smart speaker is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands. Researchers have used hidden commands like these to instruct smart devices to visit malicious sites, initiate calls, take pictures and send messages. One vulnerability, which left the Alexa assistant active even after a session had ended, was fixed by Amazon after it received the researchers' report. In the Urbana-Champaign demonstration, the researchers showed that although the commands could not yet penetrate walls, they could still control smart devices through open windows in buildings. In their paper, Carlini and Wagner claim that they were able to fool Mozilla's open-source DeepSpeech voice-to-text engine by hiding a secret, inaudible command within audio of a completely different phrase.

They also embedded other commands into music clips.
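
The paper's mechanics are not spelled out here, but attacks of this kind are typically framed as an optimisation problem: start from benign audio, then nudge the waveform with small gradient steps until the speech-to-text model transcribes the attacker's chosen phrase, while keeping the perturbation quiet enough that a listener still hears the original clip. The sketch below illustrates that loop with a tiny stand-in model rather than DeepSpeech itself; the model, target phrase and hyperparameters are illustrative assumptions, not the authors' code.

```python
# Toy sketch of an adversarial-audio loop: add a small perturbation to a benign
# waveform so a speech-to-text model outputs an attacker-chosen phrase.
# "TinyASR" is a placeholder; the real attack targeted DeepSpeech and optimised
# its CTC loss in essentially this way.
import torch
import torch.nn as nn

VOCAB = "abcdefghijklmnopqrstuvwxyz '"   # index 0 is reserved for the CTC blank
char_to_idx = {c: i + 1 for i, c in enumerate(VOCAB)}

class TinyASR(nn.Module):
    """Placeholder model mapping raw audio frames to per-frame character logits."""
    def __init__(self, frame=160, n_chars=len(VOCAB) + 1):
        super().__init__()
        self.frame = frame
        self.net = nn.Linear(frame, n_chars)
    def forward(self, wav):                                   # wav: (samples,)
        frames = wav[: len(wav) // self.frame * self.frame].reshape(-1, self.frame)
        return self.net(frames)                               # (time, n_chars)

model = TinyASR()
ctc = nn.CTCLoss(blank=0)

benign = torch.randn(16000)                                   # hypothetical 1 s clip
target = torch.tensor([char_to_idx[c] for c in "open the door"])

delta = torch.zeros_like(benign, requires_grad=True)          # the perturbation
opt = torch.optim.Adam([delta], lr=1e-2)
EPS = 0.05                                                    # cap on perturbation size

for step in range(500):
    logits = model(benign + delta)                            # (time, n_chars)
    log_probs = logits.log_softmax(-1).unsqueeze(1)           # (time, 1, n_chars)
    loss = ctc(log_probs, target.unsqueeze(0),
               torch.tensor([log_probs.size(0)]),             # input length
               torch.tensor([len(target)]))                   # target length
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-EPS, EPS)                               # keep the change small
```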

Mr Carlini said he was confident that in time he and his colleagues could mount successful adversarial attacks against any smart device system on the market. "We want to demonstrate that it's possible", he said, "and then hope that other people will say, 'O.K. this is possible, now let's try and fix it'".
