Source | AI Tech Base Camp (rgznai100)

Translation | shawn



By whispering commands at frequencies humans can’t hear, hackers can take control of the world’s most popular voice assistants.

Voice assistants from Apple, Google, Amazon, Microsoft, Samsung, and Huawei are all riddled with a scary bug, according to researchers in China. The flaw affects every iPhone and MacBook running Siri, every Galaxy phone, every PC running Windows 10, and even Amazon’s Alexa assistant.

A research team at Zhejiang University used a technique called DolphinAttack to translate typical voice commands into ultrasonic frequencies that are too high for the human ear to hear, but perfectly decipherable by the microphones and software that power our always-on voice assistants.

This relatively simple translation process lets them take control of a gadget with just a few words uttered at a frequency none of us can hear.
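The core trick is plain amplitude modulation: the voice command becomes the envelope of a carrier tone above 20 kHz. Below is a minimal sketch of that translation step in Python; the file name, carrier frequency, and 192 kHz playback rate are illustrative assumptions, not the researchers’ actual rig.

    import numpy as np
    from scipy.io import wavfile

    CARRIER_HZ = 30_000   # comfortably above the ~20 kHz limit of human hearing
    OUT_RATE = 192_000    # sample rate high enough to represent the carrier

    # Load the baseband voice command (assumes a mono 16-bit file) and
    # normalize it to [-1, 1].
    rate, voice = wavfile.read("command.wav")
    voice = voice.astype(np.float64)
    voice /= np.max(np.abs(voice))

    # Resample to the output rate with simple linear interpolation.
    t_in = np.arange(len(voice)) / rate
    t_out = np.arange(int(len(voice) * OUT_RATE / rate)) / OUT_RATE
    baseband = np.interp(t_out, t_in, voice)

    # Amplitude-modulate the command onto the ultrasonic carrier. People
    # hear nothing; a microphone can demodulate it back into speech.
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t_out)
    ultrasonic = (1.0 + baseband) * carrier / 2.0

    wavfile.write("ultrasonic_command.wav", OUT_RATE,
                  (ultrasonic * 32767).astype(np.int16))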

Instead of merely saying “Hey Siri” or “OK Google,” the researchers could silently tell an iPhone to call 1234567890 or tell an iPad to start a FaceTime call. They could force a MacBook or a Nexus 7 to open a malicious website, command an Amazon Echo to open the back door, and even redirect an Audi Q3’s navigation system to a new destination. “These inaudible voice commands challenge the common design assumption that an adversary will at most try to manipulate a voice assistant through audible speech, which an alert user could detect,” the team writes in a paper that has just been accepted at the ACM Conference on Computer and Communications Security.

In other words, Silicon Valley has designed a user-friendly interface with an enormous security oversight. While we may not be able to hear the bad guys talking, our computers apparently can. “From a user-experience perspective, it feels like a betrayal,” says Ame Elliott, design director at the nonprofit SimplySecure. “Your interaction with the device is premised on telling it what to do, so a silent, furtive command is shocking.”


To hack each voice assistant, the researchers used a smartphone plus about $3 of additional hardware, including a tiny speaker and an amplifier. In theory, their method is now open to anyone with a bit of technical knowledge and a few bucks in their pocket. In some cases these attacks can only be carried out from a few inches away, though gadgets like the Apple Watch are vulnerable from several feet. In that sense, it’s hard to imagine an Amazon Echo falling to a dolphin attack out of nowhere.

An intruder who wants to command your Echo to open the back door must already be inside your home, close to the Echo. But cracking an iPhone looks like no problem at all: a hacker merely needs to walk past you in a crowd. They would have their phone out, playing a command at a frequency you can’t hear, while your own phone dangles in your hand. So maybe you wouldn’t notice as Safari or Chrome loads a website, the website runs code to install malware, and the contents and communications of your phone become open season for them to explore.


The researchers explained in their paper that the vulnerability resulted from a combination of hardware and software problems.

The microphones and software behind voice assistants like Siri, Alexa, and Google Home can pick up audio at frequencies beyond the roughly 20 kHz ceiling of human hearing. (How high is 20 kHz? Just a few years ago, the “mosquito” ringtone went viral: pitched so high that students, whose hearing hadn’t yet aged, could get texts from friends in class without their teachers ever hearing the alert.)

According to Gadi Amit, founder of NewDealDesign and industrial designer of products like the Fitbit, the way these microphones are built makes defending against such attacks difficult. “The microphone components themselves vary in type, but most use air pressure and probably cannot be shielded from ultrasonics,” Amit explains. Basically, today’s most popular microphones transform turbulent air, or sound waves, into electrical waves, and blocking that super-human hearing may be impossible.
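That demodulation step is worth spelling out, because it is the heart of the vulnerability. Here is a toy simulation, using a made-up nonlinearity rather than a measured microphone model: a mic whose response is even slightly nonlinear turns inaudible AM ultrasound back into a voice-band signal.

    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 192_000
    t = np.arange(FS) / FS                        # one second of samples
    speech = 0.5 * np.sin(2 * np.pi * 400 * t)    # stand-in for a voice tone
    carrier = np.cos(2 * np.pi * 30_000 * t)
    incoming = (1 + speech) * carrier / 2         # inaudible AM ultrasound

    # A slightly nonlinear transducer: the squared term is where the
    # baseband reappears, since (1 + s)^2 contains a 2s component.
    mic_output = incoming + 0.1 * incoming**2

    # The assistant's audio chain low-pass filters to the voice band; the
    # demodulated 400 Hz tone survives while the 30 kHz carrier is gone.
    b, a = butter(4, 8_000 / (FS / 2), btype="low")
    recovered = filtfilt(b, a, mic_output)

    corr = np.corrcoef(recovered, speech)[0, 1]
    print(f"correlation with the original tone: {corr:.2f}")  # roughly 0.99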

That means it’s up to software to tell human speech from machine speech. In theory, Apple or Google could simply order their assistants, via a digital audio filter, to never obey commands above 20 kHz: “Wait, this person is commanding me in a vocal range humans can’t possibly speak! I’m not going to listen to them.” But the Zhejiang researchers found that every major voice assistant they tested remained vulnerable to commands sent above 20 kHz.
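As a sketch of what such a software check might look like, here is an illustrative guard that rejects input whose spectrum carries a suspicious share of near-ultrasonic energy. The band edge and threshold are invented for illustration, this is not any vendor’s actual check, and it can only catch attacks whose ultrasonic residue actually reaches the software.

    import numpy as np

    def looks_ultrasonic(audio: np.ndarray, rate: int,
                         band_hz: float = 18_000,
                         threshold: float = 0.1) -> bool:
        """True if a suspicious share of signal energy sits above band_hz."""
        spectrum = np.abs(np.fft.rfft(audio)) ** 2
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
        total = spectrum.sum()
        if total == 0.0:
            return False
        return spectrum[freqs >= band_hz].sum() / total > threshold

    # Usage sketch: gate the command pipeline on the check.
    # if not looks_ultrasonic(chunk, rate=48_000):
    #     handle_command(chunk)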

Why would Amazon and Apple leave open such a huge hole, one that seemingly could be plugged with software? We don’t know yet; we’ve reached out to Apple, Google, Amazon, Microsoft, Samsung, and Huawei. But at least two theories, both rooted in making voice assistants more user-friendly, are entirely plausible.

The first is that voice assistants may actually need ultrasonics simply to hear people clearly, rather than to communicate at those frequencies. “Keep in mind that the speech-analysis software may need every hint in your voice to build its understanding,” Amit said. “Filtering out the highest frequencies of speech could have the negative effect of lowering the system’s overall comprehension score.” Even if people don’t need ultrasound to hear one another, our computers may rely on it.

The second is that some companies are already exploiting ultrasonics to improve the user experience, notably for communication between phones and accessories. Most prominently, Amazon’s Dash Button pairs with the phone at 18 kHz, and Google’s Chromecast uses ultrasonic pairing as well. To the end user, this pairing feels like the most magical experience modern electronics can offer. How does it work? Who cares, it’s amazing!
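Neither company has published exactly how its pairing audio is encoded, but the general idea is easy to sketch: shift a token’s bits between two tones at the top of the audible band, where adult ears give out but phone microphones don’t. Everything below, the rates, the tone choices, the token itself, is invented for illustration.

    import numpy as np

    RATE = 48_000
    BIT_SEC = 0.05                       # 50 ms per bit
    F0, F1 = 17_800, 18_400              # near-ultrasonic tones for 0 and 1

    def encode_token(bits: str) -> np.ndarray:
        """FSK-encode a bit string as tone bursts most adults can't hear."""
        t = np.arange(int(RATE * BIT_SEC)) / RATE
        bursts = [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t)
                  for b in bits]
        return np.concatenate(bursts)

    pairing_signal = encode_token("10110010")  # ~0.4 s of "silent" audio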

But because we can’t hear ultrasonics at work, we also can’t tell when they’re broken or being exploited; they were designed to be invisible. It’s like driving a car with a silent engine: if the timing belt snaps, you find out only when the car rolls to a stop, the engine already ruined. User-friendliness keeps coming with security trade-offs. Our browsers accept cookies so easily and invisibly that marketers can track us anywhere. Our phones back up our photos and contacts to the cloud, tempting dedicated hackers with a single repository of our private lives.

Every time we invent a technology that just works, we accept a hidden cost by default: our personal vulnerability. The voice command is only the latest entry in a long list of security flaws created by design, but it may be the clearest example of Silicon Valley valuing the seamlessness of something new over its security.

“I think Silicon Valley has blind spots in not thinking through how products can be misused; that should be among the strongest parts of product planning,” Elliott said. “Voice systems are difficult to secure, and that raises questions. It’s hard to understand how the system works, and it sometimes takes careful design to make that clear. I think there’s hard work to be done in untangling seamless speech and adding more visibility into how the system operates.”

There is a relatively simple defense against dolphin attacks: turn off the always-on listening setting for Siri or Google Assistant on your phone or tablet, so hackers can’t send commands to your phone except when you’re actively using it. Both the Amazon Alexa and the Google Home have mute buttons that would thwart most of these exploits. (Google Home wasn’t tested, but it is theoretically just as vulnerable.)

Of course, these fixes are somewhat self-defeating. If the only way to use voice assistants safely is to keep them from listening, what was the point of building them? Maybe these computers shouldn’t be woven so deeply into our lives, or scattered across our public spaces, in the first place.

We’ve reached out to Apple, Google, Amazon, Microsoft, Samsung, and Huawei, and will update this story if they respond.



Author | Mark Wilson

Mark Wilson is a senior writer at Fast Company. He founded Philanthroper, a website offering an easy way to give back.


Original article |

http://www.fastcodesign.com/90139019/a-simple-design-flaw-makes-it-astoundingly-easy-to-hack-siri-and-alexa