Researchers Can Hack Alexa, Google Home And Siri With A Laser: Here’s How


Hackers have a slew of new and creative methods up their sleeves to hijack devices, be it a computer or a smartphone. Now, researchers have discovered another way for attackers to hijack smart speakers and other devices. The method involves lasers: with their help, hackers can send commands to smart speakers like the Amazon Echo and Google Home without uttering a single spoken command, reports Wired.

The vulnerability, dubbed "Light Commands," was discovered by cybersecurity researcher Takeshi Sugawara and a group of researchers from the University of Michigan. "In our paper we demonstrate this effect, successfully using light to inject malicious commands into several voice controlled devices such as smart speakers, tablets, and phones across large distances and through glass windows," the researchers state.

They revealed how changing the intensity of a laser beam to mimic the frequency of a sound wave and pointing it at a smart speaker's microphone lets an attacker send commands. The smart speaker then interprets these laser-induced commands as normal voice commands. According to the researchers, "Light Commands" is a vulnerability of MEMS microphones that allows hackers to remotely inject inaudible and invisible commands into popular voice assistants, including Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri, using nothing but light.

The researchers also showed that this vulnerability can be used to hack not only smart speakers but also computers, smartphones, and tablets that include a microphone for voice commands. Once an attacker gains control of a voice assistant, other systems connected to it, such as smart home switches and smart garage doors, can be broken into as well. The attack works because microphones convert sound into electrical signals: by encoding an audio signal in the intensity of a light beam, attackers can trick a microphone into producing electrical signals that resemble genuine audio.
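To make the mechanism concrete, here is a minimal sketch of the amplitude-modulation idea the researchers describe: a recorded voice command modulates a laser's intensity around a steady operating point, so the light striking the microphone diaphragm reproduces the audio. The file name, bias level, and modulation depth below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.io import wavfile

# Load a recorded voice command (hypothetical file; assumes a mono WAV).
rate, audio = wavfile.read("voice_command.wav")
audio = audio.astype(np.float64)
audio /= np.max(np.abs(audio))  # normalize samples to [-1, 1]

bias = 0.5   # assumed DC operating point of the laser driver (fraction of max power)
depth = 0.4  # assumed modulation depth; kept small so intensity stays non-negative

# Classic amplitude modulation: I(t) = bias * (1 + depth * s(t)).
# Light intensity cannot go negative, so clip to the valid range.
intensity = np.clip(bias * (1.0 + depth * audio), 0.0, 1.0)

# In a real attack rig, 'intensity' would drive a DAC controlling the laser
# diode current; the microphone converts the flickering light back into an
# electrical signal that the voice assistant treats as genuine speech.
```

This is only a sketch under those assumptions; the actual hardware setup, driver electronics, and signal conditioning used in the research are more involved.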

In response to the security issue, Google and Amazon told Wired that they are looking into the researchers' findings, but neither company said whether or how it plans to prevent similar attacks in the future.
