Researchers have used programmable low-powered lasers to remotely control voice-activated personal assistants such as Apple's Siri, Amazon's Alexa, and Google Assistant from as far as 360 feet away. The attacks exploit a vulnerability in microphones that use micro-electro-mechanical systems (MEMS): the researchers discovered that these microphones respond to modulated laser light just as they respond to sound.
The attack method, called Light Commands, has so far been tested only on the three big-name players in the personal assistant space, but the researchers believe it will affect any microphone that uses MEMS (presumably including devices with Microsoft Cortana, Baidu DuerOS, or other digital assistants).
Although the attack is essentially a proof of concept at this stage, and has significant limitations (such as requiring a direct line of sight to the device), the researchers admit that they don't fully understand why the exploit works, which could open the door for others to find ways to make it even more effective.
Voice-activated systems vary in how much control they allow over a device without first authenticating the user. Apple's track record with Siri is mostly decent in this regard: most critical voice-activated functions require a passcode or biometric verification (i.e. Touch ID or Face ID) before the request is completed.
In some circumstances, the Light Commands attack can be used to brute-force a device's passcode, so the usual security advice applies: choose a complex, hard-to-guess passcode, and set your device to lock after a certain number of incorrect attempts. Even that basic level of security should protect you against this new and admittedly sci-fi-sounding method of remote device hacking.
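To see why an attempt limit blunts a brute-force attack, here is a minimal sketch of a lockout policy in Python. This is purely illustrative: the `PasscodeGuard` class, its method names, and the attempt limit are hypothetical, not how any real device implements its lock screen.

```python
# Hypothetical sketch of a lockout policy: after max_attempts consecutive
# wrong passcodes, the device locks and further guesses are rejected.

class PasscodeGuard:
    def __init__(self, passcode: str, max_attempts: int = 10):
        self._passcode = passcode
        self._max_attempts = max_attempts
        self._failed = 0
        self.locked = False

    def try_unlock(self, guess: str) -> bool:
        if self.locked:
            return False  # locked devices ignore all further guesses
        if guess == self._passcode:
            self._failed = 0  # a successful unlock resets the counter
            return True
        self._failed += 1
        if self._failed >= self._max_attempts:
            self.locked = True  # this is what cuts a brute-force attack short
        return False


guard = PasscodeGuard("4812", max_attempts=3)
print(guard.try_unlock("0000"))  # False
print(guard.try_unlock("1111"))  # False
print(guard.try_unlock("2222"))  # False (third failure locks the device)
print(guard.try_unlock("4812"))  # False (correct code, but already locked)
```

With only three attempts allowed, an attacker issuing guesses one by one, by laser or otherwise, is stopped long before exhausting the 10,000 possible four-digit codes.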
How can I learn more?
You can read the researchers' Light Commands paper for the full technical details. Also subscribe to our e-mail newsletter and keep an eye here on The Mac Security Blog for updates.