Researchers can now send secret audio instructions undetectable to the human ear to Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, according to the New York Times.
Over the last two years, the researchers have figured out how to activate these devices to dial phone numbers and open websites, raising worries that malicious actors could soon use similar techniques to unlock doors to homes, take money out of bank accounts, or simply buy products online. For watchers of Josie and the Pussycats, it could spark concern about subliminal messaging, as well.
In 2016, groups of research students at the University of California, Berkeley, and Georgetown University showed they could hide commands in white noise played over speakers and in YouTube videos, tricking smart devices into turning on airplane mode or opening a website. Now, the newspaper reports, Berkeley researchers have published a paper saying they can successfully embed commands into recordings of music, so while you listen to your favorite new single, Alexa hears an instruction to purchase something from Amazon.
“We wanted to see if we could make it even more stealthy,” Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, told the Times.
Meanwhile, researchers at Princeton University and China’s Zhejiang University in 2017 demonstrated that voice-recognition systems could be activated using frequencies inaudible to the human ear, a technique dubbed the “DolphinAttack.” The attack first mutes the phone so the owner can’t hear what’s going on, then instructs the device to visit malicious websites, initiate phone calls, take a picture, or send text messages. This year, another group of researchers successfully sent voice commands embedded in songs.
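The core trick behind ultrasonic attacks like DolphinAttack is amplitude modulation: a voice command is shifted onto a carrier above human hearing, and the nonlinearity of a device’s microphone demodulates it back into the audible band. The sketch below is a minimal, hypothetical illustration of that modulation step in NumPy (the sample rate, carrier frequency, and stand-in “command” signal are assumptions for demonstration, not values from the research):

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent ultrasound (assumption)
CARRIER_HZ = 25_000  # carrier above typical human hearing (~20 kHz)

def am_modulate(command_audio: np.ndarray, fs: int = FS,
                carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice command onto an ultrasonic carrier.

    DolphinAttack-style attacks rely on microphone nonlinearity to demodulate
    the AM signal back into the audible range inside the device, even though
    the transmitted sound is inaudible to a human listener.
    """
    t = np.arange(len(command_audio)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Normalize the command and add a DC offset so the envelope stays positive.
    cmd = command_audio / (np.max(np.abs(command_audio)) + 1e-12)
    return (1.0 + cmd) * carrier * 0.5

# Toy stand-in for a recorded voice command: one second of a 440 Hz tone.
fake_command = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)
ultrasonic = am_modulate(fake_command)
```

All of the output signal’s energy sits around 25 kHz (the carrier plus two sidebands), so a speaker playing it sounds silent, while a nonlinear microphone recovers the original envelope.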
Right now, no laws regulate subliminal messaging to artificial intelligence, or to people, for that matter, which could become a problem as these technologies grow more sophisticated: smart devices are expected to outnumber people by 2021, according to the research firm Ovum, and more than half of all American households will have at least one smart speaker by then, according to Juniper Research.
These advances in subliminal messaging unsettled many readers, who tweeted their concerns, with some saying the research felt chillingly like the dystopian novel 1984.
Well this is terrifying. https://t.co/y1TnhcDRCt— Tommy Vietor (@TVietor08) May 11, 2018
Good morning from the dystopia https://t.co/YRvJOhHn8m— erin mccann | subscribe to The Times (@mccanner) May 10, 2018
🕵🏻♂️🔎 Big Brother Is Watching 🎤Listening 🔉Sending Commands =🔬— JD (@JDiviv) May 11, 2018
Siri, Alexa & Google Assistant can be controlled by INAUDIBLE subsonic commands hidden in radio music, YouTube vids or even white noise played over speakers-HUGE security risk https://t.co/gRUktk1D6X
hackers can now send literal dogwhistles via radio signal to siri or alexa to open incriminating websites or wire money out of your account+all I can think about beside the fact we live in hell is the extent to which the stasi would've played god with this https://t.co/FDZmAkMuTU— cs (@cszabla) May 11, 2018
All three companies, Amazon, Google, and Apple, assured the Times that their devices are safe from such intrusions.
Both Google’s and Amazon’s assistants use voice recognition to prevent devices from acting on certain commands unless they recognize the user’s voice, though such recognition has been shown to be easy to fool. Apple said its smart speaker, HomePod, cannot unlock doors, and that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.
Now, when choosing a smart speaker for the home, users will have to weigh which device is least likely to be hijacked by outside forces, on top of researching which devices best handle the tasks they need.
Tess Cagle is a reporter who focuses on politics, lifestyle, and streaming entertainment. Her work has appeared in the New York Times, Texas Monthly, the Austin American-Statesman, Damn Joan, and Community Impact Newspaper. She’s also a portrait, events, and live music photographer in Central Texas.