Welcome to the dystopian future.
Researchers can now send secret audio instructions undetectable to the human ear to Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, according to the New York Times.
Over the last two years, researchers have figured out how to activate these devices to dial phone numbers and open websites, raising fears that malicious users may soon be able to unlock doors to homes, take money out of bank accounts, or simply buy products online. For anyone who has watched Josie and the Pussycats, it could spark concern about subliminal messaging, as well.
In 2016, groups of research students at the University of California, Berkeley, and Georgetown University demonstrated that they could hide commands in white noise played over speakers and in YouTube videos, tricking smart devices into turning on airplane mode or opening a website. Now, the newspaper reports, Berkeley students have published a research paper showing they can embed commands directly into recordings of music, so while you listen to your latest favorite single, Alexa hears an instruction to purchase something from Amazon.
“We wanted to see if we could make it even more stealthy,” Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, told the Times.
Meanwhile, researchers at Princeton University and China's Zhejiang University in 2016 demonstrated that voice-recognition systems could be activated using frequencies inaudible to the human ear, a technique they called the "DolphinAttack." The attack first mutes the phone so the owner can't hear what's going on, then instructs the device to visit malicious websites, initiate phone calls, take a picture, or send text messages. This year, another group of researchers successfully sent voice commands embedded in songs.
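The basic idea behind such inaudible attacks is to shift a recorded voice command up onto an ultrasonic carrier that humans cannot hear but that a microphone's hardware can inadvertently demodulate back into the audible range. The sketch below is a minimal illustration of that amplitude-modulation principle only; the sample rate, carrier frequency, and test tone are illustrative assumptions, not parameters from the published research.

```python
import math

# Illustrative sketch of the modulation principle behind inaudible
# voice attacks: a baseband "command" signal is amplitude-modulated
# onto an ultrasonic carrier (here 25 kHz, above the ~20 kHz limit of
# human hearing). Nonlinearity in a microphone can demodulate the
# result back into the audible band, where speech recognition hears it.
# All parameters below are illustrative assumptions.

SAMPLE_RATE = 192_000   # Hz; must exceed twice the carrier frequency
CARRIER_HZ = 25_000     # ultrasonic carrier, inaudible to humans

def modulate(baseband, depth=1.0):
    """Amplitude-modulate a baseband signal onto the ultrasonic carrier."""
    out = []
    for n, sample in enumerate(baseband):
        carrier = math.sin(2 * math.pi * CARRIER_HZ * n / SAMPLE_RATE)
        out.append((1.0 + depth * sample) * carrier)
    return out

# A 400 Hz tone stands in for a recorded voice command (10 ms of audio).
baseband = [math.sin(2 * math.pi * 400 * n / SAMPLE_RATE)
            for n in range(SAMPLE_RATE // 100)]
ultrasonic = modulate(baseband)
```

In practice an attacker would use an actual speech recording as the baseband signal and specialized ultrasonic speakers to play it back; the snippet only shows why the transmitted signal contains no energy a human ear would register.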
Right now, there are no laws regulating subliminal messaging to artificial intelligence, or to people, for that matter, which could become problematic as these technologies grow more sophisticated and smart devices are projected to outnumber people by 2021, according to the research firm Ovum. More than half of all American households will have at least one smart speaker by then, according to Juniper Research.
Readers were unnerved by these advances in subliminal messaging, and many tweeted that the research felt chillingly like the dystopian novel 1984.
Well this is terrifying. https://t.co/y1TnhcDRCt
— Tommy Vietor (@TVietor08) May 11, 2018
Good morning from the dystopia https://t.co/YRvJOhHn8m
— erin mccann | subscribe to The Times (@mccanner) May 10, 2018
🕵🏻♂️🔎 Big Brother Is Watching 🎤Listening 🔉Sending Commands =🔬
Siri, Alexa & Google Assistant can be controlled by INAUDIBLE subsonic commands hidden in radio music, YouTube vids or even white noise played over speakers-HUGE security risk https://t.co/gRUktk1D6X
— JD (@JDiviv) May 11, 2018
hackers can now send literal dogwhistles via radio signal to siri or alexa to open incriminating websites or wire money out of your account+all I can think about beside the fact we live in hell is the extent to which the stasi would've played god with this https://t.co/FDZmAkMuTU
— cs (@cszabla) May 11, 2018
All three corporations, Amazon, Google, and Apple, assured the Times that their devices are safe from intruding forces.
Google's and Amazon's assistants both use voice recognition to prevent devices from acting on certain commands unless they recognize the user's voice, though that safeguard has proven easy to fool. Apple said its smart speaker, HomePod, cannot unlock doors, and that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.
Now, when choosing a smart speaker for the home, users will have to weigh which device is least likely to be hacked by outside forces, on top of researching which one does the best job at the tasks they need.
Tess Cagle is a reporter who focuses on politics, lifestyle, and streaming entertainment. Her work has appeared in the New York Times, Texas Monthly, the Austin American-Statesman, Damn Joan, and Community Impact Newspaper. She’s also a portrait, events, and live music photographer in Central Texas.