Researchers can now send secret audio instructions undetectable to the human ear to Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, according to the New York Times.
Over the last two years, the researchers have figured out how to activate these devices to dial phone numbers and open websites, raising fears that malicious actors could soon use the technique to unlock doors to homes, take money out of bank accounts, or simply buy products online. For viewers of Josie and the Pussycats, it could spark concern about subliminal messaging, as well.
In 2016, groups of research students at the University of California, Berkeley, and Georgetown University demonstrated that they could hide commands in white noise played over speakers and in YouTube videos, tricking smart devices into turning on airplane mode or opening a website. Now, the newspaper reports, Berkeley students have published a research paper saying they can successfully embed commands into recordings of music, so while you listen to your favorite new single, Alexa hears an instruction to purchase something from Amazon.
“We wanted to see if we could make it even more stealthy,” Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, told the Times.
Meanwhile, researchers at Princeton University and China’s Zhejiang University demonstrated in 2017 that voice-recognition systems could be activated with frequencies inaudible to the human ear, a technique they called the “DolphinAttack.” The attack first mutes the phone so the owner can’t hear what’s going on, then instructs the device to visit malicious websites, initiate phone calls, take a picture, or send text messages. This year, another group of researchers successfully sent similar voice commands embedded in songs.
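The core trick behind an inaudible attack like DolphinAttack is amplitude modulation: a baseband voice command is shifted onto an ultrasonic carrier above roughly 20 kHz, where humans can't hear it, and nonlinearities in a device's microphone hardware demodulate it back into the audible band. The following is a minimal numpy sketch of that modulation step only, not the researchers' actual code; the sample rate, carrier frequency, and the use of a sine tone as a stand-in for a recorded command are all illustrative assumptions.

```python
import numpy as np

FS = 96_000          # sample rate high enough to represent ultrasound
CARRIER_HZ = 25_000  # ultrasonic carrier, above human hearing (~20 kHz)

def modulate_ultrasonic(voice: np.ndarray, fs: int = FS,
                        carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband signal onto an ultrasonic carrier."""
    t = np.arange(len(voice)) / fs
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Standard AM: (1 + m(t)) * carrier, with m(t) normalized to [-1, 1].
    m = voice / (np.max(np.abs(voice)) + 1e-12)
    return (1.0 + m) * carrier

# Stand-in for a recorded voice command: a 400 Hz tone.
t = np.arange(FS) / FS
voice = np.sin(2 * np.pi * 400 * t)
signal = modulate_ultrasonic(voice)

# Check that the transmitted signal carries essentially no energy
# in the audible band: the spectrum sits at the carrier (25 kHz)
# plus sidebands at carrier +/- 400 Hz.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / FS)
audible_share = spectrum[freqs < 20_000].sum() / spectrum.sum()
print(f"audible-band share of energy: {audible_share:.4f}")
```

A speaker that can reproduce these frequencies plays `signal`; to a person nearby it is silent, but a microphone whose amplifier distorts nonlinearly recreates a faint copy of the 400 Hz component, which a voice assistant can then transcribe.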
Right now, no laws regulate subliminal messaging to artificial intelligence, or to people, for that matter, which could become problematic as these technologies grow more complex and smart devices are projected to outnumber people by 2021, according to the research firm Ovum. More than half of all American households will have at least one smart speaker by then, according to Juniper Research.
Some readers were alarmed by these advances in subliminal messaging, and many tweeted their concerns, including that the research felt chillingly reminiscent of the dystopian novel 1984.
Well this is terrifying. https://t.co/y1TnhcDRCt— Tommy Vietor (@TVietor08) May 11, 2018
Good morning from the dystopia https://t.co/YRvJOhHn8m— erin mccann | subscribe to The Times (@mccanner) May 10, 2018
🕵🏻‍♂️🔎 Big Brother Is Watching 🎤 Listening 🔉 Sending Commands = 🔬— JD (@JDiviv) May 11, 2018
Siri, Alexa & Google Assistant can be controlled by INAUDIBLE subsonic commands hidden in radio music, YouTube vids or even white noise played over speakers-HUGE security risk https://t.co/gRUktk1D6X
hackers can now send literal dogwhistles via radio signal to siri or alexa to open incriminating websites or wire money out of your account+all I can think about beside the fact we live in hell is the extent to which the stasi would've played god with this https://t.co/FDZmAkMuTU— cs (@cszabla) May 11, 2018
All three companies, Amazon, Google, and Apple, assured the Times that their devices are safe from such intrusions.
Both Google’s and Amazon’s assistants use voice recognition to prevent devices from acting on certain commands unless they recognize the user’s voice, though that safeguard has been shown to be easy to fool. Apple said its smart speaker, the HomePod, cannot unlock doors, and that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.
Now, when choosing a smart speaker for their homes, users will have to weigh which device is least likely to be hijacked by outside forces, on top of researching which one does the best job at the tasks they need.
Tess Cagle is a reporter who focuses on politics, lifestyle, and streaming entertainment. Her work has appeared in the New York Times, Texas Monthly, the Austin American-Statesman, Damn Joan, and Community Impact Newspaper. She’s also a portrait, events, and live music photographer in Central Texas.