Welcome to the dystopian future.
Researchers can now send secret audio instructions undetectable to the human ear to Apple’s Siri, Amazon’s Alexa, and Google’s Assistant, according to the New York Times.
Over the last two years, researchers have figured out how to activate these devices to dial phone numbers and open websites, raising worries that malicious actors may soon be able to unlock doors to homes, take money out of bank accounts, or simply buy products online. For watchers of Josie and the Pussycats, it could spark concern about subliminal messaging, as well.
In 2016, groups of research students at the University of California, Berkeley, and Georgetown University showed they could hide commands in white noise played over speakers and in YouTube videos, tricking smart devices into turning on airplane mode or opening a website. Now, the newspaper reports, Berkeley students have published a research paper showing they can embed commands directly into recordings of music, so that while you listen to the latest single, Alexa hears an instruction to purchase something from Amazon.
“We wanted to see if we could make it even more stealthy,” Nicholas Carlini, a fifth-year Ph.D. student in computer security at U.C. Berkeley and one of the paper’s authors, told the Times.
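The stealth the Berkeley researchers describe comes down to a loudness constraint: the hidden command is a perturbation mixed into the music at a level quiet enough to escape a listener's notice. Real attacks optimize that perturbation against a specific speech recognizer; the sketch below (with an assumed 35 dB distortion budget and a hypothetical `embed_perturbation` helper) illustrates only the loudness constraint, not the recognizer-fooling optimization:

```python
import numpy as np

def embed_perturbation(music: np.ndarray, perturbation: np.ndarray,
                       max_db_below_peak: float = 35.0) -> np.ndarray:
    """Mix a perturbation into music, scaled well below the music's peak level.

    Real attacks search for a perturbation that changes a recognizer's
    transcription; this sketch only enforces the "quiet enough to go
    unnoticed" budget, expressed in dB below the music's peak amplitude.
    """
    budget = np.max(np.abs(music)) * 10 ** (-max_db_below_peak / 20)
    scale = budget / (np.max(np.abs(perturbation)) + 1e-9)
    # Clip to keep the mixed waveform in the valid [-1, 1] range.
    return np.clip(music + scale * perturbation, -1.0, 1.0)

# Toy example: a 440 Hz "song" plus random noise standing in for the
# optimized adversarial signal.
music = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
noise = np.random.default_rng(0).standard_normal(44100)
stealthy = embed_perturbation(music, noise)
```

Because clipping can only shrink the added component, the difference between the mixed and original audio never exceeds the stated budget.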
Meanwhile, researchers at Princeton University and China’s Zhejiang University demonstrated in 2017 that voice-recognition systems could be activated with frequencies inaudible to the human ear, a technique dubbed the “DolphinAttack.” The attack first mutes the phone so the owner can’t hear what’s going on and then instructs the device to visit malicious websites, initiate phone calls, take a picture, or send text messages. This year, another group of researchers successfully sent voice-activated commands embedded in songs.
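The DolphinAttack technique relies on amplitude-modulating a voice command onto an ultrasonic carrier above the roughly 20 kHz ceiling of human hearing; the microphone's nonlinear response then demodulates the envelope back into the audible band, so the assistant hears speech that people cannot. A minimal sketch of that modulation step, assuming an illustrative 25 kHz carrier and a toy tone standing in for recorded speech:

```python
import numpy as np

FS = 192_000          # sample rate high enough to represent a ~25 kHz carrier
CARRIER_HZ = 25_000   # above the ~20 kHz limit of human hearing

def ultrasonic_am(voice: np.ndarray, fs: int = FS,
                  carrier_hz: float = CARRIER_HZ) -> np.ndarray:
    """Amplitude-modulate a baseband voice signal onto an ultrasonic carrier.

    A microphone's nonlinearity demodulates the envelope back into the
    audible band, so the assistant "hears" the command while humans do not.
    """
    t = np.arange(len(voice)) / fs
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    # Standard AM: bias the normalized voice into [0, 1] as the envelope.
    envelope = 0.5 * (1.0 + voice / (np.max(np.abs(voice)) + 1e-9))
    return envelope * carrier

# Toy "voice command": a 400 Hz tone standing in for recorded speech.
t = np.arange(int(0.1 * FS)) / FS
voice = np.sin(2 * np.pi * 400 * t)
modulated = ultrasonic_am(voice)
```

The resulting signal's energy sits at the carrier frequency and in narrow sidebands around it, entirely outside the audible range.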
Right now, there are no laws that regulate subliminal messaging to artificial intelligence—or people, for that matter—which could become problematic as these technologies grow more capable and voice-controlled devices are projected to outnumber people by 2021, according to the research firm Ovum. More than half of all American households will have at least one smart speaker by then, according to Juniper Research.
Many readers found these advances in subliminal messaging alarming and took to Twitter to say so, with some remarking that the research felt chillingly like the dystopian novel 1984.
Well this is terrifying. https://t.co/y1TnhcDRCt
— Tommy Vietor (@TVietor08) May 11, 2018
Good morning from the dystopia https://t.co/YRvJOhHn8m
— erin mccann | subscribe to The Times (@mccanner) May 10, 2018
🕵🏻♂️🔎 Big Brother Is Watching 🎤Listening 🔉Sending Commands =🔬
Siri, Alexa & Google Assistant can be controlled by INAUDIBLE subsonic commands hidden in radio music, YouTube vids or even white noise played over speakers-HUGE security risk https://t.co/gRUktk1D6X
— JD (@JDiviv) May 11, 2018
hackers can now send literal dogwhistles via radio signal to siri or alexa to open incriminating websites or wire money out of your account+all I can think about beside the fact we live in hell is the extent to which the stasi would've played god with this https://t.co/FDZmAkMuTU
— cs (@cszabla) May 11, 2018
All three corporations—Amazon, Google, and Apple—assured the Times that their devices are safe from such attacks.
Both Google’s and Amazon’s assistants use voice recognition to prevent devices from acting on certain commands unless they recognize the user’s voice, though researchers have shown such checks can be fooled. Apple said its smart speaker, HomePod, cannot unlock doors, and that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.
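The voice-recognition safeguard amounts to a speaker-verification gate: before acting on a sensitive command, the assistant compares the speaker's voiceprint against an enrolled one. The companies' actual models are proprietary; the sketch below is a purely hypothetical stand-in using cosine similarity between voice embeddings, with an assumed threshold of 0.8:

```python
import numpy as np

def is_enrolled_speaker(embedding: np.ndarray, enrolled: np.ndarray,
                        threshold: float = 0.8) -> bool:
    """Gate sensitive commands on similarity to an enrolled voiceprint.

    Hypothetical sketch: real assistants use proprietary speaker models,
    and research suggests such checks can be fooled by crafted audio.
    """
    cos = float(np.dot(embedding, enrolled) /
                (np.linalg.norm(embedding) * np.linalg.norm(enrolled) + 1e-9))
    return cos >= threshold
```

A gate like this blocks obviously mismatched voices but, as the reporting notes, offers no guarantee against audio engineered to mimic the enrolled speaker.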
Now, when choosing a smart speaker for the home, users will have to weigh which device is least likely to be hacked by outside forces, on top of researching which devices do the best job at the tasks they need.
Tess Cagle is a reporter who focuses on politics, lifestyle, and streaming entertainment. Her work has appeared in the New York Times, Texas Monthly, the Austin American-Statesman, Damn Joan, and Community Impact Newspaper. She’s also a portrait, events, and live music photographer in Central Texas.