
New voice-cloning AI is more dangerous than you could imagine

The company recognizes the “dangerous consequence” of its technology.


Phillip Tracy


A startup called Lyrebird is using machine learning to clone people’s voices, and its first efforts are nothing short of terrifying. The Canadian company posted a 30-second clip on SoundCloud Friday showing off the capabilities of its algorithms.


In it, AI versions of Barack Obama, Donald Trump, and Hillary Clinton talk about the startup.

The clip opens with the familiar booming voice of Barack Obama. The artificial intelligence expertly captures his instantly recognizable up-and-down vocal patterns and the way the former president puts emphasis on the last syllable of words, as in “technology” in the opening question. You can still hear robotic qualities the algorithm isn’t yet capable of stripping out, but those mostly disappear when the mic turns over to Donald Trump. The machine’s recreation of the current president’s voice is uncanny. While we would have loved to hear a bit more emphasis on the “yuge,” the algorithm does a good job adjusting for his intonation. The short clip then introduces Hillary Clinton and does a similarly bang-up job recreating her vocals.


Click through the company’s SoundCloud page and you’ll hear robot Trump demoing the different emotions the algorithm is capable of. The company says its algorithms can blend emotion with speech, letting customers make voices sound angry, sympathetic, or stressed out. Potential applications include personal assistants, audiobooks read in famous voices, connected devices, and voice acting for movies and video games.
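Lyrebird hasn’t published its API, so there’s no way to show the real interface, but a developer-facing voice service with emotion controls of the kind described above might expose something like the following sketch. Every name and field here (the `build_speech_request` helper, the `voice`/`text`/`emotion` payload shape, the emotion labels) is an assumption for illustration, not the company’s actual design.

```python
import json

# Hypothetical sketch: building a request body for an imagined
# speech-synthesis endpoint that blends an emotion into a cloned voice.
# None of these names come from Lyrebird's actual (unpublished) API.

def build_speech_request(voice_id: str, text: str, emotion: str = "neutral") -> str:
    """Return a JSON request body for a hypothetical /synthesize endpoint."""
    # The article mentions angry, sympathetic, and stressed as examples.
    allowed = {"neutral", "angry", "sympathetic", "stressed"}
    if emotion not in allowed:
        raise ValueError(f"unsupported emotion: {emotion}")
    return json.dumps({
        "voice": voice_id,   # identifier of a previously cloned voice
        "text": text,        # words the cloned voice should speak
        "emotion": emotion,  # emotional coloring blended into the speech
    })

body = build_speech_request("demo-voice", "Hello from a cloned voice.", "angry")
```

The point of the sketch is how little the caller would need to supply: a voice identifier, some text, and a one-word emotion label, which is exactly what makes the technology both convenient and easy to misuse.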

But as useful as it may sound, machine learning that mimics someone’s voice can have dangerous consequences. We’ve all brushed off a spam call because the robotic voice on the other end wasn’t remotely convincing. But imagine waking up to an inbox full of hate mail because someone posted an audio clip of AI-you saying something inappropriate.

Now imagine Kim Jong-un listening to an AI-created clip of Trump saying he wants to go to war. Lyrebird recognizes the potential dangers associated with its technology, and even lists off a few of them, including “misleading diplomats, fraud and more generally any other problem caused by stealing the identity of someone else.”


The company says its APIs are still in beta, and there is no word on pricing or availability.

We’re not convinced the benefits outweigh the dangers outlined above. It’s one thing to fear Terminator-style robots taking over Earth, and another to realize our undoing may come down to a few lines of code.

H/T the Verge

 
The Daily Dot