When Google demoed the Duplex voice assistant at its I/O developer conference, it was met with mixed reactions. On one hand, the AI, which can take phone calls on behalf of users, was praised as a technological achievement, especially among the developers in attendance. But others had a different impression and are questioning the ethics of an AI deceiving people by acting human.
Now the tech giant is hoping to dispel those concerns by reassuring critics that the robot will disclose its identity.
“We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified,” a Google spokesperson told CNET. “What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”
At I/O, Google unsurprisingly showed off an innocuous demonstration of its AI calling a salon to book a haircut appointment and a restaurant to reserve a table. Unlike the robotic voices we’ve grown used to over the years—Siri, Cortana, and even Google’s Assistant—the Duplex AI was surprisingly convincing, using filler words like “um” and expertly changing the inflection of its voice. It was so convincing that it fooled the employees on the other end, who had no idea they were speaking to a program, not an actual human.
When the applause faded, criticism against Google was unleashed. TechCrunch’s Natasha Lomas slammed Google in an article titled “Duplex shows Google failing at ethical and creative AI design,” while the Washington Post’s Drew Harwell asked, “Should it be required to tell people it’s a machine?”
In its defense, Google made it clear it’s aware of potential ethical concerns in a lengthy write-up about the in-development technology.
“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that,” the company wrote. “We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”
Google CEO Sundar Pichai echoed that sentiment, acknowledging the potential dangers of AI while explaining the importance of taking the right steps in developing it.
“There are very real and important questions being raised about the impact of technology and the role it will play in our lives,” Pichai said. “We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right.”
Google did not specify what its Duplex disclosures would sound like. In the demo it showed, the AI did not reveal its true self, perhaps for testing purposes or to further the illusion for crowd members. However, it wouldn’t be difficult for Google to simply add a “This is the Google Assistant calling…” prompt at the start of each conversation.
Even then, this is only the tip of the iceberg when it comes to ethical decisions around AI. As the technology grows more advanced, AI will spread further into our daily lives. Prominent figures, including Elon Musk and the late Stephen Hawking, have warned of its dangers, with Hawking once saying AI could be the “worst event in the history of our civilization.”
Phillip Tracy is a former technology staff writer at the Daily Dot. He's an expert on smartphones, social media trends, and gadgets. He previously reported on IoT and telecom for RCR Wireless News and contributed to NewBay Media magazine. He now writes for Laptop magazine.