Imagine seeing yourself in a porn video. Naked. Violated. Except you have no recollection of the act, of the room you’re in, the partner you’re with, or the camera filming. That’s because it didn’t really happen, at least not to you.
Or imagine seeing yourself in a video saying things you’ve never said before, controversial things, the sort of stuff that could cost you your job or alienate you from family and friends. The voice you hear is definitely yours, and so are the turns of phrase, but you have no recollection of what’s being said, and you’re horrified at what you hear.
Such are the possibilities unleashed by notorious programs like FakeApp, which have enabled bad actors to superimpose the faces of unsuspecting victims onto the bodies of others, inviting a predictable flurry of fake celebrity porn. The videos are called deepfakes, and while much has been said about the dangers they pose, the real issues are far broader than you may realize.
Deepfakes are only the first step in a chain of technological developments that will have one distinct end: the creation of AI clones that look, speak, and act just like their templates. Using neural networks and deep learning programs, these clones will first exist in video and in virtual worlds. Whether you’re knowingly involved or not, they’ll provide exacting reproductions of your facial expressions, accent, speech mannerisms, body language, gestures, and movement, going beyond the simple transplanting of faces to offer comprehensive, multidimensional imitations.
In the more distant future, these advances in machine learning will be married to advances in robotics, enabling physical robots to fully assume the form and behavior of particular human beings, both alive and dead. In the process, the nature of individuality and personhood will be altered, as we find ourselves living alongside our own clones and proxies, which will act on our behalf as alternate versions of ourselves.
This isn’t Westworld science fiction. It’s already starting to happen.
A single package
In terms of imitating specific people at a non-cognitive level, AI technology is close to being able to produce convincing virtual clones. “Facial expression, voice, and body movements are three examples of what AI could do today, assuming that we have the right type and amount of data,” explains Hussein Abbass, a professor at the University of New South Wales, whose research covers such areas as AI and image processing, intelligent robotics, and human-computer interaction. “The technology needs some improvements, but the feasibility of the principles has been demonstrated already.”
As an example of how neural networks and deep learning can already do more than simply copy someone’s face, researchers at the University of California, Berkeley recently developed a program that can learn the dance moves of one person and transfer them to a second. While the researchers had to model the bodies of both individuals as stick figures in the videos they produced, their work illustrates how AI can already learn and reproduce complicated human movements.
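The core idea behind that kind of motion transfer can be sketched in a few lines. The toy below (a hypothetical simplification, not the Berkeley team’s method, which uses pose detectors and generative networks) represents a frame as 2D joint keypoints, strips out the source dancer’s body proportions by normalizing the pose, and then rescales the pose onto a differently proportioned target skeleton:

```python
import numpy as np

def normalize_pose(keypoints: np.ndarray) -> np.ndarray:
    """Center a (joints, 2) keypoint array on its hip joint (index 0)
    and scale so the hip-to-head segment (index 1) has unit length."""
    centered = keypoints - keypoints[0]
    torso = np.linalg.norm(centered[1])
    return centered / torso

def retarget(source_frame: np.ndarray, target_rest: np.ndarray) -> np.ndarray:
    """Apply the source's normalized pose to the target's proportions."""
    pose = normalize_pose(source_frame)
    target_torso = np.linalg.norm(target_rest[1] - target_rest[0])
    return target_rest[0] + pose * target_torso

# A three-joint stick figure: hip, head, hand.
source = np.array([[0.0, 0.0], [0.0, 2.0], [1.0, 1.0]])       # tall dancer
target_rest = np.array([[5.0, 5.0], [5.0, 6.0], [5.5, 5.5]])  # shorter target

new_frame = retarget(source, target_rest)
print(new_frame)  # the source's pose, scaled to the target's smaller torso
```

In a full system, the retargeted stick figure is only an intermediate representation; a separate network then renders the target person’s appearance on top of it.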
As Abbass points out, such reproduction “is different from imitating cognition, which represents the mental processes that generated the behavior.” Still, Abbass believes the ability to imitate the inner cognition as well as the external behavior of a particular person could be as close as a decade away. “We have some success stories in limited contexts in this direction today, but I predict it may take us 10-20 more years before we reach the tipping point needed for AI technologies to converge to a state where a massively distributed AI appears to be indistinguishable from a human in the way it acts and behaves.”
AI can also imitate individual human voices with a high level of accuracy. In February, Chinese tech firm Baidu announced that it had developed a deep learning program that can reproduce any given person’s voice after listening to it for only a minute, while a Montreal-based startup called Lyrebird went public with a similar feat in April 2017. And just as impressively, San Francisco-based Luka launched a chatbot called Replika last year that learns from the user’s speech (or rather, text) patterns to produce a conversational double of them.
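The principle behind a conversational double — learning someone’s phrasing from their own messages — can be illustrated with a toy word-level Markov chain. (This is a deliberately crude sketch; Replika and similar products use neural language models, but the idea of mining a user’s text for their verbal habits is the same.)

```python
import random
from collections import defaultdict

class StyleImitator:
    """Toy imitator: records which word the user tends to say after
    each other word, then replays those habits to generate new text."""

    def __init__(self):
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        words = message.split()
        for a, b in zip(words, words[1:]):
            self.transitions[a].append(b)

    def imitate(self, start: str, length: int = 8, seed: int = 0) -> str:
        rng = random.Random(seed)
        out = [start]
        for _ in range(length - 1):
            options = self.transitions.get(out[-1])
            if not options:
                break
            out.append(rng.choice(options))
        return " ".join(out)

bot = StyleImitator()
bot.learn("honestly I think that is honestly fine")
bot.learn("I think you should honestly relax")
print(bot.imitate("I"))  # a short message stitched from the user's own phrasing
```

Even this crude model reproduces a user’s verbal tics (here, the overused “honestly”); a large neural model trained on years of someone’s chat history does the same thing far more convincingly.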
The question is, can all of these advances in machine learning be combined into a single AI program that, in either videos or a virtual world, acts as an uncannily lifelike clone of a user or an unwitting subject?
“It’s hard to predict,” says Bryan Fendley, an expert on educational AI and the director of instructional technology and web services at the University of Arkansas at Monticello. “AI can be well suited for imitating human behaviors such as speech, handwriting, and voice. Someday we may even put it all together in a single package that can pass as a type of clone. …
“I see many experts throwing out numbers for AI predictions that are 20 years down the road. I think it’s moving faster than that.”
Within the next two decades, AI technology will advance to the point where a number of things will be possible. On the one hand, people will likely have access to products and services that will let them build clones of themselves and their friends and family for their own amusement (or therapy). Such clones will be accessible via either digital interfaces or virtual game worlds so that they can be interacted with as if they were the real thing.
“Yes, within 10 years or so, the blending of chatbot-type technology and deepfake-style tech will be able to generate a plausible audiovisual interaction via a Skype-style technology that one would believe is a real person, at least for a short period,” says Nell Watson, an AI expert and faculty member at the Singularity University’s AI & Robotics department.
Watson acknowledges that considerable progress is still needed before truly convincing AI clones of people become a regular feature of the technological landscape. Nonetheless, she does think it will be more straightforward to produce AI-based reproductions of celebrities.
“One important aspect is the need to have good training data to work from,” she says. “For a celebrity, this is fairly easy, given a variety of film and TV appearances, interviews etc. … In the video game Deus Ex: Invisible War, there is a character, NG Resonance, that the player can interact with at various kiosks in nightclubs, etc. This character is powered by AI but is based upon a real human starlet … We are likely to see similar interactions with pseudo-real AI-powered characters, virtual versions of historical heroes and villains that can populate themed venues (e.g., Hard Rock Café, or an Americana-themed diner, perhaps).”
By contrast, Watson cautions that cloning non-famous individuals will be considerably more difficult—but not impossible. “Replicating a private individual will be challenging in comparison, except with their deliberate cooperation. There are technologies today that can capture 3D models easily from 2D stills or video, and replicate accent and prosody, so replicating personality and a plausible ‘spark of life’ will be the greatest challenge.”
Even so, it’s apparent that such videos and simulations are possible with enough data, and in an age of big data and ubiquitous social media, it would be naive to rule them out completely. And assuming that enough personal data can be collected surreptitiously, clones could then be used in much the same way that deepfake porn videos are used to humiliate various people now, although with a more convincing and extensive range of imitated abilities.
Send in the clones
Dr. David Levy believes not only that AI clones will arrive in the next two decades but that there will be a considerable consumer market for them as well.
“Within 20 years there will be a collection of technologies available that will allow companies to produce robots in the likeness of any human being,” says Levy, an international chess master who combined his love of chess with an interest in computers to forge a latter-day career as an author and expert on AI.
In a climate defined by pornographic deepfake videos, such a prediction could be a cause for concern for many, given the enormous potential for abuse. But Levy, the author of Love and Sex with Robots, believes that robot clones will have a number of legitimate uses.
“One idea is that it will be possible to create a robot in the likeness of a loved one who has passed away. So if you’re married to someone for 50 years and they die, you can have a replica of them,” he says. (This may in part be a reference to Bina48, a robot built by David Hanson of Hanson Robotics and Sophia fame in 2010 and based on the “mind clone,” i.e., audio recordings, of Bina Rothblatt, the co-founder of the transhumanist Terasem Movement.)
Another function for AI-based robotic clones is that of a proxy or stand-in, mainly for celebrities. By way of example, Levy cited Hiroshi Ishiguro, who famously programmed an android version of himself to give lectures on his behalf. “There is already one business deal in existence under which Stormy Daniels is having her likeness made into a sex robot by a company in that business,” he says, referring to a licensing deal signed in June between the pornographic actress and the California-based sex robot manufacturer RealDoll.
Levy admits that such licensing deals certainly won’t be every celebrity’s cup of tea. “But equally in the real world of business and people trying to make money, I think that some famous people will think, ‘That’s a nice idea.’ It’s a bit like a 3D version of sending a photograph of someone. I think that will be another important use of these technologies.”
The need for clear ethical guidelines
Of course, some experts believe we’ll have to wait much longer for truly convincing doppelgangers.
“Simple imitation will take just a few more years,” says Toby Walsh, a professor of computer science and engineering at the University of New South Wales. “But to pass as human, this is the Turing Test. It will be 50, 100, or even more years before computers can match all of our abilities.”
Hussein Abbass is even more conservative in his estimate of when indistinguishable AI will emerge, even if he agrees that comparatively lifelike virtual AI clones are only one or two decades away. “[T]he challenges are not AI ones alone. Sometimes the challenges are mechanical constraints on the robot’s body, and sometimes they are the materials used to produce, for example, the robot face, where these materials do not have a natural texture or the elasticity to do a proper imitation,” he says. “It may take us centuries before we can have this same AI on a human-sized robot without relying on any external connection through the internet.”
Researchers don’t know when you’ll be able to walk down the street with your clone, but the time to decide how to handle such media, and how to protect ourselves against its abuse, is now.
“Unless the community continues to push for clear ethical guidelines and boundaries for the use of AI, it will be inevitable to see a future scenario where people appear in videos doing stuff that they did not do in reality,” Abbass warns. “It is likely this will start as fun applications, but then the situation can turn upside down very quickly.”
Other experts agree that the consequences of fully realized AI imitators will be mixed, with some results being novel and entertaining, and others proving more disturbing. “We’ll have deceased actors back in Hollywood movies,” says Toby Walsh. “And politics will be greatly troubled by fake videos of politicians saying things they never said.”
Given the glut of fake news already circulating, convincing deepfakes, virtual clones, and even robot clones threaten to make the situation considerably worse. Indeed, if not accompanied by a sea change in how we critically evaluate media, they will likely reinforce today’s tendency toward political polarization, in which we increasingly inhabit the filter bubbles that confirm our biases and prejudices.
“The line between what is real and not real is changing,” concludes Fendley. “Our mental fortitude to resist placing human traits on robots, and not falling under the influence of things we know aren’t real, will ultimately fail. We will become more comfortable living in an altered reality alongside robots and powerful AIs.”