

Last week, two new “employees” debuted at Tokyo’s National Museum of Emerging Science and Innovation: a pair of vaguely lifelike robots called Otonaroid and Kodomoroid. The androids came complete with human features etched into silicone skin and herky-jerky body movements, and they interacted with reporters at an international press conference on Wednesday.

Otonaroid and Kodomoroid will remain as a permanent installation at the museum. Both robots are designed to read the news of the world to museumgoers, though each also has a special talent. The older-looking, more professionally attired Otonaroid can talk with museumgoers and respond to simple questions, while the younger-seeming Kodomoroid reads tweets and laughs with a child’s voice.

The robots’ creator, Hiroshi Ishiguro, director of the Intelligent Robots Lab at Osaka University, stated his purpose in making Otonaroid and Kodomoroid: to facilitate interactions between museum visitors and robots. Said Ishiguro, “Our goal is to promote and to put to use various robots that can be useful in a large range of fields of society.”

However, in spite of Ishiguro’s best intentions, and the necessarily iterative nature of developing new technology, the story surrounding Otonaroid and Kodomoroid seems fairly clear-cut: These new robots constitute a technological advancement more creepy than groundbreaking. Almost every news story announcing Otonaroid’s and Kodomoroid’s arrival on the world scene has used the phrase “uncanny valley” to describe them. (Funny that more didn’t use the phrase “lurching, life-sized demon dolls.”)

The uncanny valley effect is an oft-cited phenomenon in which robots or CGI characters that closely approximate human appearance elicit a strong negative emotional response, even feelings of dread. People are “creeped out” by figures that seem human but aren’t quite.

The term “uncanny valley” was coined by Japanese robotics professor Masahiro Mori in 1970. Karl F. MacDorman, one of the leading researchers on human/robotics relations, summed up Mori’s thesis thusly in a 2006 paper:

Mori observed that as robots come to look more human, they seem more familiar, until a point is reached at which subtle deviations from human norms cause them to look creepy. He referred to this dip in familiarity and the corresponding surge of strangeness as the uncanny valley.
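Mori’s curve can be sketched as a toy numerical model. To be clear, this is purely illustrative: the function shape, the location of the dip, and all the constants below are my own assumptions, not Mori’s actual data. The idea is simply that affinity rises with human likeness, plunges near the almost-human point, then recovers:

```python
import numpy as np

def familiarity(h):
    """Toy model of Mori's curve (illustrative assumptions only):
    affinity rises with human likeness h in [0, 1], dips sharply
    near h ~ 0.85 (the 'valley'), then recovers toward full
    human likeness."""
    valley = -1.5 * np.exp(-((h - 0.85) ** 2) / 0.005)
    return h + valley

likeness = np.linspace(0, 1, 101)
affinity = familiarity(likeness)

# The minimum of the curve falls in the valley region, just shy of
# full human likeness, while a fully human appearance recovers.
print(likeness[np.argmin(affinity)])
```

The key qualitative feature, and the only claim the sketch is meant to capture, is that the curve is not monotonic: “more human” improves affinity everywhere except in the narrow band just short of fully human.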

Think about the difference between the cute cartoon characters of Toy Story and the CGI monstrosities in The Polar Express. Both are computer-generated humanoid forms, but Woody is more cartoonish and less human-looking than the ghoulish, more nearly human characters in the other film.

Both scientific research and the strong box office draw of CG animated films like Toy Story show that people like seeing anthropomorphized objects. This explains why children like stuffed toys with faces and stories about animals that act like humans. But as Mori and several others since have noted, as anthropomorphs pass a certain point in approximating human appearance, they evoke feelings of “creepiness.”

There have been many theories as to why the uncanny valley effect happens. 

Some researchers believe that humanoid robots trigger human evolutionary responses concerning mate selection. In other words, we view robots as aesthetically unpleasing because their less-than-human movements trigger our evolved instinct to reject procreation with infertile mates.

Another theory holds that the not-quite-human forms of androids subconsciously remind us of death. Their mechanical movements inspire the notion that human beings are also little more than biological machines, prone to break down. And yet another theory posits that lifelike robots trigger our evolved pathogen resistance. Their unnaturally moving silicone skin makes our brains believe that they’re sick, and we recoil out of self-preservation.

But perhaps the most compelling, and in some ways the simplest, theory has to do with perceptual expectations of human movement, and how robots violate those expectations. Most robots that fall into the uncanny valley resemble humans in their basic form, with crude limbs and faces, which primes our brains to expect human movement. When robots instead move much less fluidly, the violated expectation produces a negative emotional response.

I exchanged emails with Dr. MacDorman, and he explained that the uncanny valley effect is a complex set of phenomena, and that the best explanation of it may be that several of the above theories are working at once. Said MacDorman:

We can define the uncanny valley to be the eerie feeling and loss of empathy caused by entities that closely but imperfectly resemble human beings. They provide us with what seems like a unified phenomenon, that is, a phenomenon caused by something that is almost human but deviates from human norms in some way. However, there are many kinds of human norms, such as skin texture, facial proportions, vocal expressivity, touch sensation, motion quality, contingency, and so on. Likewise, there are many ways in which an entity may deviate from those norms. … This is why it has never been my working assumption that the uncanny valley is a unitary phenomenon. It may instead be a nexus of interrelated phenomena. 

Let me give you an example. The baseball players in some video games, like MLB 12: The Show, for example, look uncanny. The eyes appear dead, like an animated corpse. Likewise, the baby in Pixar’s Tin Toy (1988) was very disturbing. It convinced the studio to avoid photorealistic human character animation. However, this kind of uncanniness is quite different from, say, discovering after a few weeks that your girlfriend is actually an android. In the former case, the source of the eeriness is largely perceptual and preconscious. In the latter case, the source is also cognitive: all the implications of having been “taken in,” questions of authenticity, and what it means to be human and in a relationship with a human being.

Determination of the best explanation of the uncanny valley depends on results of experiments designed to test differing predictions of competing theories. Although our group has been carrying out such experiments, we have not yet come to a definitive answer. 

In the target article “Individual differences predict sensitivity to the uncanny valley,” Steven Entezari and I have found evidence that the uncanny is influenced both by biological adaptations for self-preservation and cultural belief systems. An example of a biological adaptation would be fear of dead bodies, which are potential vectors for infection, or aversion to potential mates displaying behavioral abnormalities. Our study showed that those who are more disturbed by reminders of our mortality are also more sensitive to the uncanny valley. An example of cultural belief systems is our result that highly religious people are more inclined to see human beings as being set apart from the rest of creation. These people, because of such belief, are also more sensitive to the uncanny valley.

In 2009, Princeton researchers added strong support to the notion that the uncanny valley effect is grounded in biological evolution rather than merely social or cultural factors. The researchers exposed macaque monkeys to an array of computer-animated monkey faces and observed how the real monkeys behaved socially toward CG monkeys rendered at various levels of photorealism.

I asked MacDorman about the significance of this study, and he said:

What the [Princeton] study determined is that macaque monkeys look longer at a computer-animated macaque monkey at a high or low-level of macaque photorealism than at an intermediate level of macaque photorealism. The experiment follows the same paradigm used commonly with babies: Babies will stare longer at an attractive face than an unattractive one. A human or monkey might possibly stare longer at a face that is novel or interesting too.

But if we assume that the paradigm works—what the results of this study mean is that the uncanny valley has a basis in evolutionary biology. It is an evolved adaptation. How else would we see the phenomenon in macaques, which have nothing like human culture? That doesn’t rule out the uncanny valley phenomenon in humans also having a cultural basis. But human culture develops under biological constraints.

Interestingly, the uncanny valley effect isn’t just a question of human-looking robots acting slightly less than human. Said MacDorman:

In one of our studies, we found that a cute, boxy robot becomes eerie when given a completely human voice instead of a robotic-sounding, synthesized voice. We can’t say that something that looks 95% human will be eerie. Instead, it might look 10% human but its voice sounds 100% human. It’s not about the overall humanness of the entity but the relation among its various features in terms of their human realism. When the appearance is human but the movement is robotic, or the appearance is robotic but the voice is human, that is when the robot or animation looks creepy.

There are many interesting threads to follow here, from how photoshopped models in magazine ads reinforce sinister societal norms of beauty to how the uncanny valley effect of certain CGI forms can be harnessed to intentionally create menacing characters, like villains in video games.

One rabbit hole I find especially interesting is how the uncanny valley pertains to social networking, specifically Twitter bots.

I’ve often wondered why Twitter bots are so popular, and whether it has something to do with Twitter’s close approximation of how people talk. Twitter is such an effective proxy for real-life human interaction that the much cruder facsimile Twitter bots provide could deliver the same anthropomorphic pleasure we get from seeing Woody in Toy Story.

The reason Twitter bots are funny is rooted in their difference from “real” Twitter accounts, in the same way the cuteness of robots residing outside the dreaded uncanny valley is rooted in their difference from the human form. Twitter bots make automated responses, and their lack of linguistic nuance and contextual meaning evokes delight.

But sometimes Twitter bots are a little too on the nose, and the effect is both surprising and eerie. This isn’t to say such instances provoke the same dread that Otonaroid and Kodomoroid do. But the movement from delight at a bot’s crude, childlike insights to the gobsmacked feeling of watching a machine comment meaningfully on some real issue isn’t merely odd, either.

It’s a little bit uncanny.

Illustration by Jason Reed
