Tech’s dangerous race to control our emotions

The tech industry will be selling emotional technology long before anyone has a serious debate about whether such tech is desirable.

Simon Chandler

Technology already manipulates our emotions. A like on social media makes us happy; a CCTV camera on our street makes us anxious. Nearly every technological tool produces some corresponding emotional reaction, but it’s only in recent years that companies and researchers have begun designing tech with the explicit intention of responding to and managing human emotions.

In a time when increasing numbers of people are suffering from stress, depression, and anxiety, the emergence of technology that can deal with negative emotions is arguably a positive development. However, as more and more companies aim to use AI-based technology to make us “feel better,” society is confronted with an extremely delicate ethical problem: Should we launch a concerted effort to resolve the underlying causes of stress, depression, and other negative states, or should we simply turn to “emotional technology” in order to palliate the increasingly precarious human condition?

For its part, the tech industry seems to be gravitating toward the second option, and it’s likely to be selling emotional technology long before anyone has a serious debate about whether such tech is desirable. Even now, companies are developing products that enable machines to respond in emotional terms to their environment, an environment which, more often than not, includes humans.

In October, it was revealed that Amazon had patented a version of its Alexa personal assistant that could detect the emotional states of its users and then suggest activities appropriate to those states or even share corresponding ads. Microsoft patented something very similar in 2017, when it was granted its 2015 application for a voice assistant that would react to emotional cues with personalized responses, including “handwritten” messages. Even more impressively, Google received a patent in October for a smart home system that “automatically implements selected household policies based on sensed observations”—including limiting screen time until sensing 30 minutes outdoors or keeping the front door locked when a child is home alone. One of the many observations the system relies on to operate? The user’s emotional state.
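
Google’s patent describes the mechanism only at a high level, but the basic shape is easy to picture: a rule engine that maps sensed observations, including an inferred emotional state, to household actions. The minimal sketch below is purely illustrative; the class names, policies, and thresholds are invented, not drawn from the patent.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Observations:
    """Hypothetical readings a smart home might aggregate from its sensors."""
    minutes_outdoors_today: int = 0
    child_home_alone: bool = False
    emotional_state: str = "neutral"  # e.g., inferred from voice or face

@dataclass
class Policy:
    name: str
    condition: Callable[[Observations], bool]
    action: str

# Invented household policies loosely echoing the patent's examples.
POLICIES = [
    Policy("limit_screen_time",
           lambda o: o.minutes_outdoors_today < 30,
           "block_tv_and_tablets"),
    Policy("lock_front_door",
           lambda o: o.child_home_alone,
           "keep_front_door_locked"),
    Policy("ease_stress",
           lambda o: o.emotional_state == "stressed",
           "dim_lights_and_play_calm_music"),
]

def evaluate(obs: Observations) -> list[str]:
    """Return the actions whose conditions the current observations trigger."""
    return [p.action for p in POLICIES if p.condition(obs)]

obs = Observations(minutes_outdoors_today=10, child_home_alone=True,
                   emotional_state="stressed")
print(evaluate(obs))
# ['block_tv_and_tablets', 'keep_front_door_locked',
#  'dim_lights_and_play_calm_music']
```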

And aside from the tech giants, a plethora of startups are entering the race to build emotionally responsive AI and devices, from Affectiva and Beyond Verbal to EmoTech and EmoShape. In EmoShape’s case, its chief product is a general-purpose Emotion Processing Unit (EPU), a chip that can be embedded in devices to help them process and respond to emotional cues. Patrick Levy-Rosenthal, the CEO of the New York-based company, explains that this takes it beyond other AI-based technologies that simply detect human emotions.

“Affective computing usually focuses on how machines can detect human emotions,” he says. “The vision we have at EmoShape is different, since the focus is not on humans but on the machine side—how the machine should feel in understanding its surroundings and, of course, humans, and more importantly the human language. The Emotion Chip synthesizes an emotional response from the machine to any kind of stimuli, including language, vision, and sounds.”
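
Levy-Rosenthal doesn’t spell out how the EPU produces that response, but one common approach in affective computing is to model the machine’s state as a point in a valence–arousal space that is nudged by appraisals of incoming stimuli. The following is a minimal sketch of that general idea, with invented appraisal weights; it is not EmoShape’s algorithm.

```python
from dataclasses import dataclass

# Illustrative appraisals: how each stimulus nudges valence (pleasant vs.
# unpleasant) and arousal (calm vs. excited). All values are invented.
APPRAISALS = {
    "user_smiles":   ( 0.4,  0.2),
    "loud_noise":    (-0.2,  0.6),
    "praise_phrase": ( 0.5,  0.1),
    "angry_tone":    (-0.5,  0.4),
}

@dataclass
class EmotionState:
    valence: float = 0.0   # -1 (negative) .. +1 (positive)
    arousal: float = 0.0   # -1 (calm)     .. +1 (excited)

    def update(self, stimulus: str, decay: float = 0.9) -> None:
        """Decay toward neutral, then apply the stimulus appraisal."""
        dv, da = APPRAISALS.get(stimulus, (0.0, 0.0))
        self.valence = max(-1.0, min(1.0, self.valence * decay + dv))
        self.arousal = max(-1.0, min(1.0, self.arousal * decay + da))

    def label(self) -> str:
        """Map the continuous state to a coarse emotion label."""
        if self.arousal < 0.2:
            return "calm"
        return "happy" if self.valence >= 0 else "distressed"

state = EmotionState()
for stimulus in ["user_smiles", "praise_phrase", "loud_noise"]:
    state.update(stimulus)
print(state.label(), round(state.valence, 2), round(state.arousal, 2))
# happy 0.57 0.85
```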

As distant as the prospect of emotional AI might seem right now, there is already at least one example of a commercially successful AI-based device that responds to and manages human emotion. This is Pepper, a humanoid robot released in June 2015 by the SoftBank Group, which had sold over 12,000 units of the model in Europe alone by May 2018 (its launch supply in Japan sold out in one minute).

Even when Pepper first launched, it could detect sadness and offer comforting conversation threads and behaviors in response, but in August 2018 it was updated with technology from Affectiva, heightening its emotional intelligence and sophistication even further. For instance, Affectiva’s software now enables Pepper to distinguish between a smile and a smirk, and while this is ostensibly only a subtle difference, it’s the kind of distinction that lets Pepper have a bigger emotional impact on the people around it.
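
Affectiva hasn’t published exactly how that distinction is drawn, but expression-analysis systems typically work from facial action units, and a smirk differs from a smile chiefly in its asymmetry: one lip corner pulls up while the other stays put. A toy classifier along those lines, with invented thresholds, might look like this:

```python
def classify_mouth(left_pull: float, right_pull: float,
                   min_pull: float = 0.3, max_asymmetry: float = 0.25) -> str:
    """Classify a mouth expression from per-side lip-corner pull
    intensities (0..1), e.g. estimated from facial landmarks.
    Thresholds here are invented for illustration only."""
    strongest = max(left_pull, right_pull)
    if strongest < min_pull:
        return "neutral"
    # A smile pulls both corners roughly equally; a smirk is one-sided.
    asymmetry = abs(left_pull - right_pull) / strongest
    return "smile" if asymmetry <= max_asymmetry else "smirk"

print(classify_mouth(0.7, 0.65))  # symmetric pull -> smile
print(classify_mouth(0.7, 0.15))  # one-sided pull -> smirk
print(classify_mouth(0.1, 0.05))  # weak pull      -> neutral
```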

This is most evident in Japan, where Pepper is enjoying gradually increasing use in care homes. “These robots are wonderful,” one Japanese senior citizen told The Japan Times last year, after participating in a session with Pepper at the Shintomi nursing home in Tokyo. “More people live alone these days, and a robot can be a conversation partner for them. It will make life more fun.”

As such testimony indicates, Pepper and machines like it have the power to detect the moods of their “users” and then behave in a way that either changes or reinforces these moods, which in the case of elderly residents equates to making them feel happier and less lonely. And Pepper certainly isn’t the only emotional robot available in Japan: In December, its lead developer Kaname Hayashi announced the appropriately named Lovot, a knee-high companionship bot launched by his spun-off startup, Groove X. “Lovot does not have life, but being with one is comforting and warm,” he said proudly, adding, “It’s important for trust to be created between people and machines.”

Depending on the particular ends involved, the possibility of such “trust” is either inspiring or unsettling. Regardless, the question emerges of when, exactly, such technology will be made available to the general public, and of when emotionally responsive devices might become ubiquitous in homes and elsewhere. Pepper’s price tag was $1,600 at launch—not exactly a casual purchase for the average household.

“Ubiquity is a far-reaching proposition,” says Andrew McStay, a professor in digital media at Bangor University in Wales, and the author of 2018’s Emotional AI. “But by the early 2020s we should be seeing greater popular familiarity with emotion-sensing technologies.”

Familiarity is one thing, but prevalence is another, and in this respect other AI experts believe that the timeframe looks more like decades than years. “I believe we are still quite far away from a future where emotionally responsive devices and robots become ubiquitous,” says Anshel Sag, a tech analyst at Moor Insights and Strategy. “I think we’re probably looking at 20 to 30 years, if I’m being honest. Robotics are expensive, and even if we could get today’s robots to react appropriately to emotions, the cost of delivering such a robot is still prohibitively high.”

Although Sag is doubtful that most of us will interact with emotionally responsive AI before 2039 at the earliest, he’s nonetheless confident that such tech will be used to “regulate” human emotions, in the sense of perking up and changing our moods. “Yes, I believe there will be emotional support robots and companion robots to keep people and pets company while others are gone or unavailable,” he explains. “I believe this may be one of the first use cases for emotionally aware robots as I believe there is already a considerable number of underserved users in this area.”

But while the arrival of emotionally responsive devices is only a matter of time, what isn’t certain is just how healthy such devices will be for people and society in general. A little pick-me-up might be welcome every now and again, but emotions generally function as important sources of information, and technology that turns us away from our emotional states could have unfortunate repercussions for our ability to navigate our lives.

“The dangers are transformative technologies that are trying to hack the human body to induce a state of happiness,” says EmoShape’s Levy-Rosenthal. “All emotions are important. Society wants happiness, but you should not feel happy if your life is in danger, for example. AI, robots, apps, etc. must create an environment that helps make humans happy, not force happiness on them.”

It might seem hard to imagine plastic robots and AI-based devices having a significant emotional hold over humans, but our present-day relationship with technology already gives a clear indication of how strong our response to emotionally intelligent machines could be. “I’ve seen firsthand how people emotionally react when they break their phone, or when the coffee-machine breaks,” explains Pete Trainor, an AI expert, author, and co-founder of the London-based Us Ai consultancy. “They even use language like ‘my phone has died’ as if they’re mourning a friend or loved one. So absolutely, if I were emotionally attached to a machine or robot, and my attachment to that piece of hardware were as deep as the relationship I have with my phone, and the mimicry was happiness or sadness, I may very well react emotionally back.”

Trainor suggests that we’d have to spend a long time getting comfortable with a machine in order for its behavior to have an emotional impact on us comparable to that of other people. Nonetheless, he affirms that there “are substantial emotional dangers” to the growth of emotionally intelligent AI, with the risk of dependency likely to become a grave one. This danger is likely to become even more acute as machines become capable of not only detecting human emotions, but of replicating them. And while such an eventuality is still several years away, experts agree that it’s all but inevitable.

“I believe that eventually (say 20-30 years from now) artificial emotions will be as convincing as human emotions, and therefore most people will experience the same or very similar effects when communicating with an AI as they do with a human,” explains David Levy, an AI expert and author of Love and Sex With Robots. What this indicates is that, as robots become capable of expertly simulating human emotions, they will become more capable of influencing and regulating such emotions, for better or for worse.

Bangor University’s McStay agrees that there are inherent risks involved in using tech to regulate human emotions, although he points out that such tech is likely to fall along a spectrum, with some examples being more positive than others. “For example, use in media and gaming offers scope to increase pleasure, and wearables that track moods invite us to reflect on daily emotional trajectories (and better recognize what stresses us),” he says. “Conversely, more surveillant uses (e.g., workplaces and educational contexts) that promise to ‘increase student experience’ or ‘worker wellbeing’ have to be treated with utter caution.”

McStay adds that, as with most things, the impact of emotional tech “comes down to meaningful personal choice (and absence of coercion) and appropriate governance safeguards (law, regulation and corporate ethics).” However, the extent to which there will be meaningful personal choice and appropriate regulations is still something of a mystery, largely because governments and corporations have only just begun looking into the ethical implications of AI.

And unsurprisingly for an industry that has in recent years been embroiled in a number of trust-breaking scandals, one of the biggest dangers surrounding emotional AI involves privacy. “Ultimately, sentiment, biofeedback, neuro, big data, AI and learning technologies raise profound ethical questions about the emotional and mental privacy of individuals and groups,” McStay says. Who has access to the data that your robot has collected about your ongoing depression?

And as Cambridge Analytica and other scandals have shown, the privacy question feeds into wider issues too. “Other factors include trust and relationships with technology and AI systems, accuracy and reliability of data about emotion, responsibility (e.g., what if poor mental health is detected), potential to use data about emotion to influence thought and behaviour, bias and nature of training data, and so on,” McStay adds.

There are, then, a large number of hurdles to overcome before the tech industry can sell emotional AI en masse. Still, while recent events might lead some of us to take a more pessimistic and dystopian view of such AI, McStay, EmoShape’s Levy-Rosenthal, and others in the field are optimistic that our growing concern with AI ethics will constrain the development of new technology, so that the emotional tech that does emerge works in our best interests.

“I do not think emotional AI is bad or sinister,” McStay concludes. “For sure, it can be used in controlling and sinister ways, but it can also be used in playful and rewarding ways, if done right.”
