
AI wants to revolutionize healthcare—it may be doing more harm than good

Tech experts have a history of overpromising. Should they be doing so in life and death situations?


Agnes Arnold-Forster


In 2018, British trauma and orthopedic surgeon Alex Young founded Virti, a company that uses artificial intelligence to build “virtual patients” and simulations that train healthcare professionals, pledging to improve their “soft skills” and make them better at communicating with their patients. Dr. Young is evangelical about the power of machine learning, immersive experiences, and virtual and augmented reality to transform the world of healthcare.


And he is not the only person attempting to use AI to make healthcare better. As Dr. Hannah Zeavin, an assistant professor at Indiana University, told the Daily Dot, “AI in healthcare promises a great deal and has wended itself into almost every aspect of the industry, from virtual humans who ‘offer therapy’ to algorithms that determine who gets an ER bed, a kidney, a surgery.”

Various companies have invested in ways for supposedly infallible machines to replace people who make mistakes. Google has been using big data to build models that warn clinicians and predict high-risk conditions, such as sepsis and heart failure. Jvion offers a “clinical success machine” that identifies the patients most at risk as well as those most likely to respond to treatment protocols. In 2020, Microsoft announced AI for Health, devoting $40 million to an ambitious five-year program.

Advocates of AI insist that it can lighten the load of overworked healthcare professionals and improve the accuracy, efficiency, and cost-effectiveness of medicine. Brad Smith, the president of Microsoft, said that AI has the potential to solve “some of humanity’s greatest challenges.” Babylon Health, a digital-first health service provider that combines an AI-powered platform with virtual clinical consultations, calls AI a “transformational force in healthcare.” In the world of AI, this kind of language is everywhere.


The tenor of discussion frequently veers toward the grandiose and utopian, despite some critics claiming that even the term “artificial intelligence” is a lie, as the vast majority of systems currently advertised are merely statistical pattern-recognition and replication engines.

Yet despite this enthusiasm, various healthcare AI companies have seen their fortunes falter in the last few months. Babylon’s shares have fallen nearly 87% since it went public last October. Alphabet has closed its healthcare division, Google Health. A U.K. company, Sensyne, was recently fined by the London Stock Exchange, and IBM sold off the arm of its business that provided “AI healthcare solutions.” These were supposed to be disruptive technologies that brought the “move fast and break things” ethos to the traditional, slow-paced, and conservative world of medicine.

But tangible successes have been hard to demonstrate.  

In some cases, AI doesn’t seem to be able to perform analytical tasks any better than people can. A study published in the British Medical Journal found that 94% of AI systems that scanned for signs of breast cancer were less accurate than a human radiologist. 


Algorithms used in the ER to determine who next gets admitted are more likely to direct white patients to a bed than Black patients, even when the two have the exact same symptom profile. There are also concerns about ethics and privacy. Companies that claim to use AI to improve people’s mental health are often able to circumvent regulatory systems, avoid FDA scrutiny, and thrive without peer-reviewed evidence of efficacy. Several such companies—including Babylon—have also suffered serious data breaches.

But healthcare AI is a big and diverse field, and while some elements haven’t quite lived up to expectations, others are proving remarkably lucrative.

Virti is one of the pandemic’s monetary success stories. It uses “immersive learning, artificial intelligence, and game design” to train people to be better workers. Its systems are now embedded in various companies, universities, and hospitals. It raised $10 million in Series A funding, has increased its revenue by 978%, and was even named one of TIME’s Best Inventions of 2020.

But as many tech companies have proven over and over again, raising money doesn’t mean you’re saving the world—despite the promises of many founders.


Virti builds “virtual humans”—and, in the case of healthcare, “virtual patients”—that are supposed to interact with learners “just like ‘real’ humans would.” These avatars can be used to train technical, clinical skills—something other organizations have done before—but the company also claims to use AI to train “soft skills.” The virtual patients gather data and provide feedback to improve healthcare professionals’ communication and enhance their emotional literacy.

Virtual patients have been provided to teaching hospitals across Europe and the United States to train doctors and nurses to better talk to the people they serve and to let them practice empathy and other interpersonal skills. Virti provides simulations that offer a “low-pressure way to practice high stakes conversations” and “foster an emotional connection for learners.” These simulations “recreate the stress and emotion of challenging conversations” and use AI tools like behavioral and sentiment analysis to capture data on employees’ emotional intelligence, identify “high EQ” staff, and “level-up [the] workforce empathy and drive human behavior changes.” In one U.K. hospital, Virti is even being used to tackle racial “inequality and discrimination.”

In the highly industrialized and pressurized world of modern medicine, doctor-patient communication is frequently flawed. Evidence suggests that poor communication skills on the part of healthcare workers are the cause of most patient complaints and can engender anxiety, mistrust of medical providers, dissatisfaction with healthcare services, and poor clinical outcomes.

One of the investors in Virti is Cedars-Sinai Medical Center, which during the pandemic used Virti’s technology to train its healthcare professionals. Russell Metcalfe-Smith, director of the Women’s Guild Simulation Center for Advanced Clinical Skills at Cedars-Sinai, said in an interview back in 2021 that they found Virti’s combination of VR, augmented reality, and AI “very valuable to observe a doctor’s thinking process.” 


Virti and other services that use AI to read people’s feelings, assess their speech for meaning, and train doctors to be better at emoting are part of a broader branch of AI called Emotion AI, or Affective Computing. Emotion AI is supposed to allow computers to analyze and understand verbal and nonverbal signs such as facial expressions, tone, and gestures in order to assess a person’s emotional state. This subfield is as compelling as it is lucrative, despite a lack of robust underpinning evidence. This kind of training is touted as superior to more traditional alternatives because it is more cost-effective, requires less labor, and can be delivered at a distance.

Emotion AI rests on a set of assumptions: first, that there is a small number of distinct and universal emotional categories; second, that we reveal those emotions on our faces and through our words; and third, that these feelings can be detected by machines. But researchers have identified various flaws in these assumptions. Not only are they based on outdated studies with flimsy methodologies, but as Dr. Zeavin told the Daily Dot, “technologies do not somehow, magically, free us from human bias.” Affective AI is infused with cultural, gender, and racial prejudices—not to mention its frequent inability to deal with any degree of neurodiversity.

Emotion recognition systems are known to flag the speech styles of women, and particularly Black women, differently from those of men. As Lisa Feldman Barrett, a professor of psychology at Northeastern University, put it, “Companies can say whatever they want, but the data are clear. They can detect a scowl, but that’s not the same thing as detecting anger.” In a life-or-death field like medicine, where communication is key, these flaws could be the difference between good care and bad.

Virti did not respond to the Daily Dot for comment on these concerns about affective AI, and has not publicly addressed them.


There are already excellent, low-tech ways to support healthcare professionals’ wellbeing. Interventions like Balint Groups, shorter working hours, Schwartz Rounds, better resourcing, increased staffing, a humanities education, and face-to-face therapy have all been found to improve workplace satisfaction for doctors and nurses and to strengthen their ability to communicate effectively with their patients. Becoming overly reliant on AI simulations not only risks reducing the complexity of human feelings and relationships to bare numbers, but also diverts money away from structural solutions to prop up a tech company’s stock price.

There is very little independent evidence out there to either support or undermine Virti’s specific claims that its products can train healthcare workers to feel and communicate better. When Metcalfe-Smith from Cedars-Sinai spoke to the Daily Dot, he was complimentary about Virti, but cautious about the transformative potential of its technologies. “The feedback was generally positive,” he said, “and for some the virtual experience will have been exceptional.” 

Cedars-Sinai is still using Virti’s system to train its workers, but only to supplement in-person training. Some of the people who used the immersive technology still preferred more traditional methods of training, and Metcalfe-Smith told the Daily Dot that virtual technologies and AI had plenty of limitations. While they offer something useful, particularly in times of crisis, Virti has complemented, rather than replaced, more conventional methods of teaching.

There are many reasons why even healthcare workers who are relatively positive about AI hedge their assessments of its potential. Contributing to anxieties about the threat posed by AI to healthcare jobs, the computer scientist Geoffrey Hinton said in 2016, “we should stop training radiologists now; it is just completely obvious deep learning is going to do better than radiologists.”


Such claims run up against the actual efficacy of some AI systems, which, as mentioned, are often worse than human radiologists at detecting breast cancer.

But many of AI’s advocates aren’t all that interested in saving individual lives when they can instead theoretically “save the world.” In healthcare, that misses the point entirely. Forcing AI into this field isn’t the same as disrupting the other industries tech evangelists have targeted.

Because waiting a little longer for a ride in the rain after an algorithm tweak isn’t the same as a computer processor misdiagnosing the lump in your chest.

