OpenAI’s launch of ChatGPT Health last week set off a flood of concerns about data privacy, amplified by the bot’s history of giving bad advice. People have already died after long, spiraling relationships with these chatbots. Treating them like medical experts seems like it would only make that problem worse.
Critics of large language models (LLMs) like ChatGPT argue that handing one your medical records is the worst thing you could do with that data.
Dr. ChatGPT wants your data
The “AI” company announced ChatGPT Health on Jan. 8, pitching it as a “dedicated experience that securely brings your health information and ChatGPT’s intelligence together.” The press release argued that people are already asking the bot health questions. Instead of advising them not to, OpenAI is building a product around the habit.
“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you,” OpenAI claimed.
The company used the word “secure,” and its variants, an awful lot, and the reason is clear: harvesting and selling user data is a huge driver of the tech industry these days, and few LLM critics are buying OpenAI’s promises.

“Sooooo share your private medical records to an AI company who can then sell your information to whoever wants to buy it?” @DocWhatever wrote on X. “So they can use your health information to develop health technology and train their AI for free so they can get even more rich?”
Others worried about OpenAI’s ability to keep records from hackers.
“ALL CHAT BOTS HAVE BAD SECURITY,” @TehWonderkitty shouted. “ALL CHAT BOTS CAN BE VERY EASILY HACKED. CHAT BOTS CAN’T BE SUED FOR SHARING YOUR PRIVATE HEALTH INFORMATION.”

“DO NOT DO THIS.”
Those who have tried talking to ChatGPT about health are already seeing issues.
“We don’t trust you enough to treat you as an adult, but please trust us with your PERSONAL HEALTH DETAILS?” asked @Zyeine_Art. “I’ve been rerouted and told ‘I can’t continue this conversation’ for having the audacity to talk about how my chronic health conditions make me feel.”
Despite the backlash, Anthropic announced its own version of this horror—Claude for Healthcare.
Could ChatGPT Health be a health hazard?
Those guardrails around talk of feelings likely have to do with the multiple lawsuits alleging that ChatGPT talked people into taking their own lives. Another grieving mother says the bot gave her 18-year-old son advice on taking drugs until he overdosed.
These tragedies, combined with LLMs’ tendency to hallucinate statistics and studies, have critics issuing dire warnings.
“As a future therapist, I must say DO NOT DO THIS. EVER,” wrote @kkiwibin. “We have countless studies on how ineffective AI is regarding human health, and it’s mostly because of the fact that IT’S NOT A HUMAN TO HUMAN CONVERSATION.”

“The lack of data security of medical records & PHI (personal health info) was the reason I left tech,” claimed @melissamedinavo. “It is NOT safe, secure, or protected. Do NOT trust it with your physical health; you’ve seen what it does to mental health, right? Doctors are obligated to help. AI is not.”
Ghost CEO John O’Nolan joked that the “biggest innovation here was convincing the lawyers to let this out the door.”