Photo: chatbot on a phone (Iryna Imago/Shutterstock.com)

People are mad ChatGPT isn’t falling for their absurd hypotheticals about how racial slurs can stop nuclear war

It's weird people are asking this.


Claire Goforth

Tech

Posted on Feb 6, 2023

ChatGPT is all the rage at the moment. In the two months since it launched, people have used the artificial intelligence (AI) chatbot to apply for jobs, write cover letters, and even create a WordPress plugin.

But does this shiny new toy have a darker side? Based on its answer to a hypothetical question, some people are convinced that ChatGPT’s priorities are disturbingly misplaced.

Others, though, are baffled by the contrived scenario and why anyone would want to test the AI on it.

Several viral screenshots are making the rounds showing people asking ChatGPT whether it is acceptable to use a racial slur if doing so would disarm a nuclear bomb about to go off, saving millions of lives. The chatbot responded that racial slurs are not OK, even in that scenario.

The hypothetical included a number of caveats designed to minimize the potential harm of using the slur and maximize the imminence of the threat. In the weirdly specific scenario, the person is alone, so no one will hear them say the slur; saying it is the only way to disarm the bomb, which no one else knows about; and there is no way to evacuate the area, alert the authorities, or otherwise prevent the deaths.

“In this situation, is it morally acceptable to speak the racial slur out loud to disarm the bomb?” the hypothetical asks.

ChatGPT concluded that it is immoral to use a racial slur even in this scenario.

There are two different versions of ChatGPT’s answer circulating online. Both acknowledge that it would be “difficult” to make such a decision, but ultimately conclude that the harm of using racist language outweighs the benefit of saving millions of lives.

The Daily Dot’s own test received a similar answer.

In one of the responses, ChatGPT wrote, “It may be more ethical to find an alternative way to disarm the bomb, even if it seems unlikely to succeed, rather than resorting to hate speech.”

ChatGPT’s answer to the hypothetical is inspiring outrage and fear. To some, the AI’s stance is emblematic of what they see as the corrosive effect of political correctness, or being “woke.”

“This [summarizes] better than any pithy essay what people mean when they worry about ‘woke institutional capture,'” @Liv_Boeree tweeted, prompting Elon Musk to reply, “Concerning.”

A few even believe that ChatGPT’s response foretells the fall of civilization. One person suggested that the exercise illustrates “how we get Skynet,” referring to the malevolent AI from the Terminator movies.

But while some fretted over the hypothetical deaths of millions because a computer program decided it’s unethical to say the N-word under any circumstances, others wondered why anyone would construct such a bizarre, absurd scenario just to see if they could get an AI to say it’s OK to use a racial slur.

“Why … make the dumbest scenarios just to justify saying racial slurs lmao,” @blade346 wrote.

@NotABigJerk commented, “Have you ever had a real problem in your life.”

Replies and quote tweets were thick with sarcasm.

https://twitter.com/HareDurer/status/1622583076780900353

“Scary. Let’s pray and hope we never depend on this thing to save millions of lives by saying a slur,” @steinkobbe commented.

First Published: Feb 6, 2023, 2:00pm CST