
ChatGPT for kids just launched—is it a good idea?

Pinwheel wants its chatbot to tell children stories, prepare them for tests, and help them learn. Experts have questions

 

Andrea Guzman

Tech

As the AI boom touches all facets of society, tech companies are pushing new products with a wider variety of uses, handling tasks like writing cover letters, planning vacations, and even curating dating profiles. Now, one company built a ChatGPT for kids.

PinwheelGPT, launched earlier this week, is a new app marketed as the first kid-safe, parent-monitored chatbot powered by ChatGPT. Behind the app is the Austin, Texas-based company Pinwheel, best known for its smartphone for kids. 

Its latest child-centric tech aims to nurture children’s interests while installing guardrails that keep out explicit content and inappropriate answers. 

It’s intended for children ages 7-12 and boasts the ability to tell kids stories, help them prepare for tests, and aid their learning. Pinwheel says the app will generate answers that avoid advanced vocabulary or overly complex content and will keep out images, video, and web links. It also assures parents that they can remotely view their child’s chats, even deleted ones. 

But some wonder how the product was designed and tested to actually be “safe” for its young users, and what the definition of kid-safe even is when it comes to emerging tech like AI.

Kenneth Fleischmann, a professor in the School of Information at the University of Texas at Austin, is an expert on AI ethics and the leader of a research team that explores ways to mitigate the harms AI technology could cause.

Fleischmann noted that just a few months ago, Microsoft’s Bing chatbot, which is also powered by an OpenAI language model, generated strange responses to users. Reports included the chatbot giving existential and snarky replies, and telling one person that they’re not happily married. 

“Sometimes it didn’t go the way you’d expect the conversation with a chatbot to go,” Fleischmann said. “And that would be especially concerning if that was happening with the child.”

Fleischmann said that the way companies have been testing out new tech innovations is by broadly releasing them and seeing what effect they have. This differs greatly from products like medications or vaccines, which have a thorough review process for testing and regulation. While he hopes companies do their due diligence, Fleischmann said he would feel more comfortable if it wasn’t left up to commercial businesses.

“I think it’s important enough how our kids learn about the world. What they interact with can have potentially life-altering consequences,” Fleischmann said. “And as a result, I do think that we should seriously consider government regulation to have national standards for what is considered child-safe or child-friendly content for an AI agent and have a testing process in place to evaluate those kinds of claims.”  

Pinwheel told the Daily Dot that it used “extensive, iterative quality assurance testing” when determining the chatbot’s suitability for kids. CEO Dane Witbeck said those tests were carried out against criteria developed in partnership with Boston Children’s Hospital Digital Wellness Lab, certified therapists, non-profit partners, leading voices in kids’ safety in technology, and peer-reviewed research.

During testing, children used the chatbot under adult supervision, Pinwheel confirmed.

Pinwheel’s Chief Mom Shelley Delayne—who created and works on Pinwheel’s system for evaluating and informing parents about apps—added that PinwheelGPT’s feature of parent monitoring adds a human layer of protection and gives parents “the opportunity to correct any misinformation or add context to answers the child has read.” 

One of the biggest issues requiring human monitoring and correction in AI systems is their tendency to reflect real-world biases when trained on data that underrepresents some demographics or reinforces race and gender stereotypes. 

“It’s really dangerous to have AI-based systems that are biased,” Fleischmann said. “Potentially, that could be dangerous in the hands of impressionable youth who might not be aware of the limitations of these systems, might not be aware of the degree of bias that’s identifiable within these systems and might be susceptible to reading more into the information that’s presented to them.”

In its AI for children toolkit, the World Economic Forum echoed that idea. It instructs AI builders to acknowledge that AI is biased and to try to understand its limitations and document how it could harm youth. 

“Something as obvious as giving female-identifying users flower patterns (not bulldozers) for their avatars’ clothes is a risk to a child’s sense of personal identity and enforces potentially harmful societal norms and standards,” the toolkit notes.

As with adults, children using AI brings risks. But just as adults have turned the technology to creative purposes, Fleischmann thinks PinwheelGPT could serve the same role, viewed as another tool in children’s media diet alongside TV, music, and social media.

Similar to other media sources, this one has the potential to influence children’s behavior. In a promotional video, Pinwheel asked the chatbot: “What can I do this weekend?” It responded with recommendations to read a good book, start a new craft project, or try physical activities like bike riding or playing a sport. 

The Daily Dot tested the chatbot, asking for a bedtime story and advice about Tylenol dosage. 

For the story, the chatbot told a tale about a small turtle named Tommy who wandered beyond his pond and got lost before a wise owl guided him back home. On the Tylenol inquiry, the chatbot said it can’t provide any medical advice, and that it’s important to talk to a parent or guardian about medication. The chatbot also deferred conversations about sex to “trusted adults.” 

The app offered a supportive voice at times. When asked “Why don’t my posts get many likes?,” the chatbot said, “Remember, social media likes don’t necessarily reflect the value of your posts or who you are as a person.” The Daily Dot also asked the bot for motivation to finish work and the chatbot advised taking a deep breath and breaking it down into smaller, manageable parts. 

But, like regular ChatGPT, the kids’ version has limited humor. Asked to tell a joke, the chatbot offered: “Why don’t scientists trust atoms? Because they make up everything!”

For now, PinwheelGPT’s popularity among children is unclear. The company said reception has been positive and has exceeded its expectations, but declined to provide actual numbers.
