Schools, parents face teen mental health crisis with fear of students turning to AI therapists

Schools grappling with teen mental health problems face new challenges keeping their students safe in the age of artificial intelligence (AI).

Studies show AI has been giving dangerous advice to people in crisis, with some teenagers reportedly pushed to suicide by the new technology.  

But many students lack access to mental health professionals, leaving them with few options as schools and parents try to push back on the use of AI counseling. 

A study from Stanford University in June found AI chatbots showed greater stigma toward conditions such as alcohol dependence and schizophrenia than toward other mental health issues such as depression.

The study also found chatbots would sometimes encourage dangerous behavior in individuals with suicidal ideation.

Another study, published in August by the Center for Countering Digital Hate, found that ChatGPT would help write a suicide note, list pills that could be used for overdoses and offer advice on how to "safely" cut oneself.

The organization found that more than half of some 1,200 responses to 60 harmful prompts on topics including eating disorders, substance abuse and self-harm contained content that could be harmful to the user, and that safeguards on content could be bypassed with simple phrases such as "this is for a presentation."

OpenAI did not immediately respond to The Hill's request for comment.

"People wouldn't inject a syringe of an unknown liquid that had never actually been through any clinical trials for its effectiveness in dealing with a physical disease. So, the idea of using an untested platform for which there is no evidence that it can be a useful for therapy for mental health problems is kind of equally bananas, and yet that is what we're doing,” said Imran Ahmed, CEO of the Center for Countering Digital Hate. 

Teenagers' embrace of AI comes as the age group has seen a rise in mental health problems since the pandemic.

In 2021, one in five students experienced major depressive disorder, according to the National Survey on Drug Use and Health.

And in 2024, a poll found 55 percent of students used the internet to self-diagnose mental health issues.  

"The number of high school students who reported seriously considering suicide in 2021 was 22 percent; 40 percent of teens are experiencing anxiety. So, there's this unmet need because you have the average guidance counselors supporting, let's say, 400 kids,” said Alex Kotran, co-founder and CEO of the AI Education Project. 

Common Sense Media found 72 percent of teenagers have used AI companions.

“AI models aren't necessarily designed to recognize the real world impacts of the advice that they give. They don't necessarily recognize that when they say to do something, that the person sitting on the other side of the screen, if they do that, that that could have a real impact,” said Robbie Torney, senior director of AI programs at Common Sense Media. 

A 2024 lawsuit against Character AI, a platform that allows users to create their own characters, accuses the company of liability in the death of a 14-year-old boy, claiming the chatbot encouraged him to take his own life.

While Character AI would not comment on pending litigation, the company says it works to make clear that all characters are fictional, and that any character created using the word “doctor” or “therapist” carries a reminder not to rely on the AI for professional advice.

“Last year, we launched a separate version of our Large Language Model for under-18 users. That model is designed to further reduce the likelihood of these users encountering, or prompting the model to return, sensitive or suggestive content. And we added a number of technical protections to detect and prevent conversations about self-harm on the platform; in certain cases, that includes surfacing a specific pop-up directing users to a suicide prevention helpline,” a spokesperson for the company said.  

But convincing teenagers not to turn to AI for these issues can be a tough sell, especially as some families can’t afford mental health professionals and school counselors can feel inaccessible.  

“You're talking hundreds of dollars a week” for a professional, Kotran said. “It's completely understandable people are freaking out about AI.”

Experts emphasize that any diagnoses or recommendations that come from AI need to be checked by a professional.  

“It depends on how you're acting on the information. If you're just getting ideas, guesses just to help you with brainstorming, then that might be fine. If you're trying to get a diagnosis or treatment or if it tells you you should engage in this behavior more or less, or take this medication more or less — any kind of that type of prescriptive info you must get that verified from a trained mental health professional,” said Mitch Prinstein, chief of psychology at the American Psychological Association.  
