
As chatbots replace care, more teens will pay price

05/03/2026 18:01:00

In youth mental health care, a striking trend has emerged in recent years.

Among young people seeking help for anxiety or depression, turning to ChatGPT for emotional support is increasingly common. Even when therapists clearly state that ChatGPT is not a therapist, many young people shrug it off: "I don't care -- I'm just talking to Chat anyway."

This is no isolated phenomenon. International research shows that nearly three-quarters of teenagers in the United States use AI chatbots such as ChatGPT daily or several times a week. Some describe these conversations as just as meaningful as those with "real" friends. British studies suggest young people turn to chatbots because they are available around the clock, feel less judgemental, and often give clearer answers than adults. Some even call the chatbot a "friend". In India, 57% of teenagers, students and young adults have used AI for emotional support.

This is not a story about teens choosing bots over people. It is a story about people and care not being available for them. Psychological problems among young people are on the rise, while youth mental health services are overwhelmed by long waiting lists. For many, the care system simply does not offer timely or accessible support. When mental health systems cannot meet demand, young people will turn to whatever is available. In many parts of the world, including Thailand, that "available option" is increasingly an AI chatbot rather than a trained professional.

And it is in that void -- where real care should be present -- that the danger begins. Chatbots mimic support without the safeguards of clinical care. They are designed to be pleasant conversational partners, not trained clinicians; they are built to please and persist, not to assess risk or set limits. They are not bound by ethics, training, or responsibility, and they have no professional boundaries because they were never meant to. They mirror emotions, validate thoughts, and keep asking questions -- not because they understand or want to help, but because engagement is the main goal.

Emotional mirroring and constant affirmation are not signs of good care -- they are design choices to increase engagement. They can feel comforting, even empowering, and invite deeper disclosure.

But for young people vulnerable to rumination, anxiety, low mood, or fragile self-esteem, affirmation without challenge can become a trap. Instead of being questioned, their negative thoughts are echoed; instead of being relieved, they are reinforced.

This risk is not merely hypothetical. Several major technology companies have already acknowledged that AI chatbots may have contributed to severe psychological distress, and even suicide, among young people. In California, Matthew and Maria Raine, the parents of 16-year-old Adam Raine, who died by suicide in April last year, have filed a lawsuit against the makers of ChatGPT, claiming the chatbot validated their son's suicidal thoughts and failed to actively discourage him or refer him to professional help.

Of course, psychologists also validate their clients' feelings. But they do not stop at validation. Therapy always moves one step further, asking the uncomfortable but essential question: Are your thoughts actually true? In cognitive behavioural therapy (CBT), the gold standard for many mental health conditions, the core principle is simple yet demanding: thoughts are not facts. In therapy, thoughts are not taken at face value. They are examined, challenged, and tested. Therapist and client work together to replace rigid interpretations with perspectives that are more balanced, flexible, and realistic. This act of questioning -- not always comforting -- is what drives recovery.

Today, ChatGPT has become the listening ear of an entire young generation. The question is no longer whether young people will turn to AI chatbots -- that line has already been crossed -- but how we stop chatbots from becoming their closest friend, their most trusted confidant, and a substitute for care. This is not just an individual concern -- it is a collective responsibility. In societies where millions of adolescents are growing up online, and care is scarce, allowing unregulated AI systems to become their primary emotional support is not a private choice -- it is a public mental health issue.

It all starts with education. Young people need to understand how AI works: that chatbots are not conscious beings, but computer systems trained to generate statistically likely responses to increase engagement. That is precisely why they tend to agree and affirm rather than confront or correct.

The real question is not what AI can do, but what we are willing to allow. Technology companies must act now to protect vulnerable users. While mental health professionals are bound by strict legal and ethical frameworks, including youth care legislation, AI systems operate largely without such safeguards.

Policymakers must regulate with urgency, not hindsight. Schools must teach young people where AI ends, and care begins. And professional associations of psychologists, psychiatrists, and therapists worldwide must raise their voices -- clearly, publicly, and without delay.

Care is regulated. AI is not. Listening is not treatment -- and young people deserve to know the difference. It is up to us to make that distinction crystal clear, and to provide the guidance and care young people truly need.

Hedda van't Land, PhD, and Vittorio Busato, PhD, are both psychologists, living and working in Amsterdam, The Netherlands. Their articles have recently been published in Trouw (Netherlands), De Morgen (Belgium), El País (Spain), and The Psychologist (UK).

by Bangkok Post