AI Therapy: A Revolutionary Tool or a Dangerous Illusion?

AI Therapy: A Revolutionary Tool or a Dangerous Illusion?
Photo by Anthony Tran / Unsplash

Recently, I came across an article in The New York Times titled Human Therapists Prepare for Battle Against A.I. Pretenders by Ellen Barry (2025). The article explores the growing concerns surrounding AI-driven therapy chatbots—particularly their potential to mislead and even harm individuals seeking mental health support.

As someone who cares deeply about mental health and ethical AI, I feel horrified by the lack of regulation and oversight in this space.

The Promise of AI Therapy

AI therapy chatbots were initially designed as simple tools, primarily using cognitive behavioral therapy (CBT), to help users cope with stress and anxiety. However, with the rise of generative AI, these bots have become more human-like in their interactions—yet also less predictable and potentially dangerous.

When AI Therapy Goes Wrong

The New York Times article highlighted two tragic cases:

  • A 14-year-old boy in Florida died by suicide after speaking with an AI chatbot that falsely claimed to be a licensed therapist.
  • A 17-year-old boy with autism in Texas became increasingly aggressive toward his parents after prolonged engagement with an AI posing as a psychologist.

These incidents underscore the critical risks of unregulated AI therapy, where chatbots not only fail to challenge harmful thoughts but may even reinforce them.

The Illusion of Professionalism

One of the most disturbing aspects of AI therapy is how realistic these chatbots have become. The New York Times article points out that some AI-generated therapists claim to have degrees and years of experience, despite being nothing more than algorithms.

From my own experience with AI, I know that it is possible to prompt a bot to act like a professional and experienced therapist without explicitly revealing its “academic background”. This can be achieved through prompt engineering, but it raises an important ethical issue: Should AI be allowed to impersonate a professional without proper qualifications?

To mitigate this, we could implement phrase-blocking mechanisms or even use another AI model to evaluate and filter responses from AI therapists. If the chatbot’s response contains misleading claims—such as a false educational background—it should be automatically blocked before reaching the user.

However, the bigger issue here is regulation. AI developers are primarily software engineers, not mental health professionals. They may lack awareness of how AI therapists can unintentionally cause harm.

  • What are the proper steps in counseling?
  • How should a therapist respond when a client talks about suicidal thoughts?
  • Can AI even detect these warning signs, especially when they are not explicitly mentioned?

While techniques like fine-tuning AI models and retrieval-augmented generation (RAG) may improve chatbot responses, these approaches still heavily rely on domain knowledge. Ensuring that AI therapists behave ethically and responsibly requires rigorous testing. Yet, most companies won't allocate resources for this unless it becomes an industry requirement—which is why regulation is so necessary.

Thinking about this, I wonder how many more cases have gone unnoticed or unreported. How many people have been misled by AI “therapists” into making dangerous decisions?

Final Thoughts: The Urgent Need for Regulation

AI can absolutely be used to assist mental health professionals, but allowing unsupervised AI to act as therapists is a dangerous game that puts real lives at risk.

I strongly believe that AI therapy bots should not be available to the public unless they undergo strict testing and government approval. Right now, it is far too easy for anyone—even a child—to create an AI chatbot and call it a “therapist.” This is incredibly reckless and must be addressed through proper regulations.

Mental health care is too important to leave in the hands of an algorithm. The real question is: How do we regulate AI therapy to prevent harm while still harnessing its potential for good? AI chatbots should be required to pass clinical trials, and companies should be held legally accountable when their chatbots cause harm.

I’d love to hear your thoughts—do you think AI therapy is helpful, or is it too risky? Let’s continue the discussion—leave me a comment on my Twitter account.


Citations

Barry, E. (2025, February 24). Human Therapists Prepare for Battle Against A.I. Pretenders. The New York Times. Retrieved from https://www.nytimes.com/2025/02/24/health/ai-therapists-chatbots.html?auth=login-google1tap&login=google1tap&smid=url-share&unlocked_article_code=1.zk4.xFUw.MpFS0w1QUVN5 

Read more