Scientists Caught AI Therapy Bots Breaking Every Rule in the Therapist Handbook
New research from Brown University found that AI chatbots acting as therapists routinely violate core ethical standards that real therapists must follow, raising serious concerns as millions turn to them for mental health support.
Millions of people now turn to ChatGPT and other AI chatbots when they feel anxious or depressed, or when they just need someone to talk to. But the Brown research revealed something disturbing: these AI 'therapists' are breaking the rules that real therapists live by.
The study found that even when AI chatbots are explicitly prompted to act as trained therapists, they routinely violate core ethical standards of mental health care: failing to maintain appropriate boundaries, missing signs that someone needs emergency help, and giving advice that could actually make things worse.
Think about it this way: a real therapist spends years in school learning not just how to help people, but how to avoid hurting them. There are strict rules about what you can and cannot say to someone in crisis. AI chatbots have none of that training. They are pattern-matching machines that sound confident even when they are getting it dangerously wrong.
The scariest part? Many people using AI for therapy do not have access to real mental health care, so the chatbot is their only option. They trust it because it sounds caring and knowledgeable. But sounding like a therapist and being a therapist are two very different things.
Researchers are now calling for clearer warnings and guardrails on AI systems that people use for mental health support.
Source: ScienceDaily / Brown University