The article shows why I don’t trust chatbots with mental health: even the therapists quoted say AI should only be a supplement, not a substitute, and the piece documents cases where bots mishandled suicidal ideation. It’s telling that states have started banning or restricting “AI therapy,” and that researchers found that some bots literally listed New York bridges when a user hinted at self-harm; safety breaks down exactly when it matters most. A real clinician can read body language, tone, and silence; as one therapist in the story puts it, “the actual relationship with another person is where good work happens.” The article itself concedes as much when a therapist warns that he “strongly dissuades patients from attempting to diagnose themselves with any mental health condition using AI,” because humans integrate nonverbal cues that bots miss. Yes, journaling with a bot may feel helpful at 2 a.m., but that convenience shouldn’t replace the accountability and care of talking to trained people who can intervene responsibly. Bottom line: for mental health, I’m against AI chatbots; talk to real humans.