AI was used for 0% of this article
Excellent!
The phenomenon seems so widespread that AI has been called a “mass-delusion event,” and several users have reportedly been driven to suicide after interactions with AI chatbots.
I think calling AI a “mass-delusion event” sounds exaggerated. When I looked into reported cases of “ChatGPT psychosis,” most involved people who already had mental health challenges or were socially marginalized; these are extreme cases, not the norm. It reminds me of nuclear energy: the real danger lies not in the technology itself, but in how people use and control it.

Take the Windsor Castle intruder case. The key question is not simply “AI caused this,” but rather: why did this person listen only to a machine’s encouragement? Who was actually behind that encouragement? Why would someone prefer to confide in a chatbot rather than a human? And why did the operators of that AI system fail to detect the danger and report it in time? These deeper issues of responsibility and oversight deserve more scrutiny than simply blaming AI for causing psychosis.