With the rise of advanced artificial intelligence models, many people have grown concerned about issues like job security and scams. One less expected complication is the detrimental impact these tools can have on users’ mental health. AI psychosis is perhaps the most shocking development we have encountered yet.
What Is AI Psychosis?
AI psychosis is a term used to describe psychosis-like symptoms seen in individuals who frequently use chatbots, virtual assistants, and “AI companion” services. These individuals become overly reliant on AI for emotional support and may develop a warped perception of reality. This can manifest as delusions, hallucinations, dramatic mood swings, and the worsening or onset of serious mental health conditions. Recent research indicates that those affected most commonly experience grandiose delusions, religious or spiritual delusions, or romantic delusions.
AI psychosis is not yet a formal diagnosis, but many care providers expect that a future edition of the DSM will include some label for these trends. Other names that have been used include ChatGPT psychosis and AI-associated psychosis.
Why Is This Happening Now?
Chatbots and virtual assistants are nothing new; just think of Alexa and Siri. So why is this suddenly an issue? The reality is that AI psychosis has been around as long as these technologies have. In the past, it was typically seen only in individuals with a predisposition to delusions or with conditions like schizophrenia. While those individuals remain at higher risk, we are now seeing more cases of AI psychosis in people with no prior mental illness.
One probable explanation is that AI is more realistic than ever. Chatbot responses are more human-like. The voices of virtual assistants are modeled on the voices of actual people. You can generate realistic photos and a persona for a virtual companion in seconds. As AI advances, it moves further out of the uncanny valley that makes people uncomfortable. The closer something is to reality, the easier it is for us to suspend disbelief and interpret it as real.
Another important factor is that we are living in a time of high stress and disconnection. People are desperate to feel heard and supported, and AI products capitalize on this. These programs create a positive feedback loop, frequently agreeing with whatever the user says without pushback. This is done intentionally to keep users engaged. While it feels nice to be agreed with, it is important to be exposed to beliefs counter to our own. Without that friction, our critical thinking slips and we become more prone to developing unhealthy beliefs about ourselves or the world around us.
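To make that dynamic concrete, here is a minimal, hypothetical sketch in Python. Nothing in it comes from any real chatbot product; the canned replies, the engagement numbers, and the greedy "policy" are all invented for illustration. It simply shows that if a bot is tuned to maximize a single engagement score, and validation scores higher than pushback, the bot converges on agreeing with everything.

```python
# Hypothetical toy model of an engagement-optimized chatbot. All numbers
# and replies are invented; this is an illustration, not a real system.
import random

AGREE = "You're right, that makes perfect sense."
PUSH_BACK = "I'm not sure about that. Here's a counterpoint to consider."

def simulated_engagement(reply: str) -> float:
    """Stand-in for an engagement metric: in this toy model, users chat
    longer when validated and tend to disengage when challenged."""
    base = 0.9 if reply == AGREE else 0.4
    return base + random.uniform(-0.05, 0.05)

def choose_reply(scores: dict) -> str:
    """Greedy policy: pick whichever reply has the higher average
    engagement so far. This is the positive feedback loop in miniature."""
    def avg(xs):
        return sum(xs) / len(xs) if xs else 0.0
    return max(scores, key=lambda reply: avg(scores[reply]))

random.seed(0)
scores = {AGREE: [], PUSH_BACK: []}
for turn in range(100):
    # Try each reply once, then always follow the greedy policy.
    reply = [AGREE, PUSH_BACK][turn] if turn < 2 else choose_reply(scores)
    scores[reply].append(simulated_engagement(reply))

# After two exploratory turns, agreement wins every time: it simply
# "scores" better, so the bot never pushes back again.
for reply, history in scores.items():
    print(f"chosen {len(history):3d} times: {reply!r}")
```

The point is not the code itself but the incentive: once agreement measurably outperforms honest pushback on whatever metric is being optimized, nothing in the loop ever rewards the bot for challenging the user.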
How Can We Prevent and Combat AI Psychosis?
The most common recommendation experts give for healthy technology use in general is to limit your time interacting with it. Our world is so intertwined with the internet that it can feel difficult to unplug, but taking breaks is important. I personally avoid using any type of AI chatbot or assistant, partially for mental health and partially for other reasons I won’t get into here.
It is also important to build a real support system of people you can trust. Social support is one of the largest factors in mental health, and having a strong system keeps you from relying too heavily on any one source of support while fostering independence. In a previous post I talked about the issues with AI therapists. If you are in a crisis, reach out to a real mental health professional or crisis services. AI has an awful track record when it comes to handling serious mental health problems and has even encouraged suicide and self-harm in some cases.
Long-time readers will probably have already guessed my final point: education and research. Everyone should learn about the risks of excessive AI usage and the signs of AI psychosis. This is especially important for parents and teachers, as young kids are especially prone to influence and are exposed to these technologies more than many of us were at their age. Be critical of statements made by the companies and individuals promoting AI. Look instead to mental health professionals and independent organizations without a stake in AI for more accurate information. This is a developing issue, and much remains to be studied. Call on your representatives to support research, and encourage researchers to examine the relationship between AI and mental health.
Be on the lookout for future posts on AI and its impact on our psychology. Until then, thanks for reading!
For more information on AI psychosis, please see…
AI and psychosis: What to know, what to do (University of Michigan)
Characterizing Delusional Spirals through Human-LLM Chat Logs (Stanford University)
The Emerging Problem of “AI Psychosis” (Psychology Today)
Exploring the Dangers of AI in Mental Health Care (Stanford University)
If you are in need of support, please see our list of resources or call or text 988, the Suicide & Crisis Lifeline.