Artificial intelligence has become a regular part of everyday life. With rapid development and little regulation, it now seems to be taking over many fields, and psychology is no exception. AI tools may be useful for data analysis, research, and mood tracking, but overreliance on them in therapy can have devastating consequences. Many individuals turn to these services out of convenience or because of cost. While convenience and affordability are undeniable benefits of AI, there are numerous risks to take into account as well.
Dangerous Advice
As was seen when Google began adding AI summaries to its web searches, misinformation and bad advice are prevalent in AI models, which struggle to discern what is reliable information. These issues are not exclusive to web searches or to Google. Chatbots like the National Eating Disorder Association's (NEDA) "Tessa" have faced scrutiny for the dangerous advice they provide (for more on this case, see the resources below). In other cases, chatbots or online therapists have encouraged suicide, self-harm, or risky medication practices. When the unthinkable does happen, there is legal ambiguity over who is held responsible. There is little legal precedent in the field of AI, especially when it comes to AI providing medical advice. Hidden disclaimers often state that the bot does not offer professional advice, potentially allowing companies to avoid the penalties real doctors would face for providing dangerous guidance. The lack of safeguards and penalties for chatbots increases the risk that users will receive bad information when they are most vulnerable.
Privacy and Data Concerns
Just as there is confusion about liability for dangerous advice, there is confusion around the privacy and protection of medical data shared with chatbots. While health data is traditionally protected by the Health Insurance Portability and Accountability Act (HIPAA), there are no clear guidelines on the use of AI. HIPAA does allow patient information to be shared for research purposes after the data has been de-identified (meaning all identifying information is removed). However, data and technology experts warn that online health records can potentially be re-identified, putting patients' privacy at risk. Additionally, many chatbots are used by minors who legally cannot consent to terms and conditions. Adolescents are especially vulnerable because they may not understand the risks of using such services. Santa Clara University used the example of "Woebot" to outline steps that can be taken to increase security and protect patient privacy (see the resources below).
Individual Nuance and Sensitivity
As we have explored many times before, mental health is complex and unique to each individual. Therapists are trained to tailor treatment to particular situations and people, something AI still struggles with. Blanket advice, while often useful in basic situations, will not work for everyone. Cultural sensitivity has also been a problem for many AI models: responses may contain insensitive language or neglect cultural and individual differences entirely. No matter how much detail you provide a chatbot, it will not have the professional and personal experience of a human therapist. Real therapists can also gather information that AI cannot.
The subtle ways in which we communicate nonverbally are an important part of therapy. Reading facial expressions, understanding body language, recognizing tone: these are cues that therapists can pick up on and respond to. Without them, there is a greater risk of misunderstanding and of other hidden issues being missed. This is especially important for individuals who have diagnosed conditions or who are on medication. We often miss side effects or symptoms in ourselves that therapists are trained to spot.
Human Connection
AI has become fairly successful at imitating human language, which can create a false sense of connection with users. Empathy is a central component of therapy; we all want to feel understood. When we put AI in therapy, we put a barrier between people. Those who opt for AI therapy miss out on forming a connection with someone who understands them and their unique situation. The artificial connection can also distort our views of our real relationships and make communication more difficult. As we have seen with social media, conversations through a screen are very different from real discussions: there is a greater risk of miscommunication, and it can be difficult to convey emotion. It is also easy to get used to a bot that responds instantly and to forget that actual people are not like this. The unrealistic expectations that come from relying so heavily on technology can cause tension in our real relationships. Nothing can replace the human connection that comes from a real therapist.
AI can be a powerful tool for speeding up research or making simple tasks more convenient, but it is not a replacement for humans who understand the nuances of mental health and who have genuine empathy. Without safeguards, AI has the potential to wreak havoc in the field of mental health.
If you would like to hear more about the cases discussed or what the experts have to say, please see the following resources…
AI Therapist: Data Privacy (Santa Clara University)
Eating disorder helpline shuts down AI chatbot that gave bad advice (CBS News)
If AI provides false information, who takes the blame? (NPR)
When Your Therapist Is an Algorithm: Risks of AI Counseling (Psychology Today)
If you are in need of support, see our list of Resources.