For the last few decades, we have witnessed technological advancement at a frenzied pace. From the humble pager to language models that approach the conversational richness of humans, we have always chased the next big game-changer, and the question of “who could’ve thought?” never seems to run out of answers. Who could’ve thought, even a decade ago, that we would have AI-driven chatbots and mobile applications offering services once reserved for a therapist? Who could’ve thought, and I admit this one is more personal, that I would be seeking out these chatbots from time to time myself? I certainly could not have, and I suspect that sentiment is shared, at least among those who, like me, have only a preliminary understanding of AI and machine learning models.
Of course, this is not without reason. The capability of these tools is no small feat. AI offers a way of consolidating information that seemed like science fiction until very recently. While the concept of AI has existed since the 1950s, for most of that history its functionality remained limited to a few basic tasks. Fast forward to the 2010s: as machine learning models snowballed in breadth and capability, chatbots such as ChatGPT became woven into the fabric of fields such as business, marketing, and psychology. It is no surprise that ChatGPT’s estimated user base touched one million within the first week of its launch (Singh, 2025).
Even within the mental health context, the stride towards more efficient AI-powered tools has been purposeful. People are far less averse to relying on chatbots for advice, and again, this is not without cause. The algorithms are built to offer quick, objective, fact-based responses. Moreover, AI has shown promise in early detection and diagnosis, achieving classification accuracy as high as 90% for depression (Yan et al., 2022). Chatbots like Woebot and Wysa are designed specifically for managing depression and anxiety, and even offer elements of Cognitive Behavioral Therapy to their users (Beetroot Team, 2024).
Additionally, artificial intelligence offers a kind of confidentiality that shields you from potential judgment. In the Indian context, this advantage is especially relevant. Conversations around topics such as sexuality, caste-based discrimination, and familial pressure can be hard to bring up, and an AI-powered tool may feel like the only option left. Another factor to consider is expense. Therapy can be costly, and while some therapists offer discounted rates, the quality delivered at those rates is never guaranteed. AI presents itself as not only a far more economical but also an effective alternative to therapy. Not to mention, the hassle of appointments and general access is never a burden to bear, and there is no social etiquette to maintain when the other party is artificial intelligence.
But that is not the focus of this article. I believe what warrants more scrutiny is how AI is falling short, and how in some cases it can raise more difficulties than it resolves. Much of what we learn about mental health from a therapy perspective seems to begin from the grassroots construct of a therapeutic alliance—the relationship between the client and the counsellor, which determines the effectiveness of treatment (Wakim, 2025). An obvious downside of AI is that there is limited scope for a genuine therapeutic alliance because, by virtue (or vice) of its nature, AI cannot form relationships beyond merely utilitarian ones.
A logical consequence of this glaring shortcoming is the absence of other critical ingredients of a regular therapeutic environment. Genuine empathy, understanding, and compassion are beyond AI’s reach, despite its ability to simulate emotion through language. In such a scenario, the frequently mentioned “human touch” simply goes missing. Mental health treatment happens at a deeply personal level, and no algorithm can truly replicate the emotional depth of a trained professional.
Unfortunately, that’s not where the challenge ends. One of the most touted advantages of using AI for mental health intervention is its ability to tailor treatment to the individual needs of the patient. This, however, doesn’t give us the full picture. AI models are far from equipped to deal with the complexities of mental disorders, simply because mental health exists on a spectrum; every disorder manifests uniquely across individuals. Major depressive disorder, for example, causes significant impairment in everyday functioning, but that impairment looks different from person to person. The nuance required to adequately interpret these symptoms, alongside factors such as environment and personal history, makes diagnosis a struggle for an algorithm. The result can be generalized diagnoses and a higher risk of false positives and false negatives. This is strong evidence that AI is not yet a default solution for mental well-being and requires consistent checks to verify its reliability (Thakkar et al., 2024).
Another shortcoming to overcome is the risk of cultural bias and the lack of cultural sensitivity in AI. These tools are built on data fed to them by humans, who carry their own stereotypes and biases. When such curated input is fed into these models, they no longer act as objective sources of information (Straw & Callison-Burch, 2020). Rather, they can reinforce prejudice through their limited information on certain cultures. AI has been widely criticized for catering primarily to Western populations, and training a model on data from a narrow demographic severely compromises the chance of diversifying treatment. If the goal is to create models that minimize existing discrimination in society, we have to be careful about the kind of information being fed to them, which involves questioning the very foundation they are built on (Straw & Callison-Burch, 2020). Fortunately, more recent models do appear to show greater cultural sensitivity.
As consumers, we tend to hold pre-existing beliefs about certain concepts. While this is a universal trait, it can interfere with research, and in the current context, that includes our interactions with AI. Despite its popularity, GPT-4 has been shown to amplify confirmation bias in its responses (Chen et al., 2025). This happens in part because we steer our prompts in the direction of what we already believe to be true. ChatGPT’s response then not only affirms our thinking but also provides a metaphorical stamp of approval by presenting supporting evidence, and in doing so, we limit the scope of our own research (Atos SE, 2025).
The purpose of this article is not to criticize but to outline the potential of the transformative force that is AI. The idea of revolutionizing mental health, among many other fields, brings with it a sense of wonder and excitement. But I believe it is equally crucial to ask “what could be?” rather than just “what is?” To harness the true potential of AI, we must combine the humanness of therapy with the expansive reach of AI. Such a combination would herald the dawn of therapy that is both accessible and efficient, helping us move towards a mentally healthy and happy society.
References
Atos SE. (2025, April 1). Is ChatGPT your biggest fan? Avoiding confirmation bias in AI interactions. Atos. https://atos.net/en/blog/is-chatgpt-your-biggest-fan-avoiding-confirmation-bias-biased-ai
Chen, Y., Kirshner, S. N., Ovchinnikov, A., Andiappan, M., & Jenkin, T. (2025). A manager and an AI walk into a bar: Does ChatGPT make biased decisions like we do? Manufacturing & Service Operations Management. https://doi.org/10.1287/msom.2023.0279
Singh, S. (2025, May 19). ChatGPT Statistics 2025 – DAU & MAU Data (Worldwide). DemandSage. https://www.demandsage.com/chatgpt-statistics/
Straw, I., & Callison-Burch, C. (2020). Artificial intelligence in mental health and the biases of language-based models. PLoS ONE, 15(12), e0240376. https://doi.org/10.1371/journal.pone.0240376
Beetroot Team. (2024, October 14). How AI transforms mental health care: Challenges and potential. Beetroot. https://beetroot.co/healthcare/ai-in-mental-health-care-solutions-opportunities-and-challenges-for-tech-companies/
Thakkar, A., Gupta, A., & De Sousa, A. (2024). Artificial intelligence in positive mental health: a narrative review. Frontiers in Digital Health, 6. https://doi.org/10.3389/fdgth.2024.1280235
Wakim, R. (2025, January 10). Therapeutic alliance: Definition, importance, how to build, and challenges. White Light Behavioral Health. https://whitelightbh.com/resources/therapy/therapeutic-alliance/
Yan, W., Ruan, Q., & Jiang, K. (2022). Challenges for artificial intelligence in recognizing mental disorders. Diagnostics, 13(1), 2. https://doi.org/10.3390/diagnostics13010002