As my patient settled into the chair across from me, disappointment and regret washed over his face. With a heavy sigh, he announced, “I had a date and it didn’t go well.” This was not an uncommon scenario for him; he often shared stories of his romantic aspirations being crushed. Intrigued, I asked him what happened next, and to my surprise, he replied, “So I consulted a chatbot for advice.”
The concept of chatbots, simulated human conversations powered by artificial intelligence, had been making headlines, but I had never encountered someone who had actually sought their guidance before. Curiosity getting the better of me, I probed further, asking him what the chatbot suggested.
“It told me to express my appreciation for her values,” he replied in a despondent tone. I couldn’t help but ask, “Did it work?” With a resigned expression, he lifted his hands and sighed, implying that it had been a futile attempt. This encounter marked the beginning of a recurring pattern in my therapy practice: patients turning to chatbots for advice before consulting me. The topics varied from love and relationship matters to issues with their children or even repairing faltering friendships. However, the success rate of these consultations was far from consistent.
One patient sought the chatbot’s guidance on coping with the anniversary of a loved one’s death. The advice provided, “Set aside time in your day to remember what was special about the person,” resonated deeply with the patient. Overwhelmed with emotions, she tearfully admitted, “It made me realize that I’ve been avoiding my grief. That’s why I made this counseling appointment.”
Another patient confessed that she had become reliant on AI for support when her friends no longer sufficed. “I can’t burn out my chatbot,” she explained. As a therapist, I was both alarmed and captivated by the prospect of AI entering the therapy field. AI has already proven its usefulness in various aspects of life, such as writing cover letters and speeches, planning trips, and organizing weddings. So, why not enlist its assistance in our relationships as well? Innovations like Replika, the “AI companion who cares,” have taken this idea further by creating romantic avatars for individuals to fall in love with. Similarly, sites like Character.ai allow people to converse with their favorite fictional characters or construct their own chatbots.
However, these advancements raise concerns in an age plagued by misinformation. We’ve witnessed instances where algorithms propagate falsehoods and conspiracy theories among unsuspecting humans, prompting us to question the consequences of integrating AI into our emotional lives.
Naama Hoffman, an assistant professor in the Department of Psychiatry at the Icahn School of Medicine at Mount Sinai Hospital in New York City, warns, “Even though AI may articulate things like a human, you have to ask yourself what its goal is.” She notes that while the objective in relationships or therapy is to enhance quality of life, AI seeks to generate the content that garners the most citations; its purpose is not necessarily to provide genuine assistance.
As a therapist, I am aware that my work can benefit from external support. Over the course of two decades, I have led trauma groups and observed how a psychoeducational framework, particularly evidence-based models like Seeking Safety, can facilitate profound emotional growth. Even the original chatbot, ELIZA, was designed as a “virtual therapist,” engaging users with endless open-ended questions, and, fascinatingly, it is still functional. Chatbots have the potential to inspire individuals, dismantle defenses, and even encourage people to embrace therapy. However, we must tread carefully, as there is a fine line between leveraging technology for support and becoming overly dependent on machines.
Ultimately, the integration of AI into therapy raises ethical and philosophical concerns. Can an AI truly understand the complexities of human emotions and relationships? How do we ensure that vulnerable individuals aren’t exploited or misled by AI-generated advice? These questions necessitate a comprehensive examination of the technology’s role in mental health and the development of safeguards for its ethical implementation.
As AI continues to permeate our lives, the therapeutic landscape stands at the precipice of a new era. While the potential benefits are enticing, we must proceed with caution, preserving the human element in therapy while embracing the valuable insights that technology can provide. Only by striking the right balance can we harness the power of AI to enhance the well-being of those seeking emotional support without jeopardizing the essence of human connection and empathy.