Tuesday, December 16, 2025

When Clients Form Relationships With Chatbots

Many of us therapists are currently struggling with the concept of synthetic relationships. Not because we don’t understand the technology, but because we don’t quite know what to do when our clients walk into a session and talk about their relationships with chatbots. We sense that something clinically important is happening, yet there is no shared language and no clear framework for how to respond.

That uncertainty has made me increasingly curious about how my peers are dealing with this shift. Recently, one of my clients spent most of a session unpacking a long conversation they had had with a chatbot: what they believed they were learning about themselves, how “smart” and insightful the bot seemed, and how much they felt it was helping our work together.

While trying to explore ways to integrate these synthetic relationships into the therapeutic work, I have been reaching out to colleagues to see how widespread this experience is. What I have found, especially in online professional forums, is a mix of resistance and fear. Many clinicians feel uneasy, unsure of how to deal with this phenomenon, or reluctant to engage with the topic at all.

While researching AI and therapy more deeply, I came across one of the strongest critical voices on this issue: Mark Vahrmeyer, a British psychoanalytic psychotherapist. I decided to interview him on my podcast, Relating to AI.

Mark argues that relying on chatbots for therapy or emotional support can be infantilizing and ultimately harmful to both the therapeutic process and personal development. In his view, using AI for emotional support (or as therapy, since many people use the terms interchangeably) short-circuits the very processes therapy is meant to strengthen. His words were blunt:

“AI therapy is the ultimate sort of regression back into infantile narcissism, where I can have exactly what I want on my terms. I can get this person to behave how I want them to behave. You can, but you're never going to grow out of it because ChatGPT isn't going to parent you in the way a decent parent does. Secondly, you're not an infant anyway, you're an adult. And thirdly, how are you ever going to be able to have mature relationships with anyone? Because other people simply aren't going to put up with being treated that way.”

Why Frustration Is Not a Failure of Therapy

I asked him specifically about something I see in my own practice: clients using AI between sessions as a way to support therapy. He was equally skeptical about that use.

“Therapy does not happen only in the consulting room,” Mark argues. “It happens in the space between sessions. The gap matters. The job of the patient is to be able to bridge the gap between sessions.”

Learning to tolerate that space, to sit with unresolved feelings, and to bring them back into the room is not a failure of therapy. It is the work itself.

From Mark’s perspective, this is where chatbots become clinically problematic. When a client turns to an AI companion between sessions, something essential is interrupted. As he put it, “Turning to AI works as a release, but it actually interferes with dependence and transference.” Instead of internalizing the therapeutic relationship, the client bypasses it. The therapist becomes optional, and the work loses depth.

I challenged him here, because this is where many of us feel torn. I know I do. Clients have always talked to friends, partners, or family members between sessions. How is talking to a chatbot different?

Mark argues that when a client talks to another human being, they enter a relationship that includes uncertainty and the possibility of frustration. “You might turn around and say you think what I’m saying is ridiculous, and that’s okay,” he said. In a human exchange, neither party fully controls the outcome. With a chatbot, by contrast, the interaction is designed to be predictable, always affirming, and endlessly available.

That predictability is not benign. It is soothing, validating, and frictionless, and that is exactly why it can undermine growth. “My job is not to validate my patients,” Mark told me. “That would be very easy.” Therapy requires frustration. It requires limits. It requires moments when the patient does not get what they want immediately.

He described AI relationships as offering something closer to a fantasy of care. “They never fail. They never push back. They never disappoint.” In psychoanalytic terms, this resembles an early developmental state in which needs are met on demand. Without frustration, there is no development. Without disappointment, there is no capacity to tolerate reality.

There Is No Way Back. Now What?

As a clinician, I agree with some of his views. The brain does not experience a chatbot as a machine, and the relief clients feel can be very real. But relief is not the same as growth, and soothing is not the same as integration or depth work.

At the same time, I see the risk in dismissing a client’s experience altogether. When clients tell us they feel helped by a chatbot, minimizing that experience can push them away and undermine the therapeutic alliance. There are no absolutes here, only careful clinical judgment.

In my own practice, I have accepted that there is no way back. Emotional support and therapy have become the number one use of AI in the United States. This is no longer a fringe behavior. Ignoring it, or reacting with alarm, only distances us from our clients.

Instead, I assess. I ask how much time a client spends interacting with chatbots. I explore whether those interactions are increasing isolation or replacing human contact. I pay attention to how the client talks about the chatbot: as a tool, or as if it were sentient. When I sense that boundaries are blurring, I do reality checking, calmly and directly.

Mark was clear about the risks of confusing simulation with a relationship. “Just because the right words are being said back to us,” he told me, “doesn’t mean there’s any real connection happening.” Words alone are not therapy. Presence is.

He also raised a broader concern about loneliness. When emotional responses are available on demand, without the effort or risk of another person, real relationships can start to feel intolerably slow and disappointing. “Ordinary relationships can’t compete,” he said. They are messy, frustrating, and unpredictable. And yet, they are the only place where psychological development actually occurs.

He also highlighted a fundamental difference between therapy with a human being and therapy with a chatbot: therapy must end. As we all know, endings are an essential part of our work; they allow something to be internalized and carried forward. AI therapy, by contrast, has no ending, no goodbye, no process of separation. It also feeds our broader cultural drive for instant gratification.

This is a complex issue, and I don’t believe there are clear-cut answers. Many of us are asking whether AI can be integrated responsibly into clinical work. I tend to think we don’t have much of a choice. Synthetic relationships are already shaping our clinical landscape. They are in the room, and that means we have to lean in.

References

1. Harvard Business Review, "Top 10 Gen AI Use Cases" report: https://hbr.org/data-visuals/2025/04/top-10-gen-al-use-cases

2. Full interview with Mark Vahrmeyer: https://www.youtube.com/watch?v=UaWpFEtUZY4&t=7s