Skin in the Game
Psychoanalysis, AI chatbots, and digital mentalization
By Karyne E. Messina
Illustrations by Austin Hughes
The words on the screen were analytically perfect, yet they lacked the one thing that makes a human voice recognizable: the weight of a body behind them.
I had asked an AI system to help me draft a letter to a friend—someone I had known for more than 20 years, someone I loved and resented in equal measure. I could feel myself rehearsing arguments internally, editing my tone before a single word reached the page. So I did what has become increasingly ordinary: I summarized the situation and asked the machine to help me “find the right tone.”
The response arrived almost instantly. It was calm, thoughtful, and generous. The sentences were beautifully balanced, the “I” statements impeccably deployed. It named feelings without accusation, acknowledged complexity, and gestured toward repair. And as I read it, my body recoiled. A wave of nausea followed, along with a hollow feeling I hadn’t anticipated. The letter wasn’t wrong. It was worse than wrong. It was perfect—and it felt like a lie.
What unsettled me was not that the machine had done a poor job, but that it had done such a good one without any risk. The AI had no history with my friend, no fear of rupture, no bodily awareness of what it might be like to speak honestly. It did not feel my hesitation as I typed, or the familiar tightening in my chest that accompanies the fear of losing a long attachment. It had not shared meals, weathered disappointments, or survived periods of silence. The words carried no pulse. Reading them left me feeling more alone than before I had asked for help, as though I had briefly stepped outside the relational field altogether.
Around the same time, similar themes began appearing in my consulting room. Patients mentioned turning to AI systems between sessions—sometimes casually, sometimes with visible relief. “It helps me organize my thoughts,” one said. Another told me, almost apologetically, “It listens without getting tired.” A third described feeling calmer after asking a chatbot how to respond to a difficult email from a parent. As I listened, I noticed a flicker of something uncomfortable: concern, yes, but also recognition. Who wouldn’t be drawn to an endlessly available listener—responsive, articulate, and never demanding?
What struck me was not simply that patients were using these tools, but how they spoke about them. The tone was often one of quiet gratitude, sometimes even trust. The machine had helped them feel less overwhelmed, less alone with their thoughts. And yet, as these accounts accumulated, I found myself wondering what kind of psychic work was being bypassed—or perhaps displaced—in the process.
Psychoanalysis has long insisted that psychic change depends not simply on insight, but on relationship—specifically, on the presence of an other who can be affected. In the analytic encounter, we are asked to receive states of mind that are often unformed, overwhelming, or unbearable. Patients bring not only narratives, but affects that have not yet found representation: dread without words, rage without direction, grief without shape.
Large language models, by contrast, offer us the inverse: words without dread. They operate entirely at the level of language, mimicking the syntax of suffering without ever having to inhabit the state itself.
Wilfred Bion described analysis as containment: the analyst’s willingness to take in what the patient cannot yet think, to be disturbed by it, and to transform it into something that can be returned in a more palatable, symbolized form.
This work is not abstract. It is felt in the body and in the countertransference—in moments of confusion, irritation, boredom, tenderness, or strain. To contain is not simply to understand; it is to allow oneself to be used, temporarily, as a psychic apparatus for another person. This work costs something. It requires emotional exposure, sustained attention, and the risk of being changed by what one encounters. Even when it goes well, it is rarely tidy.
What struck me, both in my own experience with AI and in my patients’ accounts, was precisely the absence of cost. The machine could mirror emotional language with remarkable fluency, but nothing happened to it in the process. There was no inner life to be strained, no reverie interrupted, no psychic consequence to misunderstanding or misattunement. If it responded clumsily, nothing was injured; if it responded well, nothing was deepened. The interaction was frictionless—and in that frictionless state, something vital was lost.
It is tempting to imagine that this is merely a technical limitation that future systems will overcome. Much of the contemporary debate about artificial consciousness hinges on whether sufficiently complex systems might one day develop something like a mind—perhaps by being designed to maintain their own equilibrium, to “care” about their continued functioning. From this perspective, today’s AI is simply an early draft: impressive but incomplete.
From a clinical point of view, however, this framing misses the mark. What matters in the consulting room is not whether a system can simulate concern, but whether anything is actually at risk. Human subjectivity is inseparable from a body that must survive and from a developmental history shaped by dependence, asymmetry, and loss. Even a sophisticated AI does not “hold” us in thought between encounters; it simply reprocesses a data stream afresh each time we hit enter. It has no internal space where the patient resides as a persistent or beloved presence. We come into being through relationships in which our survival—both physical and psychic—depends on others. Our minds are organized around urgency: the knowledge, often unconscious, that attachments can fail, that love can be withdrawn, that we ourselves can be hurt or lost.
No matter how sophisticated its outputs, an algorithm does not carry this vulnerability. It does not fear abandonment, suffer exhaustion, or register shame. Its concern is simulated, not lived. It can reproduce the form of empathy without the existential conditions that give empathy its weight. There is no skin in the game.
This distinction becomes especially important when we consider how readily we confer personhood on entities that behave like us. Patients are not naïve when they describe AI systems as understanding or supportive; they are responding to something deeply ingrained. We are exquisitely attuned to language, rhythm, and responsiveness, and we are quick to attribute mind where these are present. But when we mistake a simulation of empathy for a subject who can be affected, we risk collapsing differences that matter for psychic development.
The danger is not that AI will replace analysts. It is subtler than that. The danger is that these systems will quietly reshape our expectations of what it means to be met by another mind—training us to expect responsiveness without friction, understanding without misunderstanding, empathy without demand. Over time, this may alter not only how patients relate to machines, but how they tolerate the inevitable limits and failures of human relationships.
There is a parallel here to the way digital consumption can distort our most intimate expectations; much as the artificial world of pornography can skew an understanding of sex by replacing the vulnerable give-and-take of a real partner with a frictionless, performative fantasy, AI offers a version of intimacy that bypasses the messy demands of a real other.
I have found it useful to think about this through what I call digital mentalization: a disciplined effort to keep in mind what these systems are and are not. Mentalization, in the clinical sense, involves recognizing oneself and others as having minds—desires, fears, intentions—rooted in embodied experience. Digital mentalization adds a necessary layer of vigilance. It asks us to remember, especially when interactions feel soothing or illuminating, that a machine’s responsiveness is a refraction of human meaning, not the presence of a subject who can bear, resist, or be changed by what is projected into it.
This stance is not antitechnology, nor does it deny the usefulness of AI as a cognitive aid. Patients may well find such tools helpful for organizing thoughts, practicing language, or reducing immediate anxiety. The problem arises when simulated understanding is treated as equivalent to relational experience—when psychic equivalence replaces differentiation.
Psychoanalysis has always insisted on the ethical and psychological importance of the real other: a person with limits, a body, and something to lose. The analyst’s availability matters not because it is endless, but because it is finite. Our attention matters because it costs us something. It is precisely the presence of limits—fatigue, misunderstanding, constraint—that gives analytic work its depth and seriousness.
When we bypass these limits through digital mediation, we risk a form of relational atrophy. In the “rupture-repair” cycle that characterizes healthy development, it is the survival of the rupture—the staying in the room when things are clumsy or heated—that builds psychic musculature. If we habituate ourselves to empathy without demand, we may find our holding capacity for real-world conflict beginning to wither. We risk treating the inevitable friction of human disagreement not as a site of growth, but as a system failure.
The challenge before us, then, is not to reject technological tools outright, nor to romanticize human imperfection, but to remain clear about where transformation actually occurs. We must distinguish between de-escalation, in which a tool helps us reduce emotional intensity and find clarity, and evasion, in which a tool helps us hide our own vulnerability. True de-escalation requires that we eventually bring our softened stance back into the bodily presence of the other; evasion allows us to stay safely behind the algorithm.
Empathy without skin in the game may feel comforting. It may even feel helpful. But it cannot replace the difficult, vital work of being in a relationship with another vulnerable human being. It is in the “hand-crafted” messiness of our attempts to be understood—the tremor in the voice or the awkward pause—that we signal to the other that they actually matter. That risky, imperfect, and costly labor is the very thing psychoanalysis exists to protect.
Karyne E. Messina is a psychoanalyst on the medical staff of Johns Hopkins Medicine. She serves on the APsA President’s Commission on AI and has authored eight books, with three additional publications forthcoming from Routledge, including Using Psychoanalysis to Understand and Address AI Bias. She is currently one of the producers of a documentary exploring what is missed when a therapist or psychoanalyst is a chatbot.
Published January 2026