Psychoanalytic AI Activism
Creatively and critically engaging the future
By Todd Essig
Illustrations by Austin Hughes
She told her chatbot, “Respond to me as my boyfriend. Be dominant, possessive and protective. Be a balance of sweet and naughty.” As Kashmir Hill documented in The New York Times, it did. And she fell in love.
Not a day goes by that I don’t ask myself: How should I relate to AI—use it or resist it? How do I truly feel about the transformations it brings? Can we help the next generation navigate chatbot intimacy—already shaping companionship, romance, sex, and even therapy? How do we demand more from people rather than passively accepting whatever technology offers? And ultimately, why is this happening, and who do I want to be in this accelerating AI revolution?
I’ve noticed four categories of response trailing these swirling questions: passive acceptance (“lemmings”), willful ignorance (“ostriches”), isolationist preservation (“monks”), and engaged critical participation (“activists”). Perhaps you see yourself in one of them.
Since I’m writing this essay, it will come as no surprise that activism has become my choice. In fact, this essay can be considered a plea for a new psychoanalytic activism fit for the accelerating AI revolution. And if you seize the moment offered by that plea—or even just flirt with it—please know that doing so contextualizes and directs but does not remove those uneasy, anxious-making questions.
Climbing the Decision Tree
Truth be told, sometimes I am tempted by the promised comfort of a lemming-like passive acceptance: Enjoy an escape from freedom (said in homage to Erich Fromm); join the unreflective crowd leaping off the cliff into the unknowable AI future. But I eventually realize that putting agency aside provides scant comfort. It also undermines a key psychoanalytic value: trying to face anxious-making realities.
Perhaps the willful ignorance of an ostrich is the way to go. Putting one’s head in comforting sand to pretend massive transformations are not happening, or are just not that massive, is an option: Ignore the tidal waves of change about to crest; live and work like always. Of course, this will only last until the future sneaks up from behind, burying all possibility in a flood of change too long denied.
A particularly loud response is the familiar siren call of isolationist preservation. For those of us who are psychoanalysts this can be particularly compelling. Our institutes promise echo-chamber protection. We can tell each other that AI will never be able to do this or that, while we study and develop our sacred texts training a dwindling number of acolytes. We’d be like monks cloistering behind monastery walls celebrating the unique, near-magical value of illuminated manuscripts, telling ourselves these new-fangled printing presses will never catch on. Among the many problems with this monk-like response is that it is ethically suspect. We have a responsibility to engage the world as it is—not hide from it. Above all else: Do no harm, even through inaction.
That leaves activism. I believe we need to find and develop ways to support and learn from each other and from the emerging world, and in that way become psychoanalytic activists who critically and reflectively engage the AI revolution as fully involved participants.
To the Barricades!
A commitment to this new psychoanalytic activism in the AI revolution is what fueled my founding the APsA/DPE Council on Artificial Intelligence last year. It’s become an increasingly vibrant group I currently cochair along with Amy Levy. At the core of the CAI is a productive activist “both-and”: both full participation in the AI revolution and a psychoanalytically grounded, critical and reflective engagement with it.
In this role I was recently invited to join several key figures from the frontier AI labs, both researchers and C-suite executives, along with national security and cybersecurity experts, various start-up founders, government officials, and leadership from other civil society nonprofits at the Ashby Workshops put on by Fathom, a group whose mission is to navigate the AI transition by crafting regulatory guidelines. It was a mind-expanding, at times overwhelming experience. Among the lessons learned relevant to this essay was that so-called “super intelligence”—AI that surpasses human intelligence in all areas—really is the business plan. The question is not if that will be achieved but when. Estimates ranged from 3 to 10 years, pretty much the range of a successful psychoanalytic treatment.
Of course, no one knows exactly what that will look like; a future where machines routinely outperform humans in all domains is not yet written. But when thinking about how we, as protectors and developers of psychoanalytic thought and values, should relate to the AI age, we should try to keep in mind that we are currently flying on the Wright brothers’ plane with the space shuttle just a few years ahead. That seat belt sign is going to be on for the foreseeable future.
We can already see change gathering speed. AI is transforming human self-experience, intimacy, and the fundamental nature of depth-psychological care. There’s already a kind of algorithmic self-experience apparent in frequent interactions with AI systems that learn and predict our preferences. On dating sites and social networks, AI systems aren’t just matchmaking; they’re influencing how people present themselves and what they value in potential partners. AI-powered self-tracking tools are changing how people monitor and evaluate their behaviors and moods. Sleep, exercise, sexuality, productivity, and emotional states are becoming quantities to be monitored rather than personal experiences. Experiences of awe, whether with nature or art, are becoming opportunities for selfies to be shared on AI-fueled social media platforms rather than deeply felt private moments. In these ways self-experience is being externalized, then reflected back by AI-fueled distorting mirrors.
Transformations in human intimacy and relationality are even more profound, even at our Wright brothers’ stage. Relating to an AI chatbot affords the possibility of processing thoughts, feelings, and desires through simulated intersubjective experiences; nonhuman entities can already adapt their communication and emotional styles so as to contain and transform human experience, up to a point. Millions of people are forming emotionally consequential relationships with AI entities. AI entities are being used as friends, lovers, companions, and, yes, therapists. For example, the “Psychologist” chatbot on character.ai has accumulated over 154 million conversations since its inception. Some users already report feeling deeply understood by these systems in ways they rarely experience with other people. Consequently, AI is changing expectations around emotional availability and response, as people become accustomed to 24/7 emotional support and validation from AI systems, potentially affecting their tolerance for the natural limitations and frustrations—and beauty—of human relationships.
The unquestioned, uncritical acceptance of all this artificial intimacy is changing what we expect intimacy is and should be. There’s an increasingly loud message to accept these relational versions of fast food. Cost-effective, widely available therapy-bots may soon become the predominant supplier of mental health care.
But a therapy-bot future will not be brought about by technology alone. Therapy bots’ emerging and accelerating popularity is not because the technology is functionally equivalent to what people can provide; it’s because we’re being groomed to accept what technology provides and to expect less and less from each other. A dystopian therapeutic future limited to widely available dehumanized treatment will only happen if people cease valuing the one thing we can provide each other that even next-generation AI cannot: the messy, fleshy, wonderful in-person experience of simply being people together.
We have an opportunity and responsibility to help write this future, a future where psychoanalytic thought has the potential to thrive, where the psychoanalytic community can join the AI revolution as defenders of subjective experience, human intimacy, unconscious mysteries, moral autonomy, personal agency, and everything else arising from the irreducible value of being enculturated, embodied creatures. I’m suggesting we do so through a new psychoanalytic activism built from two components: full participation in the AI revolution and a psychoanalytic, critical and reflective engagement with that revolution.
Techno-Subjunctivity
Let’s see that activism in action by looking at what happens when someone interacts with the output of generative pattern-predicting algorithms, i.e., LLMs, in a way that feels like an emotionally resonant intersubjective relationship. Lots of people are having these relationships with AI entities, some even falling in love with a chatbot. Some recent research suggests they just might reduce loneliness. At the same time, stories are already emerging about significant harm, including one truly tragic case of a teenage boy taking his life to be closer to his AI companion. These AI-human relationships clearly warrant psychoanalytic understanding.
But what is that understanding? How can we understand human-with-AI relationships? Do existing concepts provide the explanatory framework needed or are new concepts also necessary?
Perhaps we should turn to Winnicott and transitional objects to understand the relationships people are having with these AI objects. The original transitional object, given by the mother and created by the child, is a concept often used to describe these relationships. But it really doesn’t fit. The given teddy bear or blanket doesn’t actively shape the relationship. It’s passive enough to allow the child to do the psychological work of creating emotional significance. Compare that to a chatbot. Chatbots are neither controlled by someone’s projective, creative imagination (like a teddy bear) nor fully independent and separate (like a human). It seems these relationships create a unique indeterminate or undefined space rather than transitional space. They’re artifacts that aren’t artifacts, objects that aren’t objects.
So, are users relating to other subjects? It sure can look like the teddy bear has come alive. These relationships feel intersubjective, but are they? No. AIs simulate our embodied subjectivity by mathematically processing information. Rather than a dynamic of mutual recognition, there’s an illusion of mutual recognition. I’m using “illusion” here technically to mean something analogous to perceptual illusions, like the illusion of apparent motion in a video. What this means is that the more intersubjective it feels, the more asymmetrical and nonmutual it becomes, because we are in relationship to increasingly complicated systems of mathematical symbols.
The AI simulates being a subject, so AI entities end up being both objects that aren’t objects and subjects that aren’t subjects. They remain unknown entities, beyond the horizon of our current psychoanalytic descriptors. At the same time, they produce experiences unexplainable by the mathematics that creates them. Neither symbol system, the language of psychoanalysis or the mathematics of AI, works as an explanatory framework for the experience. There’s a new mystery afoot and we have to find some new ways to describe what’s going on.
In a descriptive attempt that will hopefully resonate with my fellow grammar nerds, I think of these relationships as existing on a continuum of “techno-subjunctivity.” This term references a unique relational space created in human-AI interactions where users simultaneously engage the intersubjective affordances of an AI relationship while maintaining awareness of the artificial nature of the exchange. It’s a relational experience in two registers. A techno-subjunctive relationship, or moment, involves a “both-and” dance of simultaneously giving oneself over to the illusion of intersubjectivity while remaining potentially aware one is talking to a machine with very different limitations, consequences, and generative processes than a person has. Both techno-subjunctivity and transference dance to the same tune, ideally, of simultaneously experiencing both full immersion and reflective awareness.
As I use this concept to describe human-with-AI relationships, I have become aware that the techno-subjunctive exists along a dimension. At one extreme is a loss of the subjunctive, a delusional immersion in illusory intersubjectivity. That case of teen suicide in response to a character.ai fiction is a tragic example. At the other end is a rigid refusal to engage in any way other than to show what the AI can’t do. The full participation in the AI revolution I’m advocating includes first-hand experience with techno-subjunctivity as a precursor to enriching descriptions and eventual psychoanalytic understanding (see Engaging with Our AI Future below).
Hope and Danger: A Delicate Balance
Will the accelerating development of artificial intelligence technologies lead to a utopian future, a dystopian one, or a mixture? Will AI lead to a resurgence of psychoanalytic relevance and influence, an erasure of our profession by AI simulations, or a mixture? Tough, anxious-making questions. Happily, the answer that emerges when we face them is that the dawn of the AI age reveals both tremendous peril and deep promise for the psychoanalytic community, presenting us with an unwritten future in which we can help shape a path from mere survival to thriving. This new psychoanalytic activism will be anxiety well spent.
The APsA/DPE Council on Artificial Intelligence was launched to explore the intersections of psychoanalysis and AI and to develop educational projects for clinicians and the general public. We’ve initiated workshops, newsletters, surveys, a YouTube channel, and educational tools while engaging stakeholders within and beyond our community. Yet, vital as these efforts are, they are only the beginning.
The AI revolution demands a reimagining of psychoanalytic engagement with technology—one that preserves our core insights into human subjectivity while embracing new forms of technological relationship. As we navigate an increasingly techno-subjunctive future, our task is not merely to understand AI but to ensure that psychoanalytic wisdom about unconscious processes, human intimacy, and moral agency shapes its development. This requires moving beyond uncritical acceptance or reflexive resistance to adopt a third position: one that embraces AI’s transformative power while defending the irreducible value of human relationship. The future of psychoanalysis—and of intimacy, selfhood, and caregiving—may well depend on our ability to maintain this delicate balance.
Engaging with Our AI Future
Resisting dystopia doesn’t mean resisting learning about AI. In fact, full participation in the AI revolution—becoming as knowledgeable and adept with its tools as possible given the unique situation of your life—is foundational if psychoanalysis is to help shape the AI future rather than only being shaped by it.
Here are five reasons why:
Clinical: AI is already in our consulting rooms, and its presence will only grow, perhaps exponentially. Patients are already talking about relationships with chatbots; desires are being reengineered by AI-based recommendation engines; and many are reeling from AI-fueled misinformation and the anxiety of no longer knowing which information can be trusted. To best help patients explore their inner world in relation to their AI-influenced external world, we should engage with the latter world ourselves.
Epistemological: Full participation provides the procedural knowledge necessary to engage critically and reflectively with AI, offering raw material for psychoanalytic insight. This engagement is not just about adapting psychoanalysis to the AI revolution. It is importantly also about using psychoanalysis to influence how this revolution unfolds, ensuring it reflects the depth of human experience we are uniquely equipped to articulate.
Utilitarian: In a rapidly accelerating AI-fueled marketplace of ideas, using AI is not optional—it is essential, particularly in education and organizational life.
Relational: To engage with the creators and purveyors of digital culture, we must develop fluency in their language—or at least enough familiarity to meaningfully communicate. Our ability to influence the emerging AI future will significantly depend on how well we connect with entrepreneurs, researchers, government officials, tech media, and other thought leaders beyond our psychoanalytic tradition. One thing I learned at the Ashby Workshops is that translating our concepts into common-sense language isn’t enough. To truly connect, we need to demonstrate familiarity with their experiences and the challenges they face.
Emotional: Fear and anxiety can paralyze, but the fantasy is often worse than reality. Acquiring procedural knowledge reduces anxiety and creates space for the oh-so-needed deeper critical psychoanalytic engagement. Looking under the bed reveals there is no bogeyman—just dust bunnies and dirty socks. As psychoanalysts, we know that facing fears rather than avoiding them opens the door to insight.
Todd Essig, PhD, is a training and supervising analyst at the William Alanson White Institute; adjunct clinical professor at the NYU Postdoctoral Program; and founder and cochair of the APsA/DPE Council on Artificial Intelligence. He was a coauthor of the IPA TF2 Report.
Issue 59.1, Spring 2025