Computation is Not Mentation

Why embodiment and lived experience matter

By Alexander Stein

Illustrations by Austin Hughes

To its ardent proponents, artificial intelligence’s capabilities are limitless. Yann LeCun, Meta’s chief AI scientist, recently advised the UN Security Council that “by amplifying human intelligence, AI may bring not just a new industrial revolution, but a new renaissance, a new period of enlightenment for humanity.” And Sebastian Siemiatkowski, the chief executive and a cofounder of Swedish tech firm Klarna, told Bloomberg News, “I am of the opinion that AI can already do all of the jobs that we, as humans, do.”

Many AI technologies certainly demonstrate a range of impressive outputs, delivering advanced problem-solving and automated services with levels of consistency, accuracy, and velocity beyond human ability. The enthusiasm for AI is not without merit.

My concern is that the marketplace is deluged with applications and services purporting to disrupt professions once considered untouchable by machine replacement, in which intrinsically human traits like empathy, social acumen, reasoning, and adaptive learning are paramount. Barely a day passes without the launch of a start-up touting a service that can perform creative and decision-making tasks in a fully human way.  

Of course, not every tech product is intended to functionally replace human decision-making or provide a service we’re supposed to experience as essentially human-like. Vacuum bots are not therapy bots. But AI’s anthropomorphized marketing propaganda masks and misrepresents a slew of nontrivial capability deficiencies. 

What’s new is old. AI is far from the first invention to overstate what it can do. Today’s rapturous tech evangelists sound freakishly like Ron Popeil’s iconic 1960s Veg-O-Matic infomercials breathlessly pitching “It slices! It dices! But wait, there's more!”  

In his prescient 1992 book, Technopoly: The Surrender of Culture to Technology, Neil Postman wrote that

We are currently surrounded by throngs of zealous ... one-eyed prophets who see only what new technologies can do and are incapable of imagining what they will undo. We might call such people Technophiles. They gaze on technology as a lover does on his beloved, seeing it as without blemish and entertaining no apprehension for the future.

To Postman’s warning of what might be undone, I would add the many risks and challenges in prematurely integrating unsupervised automated technologies into the social fabric before thoroughly safeguarding against unintended consequences. 

Chief among these is the pursuit of optimizing human thought and driving efficiency by rendering decision-making as technical calculations—the “mechanization of the mind” as Jean-Pierre Dupuy, an engineer, philosopher, and founder of the Center for Research in Applied Epistemology of the Ecole Polytechnique in Paris, described 25 years ago. In certain areas of human life, however, the pursuit of instant, effortless results through technological wizardry is both wrong and wrongheaded. 

As the AI hype frenzy swelled, exponentially propelled by advances in large language model (LLM) systems’ astonishing capabilities to—so it seems—think, understand, and fluently converse in natural language, I became excited by the prospect that, since these technologies are intended to emulate human thought and decision-making, there would be a natural imperative for technologists and computer scientists to understand the mind. Psychoanalytic knowledge and expertise, peerless in understanding the complexity of the human mind, could only be indispensable to designing, developing, and deploying better computational systems.

I was mistaken in supposing technologists would request expert input in an area they weren’t considering. But not wrong in understanding how relevant psychoanalytic models of mental architecture, affect, attachment, childhood development, and interiority and subjective history are to engineering computational systems.  

In a recent article, I noted that AI is “a human enterprise devised, driven, and shaped not only by experimentation and innovation, but by our interests, hopes, fears, foibles, and fantasies.” Whatever we may think, want, dread, or project onto it, artificial intelligence is not human intelligence.   

What AI Can and Can’t Do

The primary aim of Artificial Intelligence, as Claude (an AI assistant built by Anthropic) explained in a response to my prompt query, is “to create systems or machines that can perform tasks that typically require human intelligence.” The ultimate goal is Artificial General Intelligence (AGI), defined by Peter Voss, a leading AI innovator, as “computer systems that can learn incrementally, autonomously, to reliably perform any cognitive task that a human can, including the discovery of new knowledge and solving novel problems.”

Those aims are reasonably clear in light of the countless machine applications and services now deeply woven into the social fabric.  

But profound epistemological, ethical, and technical questions remain unanswered: What is intelligence? How do we understand knowledge? How do we make meaning? Are data- and compute-intense approaches using machine learning (ML), deep learning (DL), reinforcement learning (RL), next-token predictions (NTP), and large language models (LLM) the pathway to developing true machine intelligence? Can formal logic, rule systems, neural nets, cognitive architectures, and algorithms generate thought and comprehension capacities that equal or surpass our own? Will humans-in-the-loop close the gaps?

I think not. Human intelligence, thinking, and knowing develop differently. They derive from unquantifiable intangibles constituted by the vast miasma of indistinct, often unremembered and unrecallable experiences; the aggregated multitude of micromoments in which life, particularly but not only early life, impacts and influences each of us; the unique amalgam of being alive in our bodies in a time and place, cognizant of our inevitable mortality, and in relationship with others as they are with us.

How we know what we know often eludes even our own comprehension and awareness. And how we process information, draw inferences, mentalize, fantasize, and symbolize ourselves and the world around us are only notionally and partially functions of language and cognition. How we function in the world, the intelligence we draw on, is more than that. It is the iterative byproduct of our early development and the accumulated influences of the nature and quality of primary relationships, environments, and experiences. Bypassing essential processes of maturation and sense-making as if they are dispensable cognitive noise elides crucial components of thought formation and learned capacities for reasoning and contextual understanding.

This matters. Theories of mind and foundational notions of intelligence—how we develop, process and respond to stimuli and information, and innovate, invent, and creatively solve problems—are astronomically more complex and nuanced both in process and outcomes than the models underlying frontier Generative AI (GenAI) account for or than LLMs are capable of reproducing. 

LLMs excel at statistical pattern matching and next-token prediction: using probabilities drawn from vast training sets to calculate which word is likeliest to come next in a sequence. These are forms of thought-like mimicry, not actual thinking. We are beguiled by misleading marketing narratives into suspending disbelief and imputing prodigious human-ish capabilities that do not exist. We mistake engineering achievements for independent thought and linguistic agency.
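
To make the mechanism concrete, here is a deliberately toy sketch in Python. It illustrates only the statistical principle, not how any production LLM is actually built: a tiny bigram model that "predicts" the next word by counting which word most often followed the current one in its miniature training text.

```python
# Toy illustration of next-token prediction from counted probabilities.
# Real LLMs use neural networks trained on vast corpora, but the core move
# is the same: pick the statistically likely continuation, with no grasp
# of what the words mean.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation of `word` most frequently seen in the corpus."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> "cat": the commonest pairing, not a thought
```

Scaled up by many orders of magnitude and implemented with neural networks rather than raw counts, next-token prediction remains a frequency-driven guess, however fluent, rather than an act of understanding.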

Not everyone agrees. AI boosters would contend these are the criticisms of a technophobe. Zealous supporters will claim that the deficiencies are technical problems that will eventually be solved: the inability to generalize and abstract, which can give rise to unreliable problem-solving; the propensity of LLMs to produce so-called “hallucinations” (tech-speak for plausible-sounding AI-generated mistakes, falsehoods, or absurdities); and the failure of systems to autonomously course-correct in situations outside those established in their training data.

But what these computational systems fundamentally lack, and what the technologists developing and commercially deploying them minimize, overlook, or dismiss, as Melanie Mitchell points out, is “rich abstracted internal models of the physical and social worlds.” Mitchell’s suggestion is that, outside of closed-world scenarios (where anything not explicitly validated as true is considered false), computational systems’ abilities collapse. By contrast, we exist in an open world which is ever-changing and incomplete, and where the absence of information does not imply falsity but prompts inquiry and investigative exploration—the impulse to learn more. Hallmarks of human understanding, already present in early childhood, involve our ability to formulate and test immensely complex multidimensional models of the world—primarily composed of our emotionally valenced relationships with people, not just objects and facts—to adaptively note and respond to dynamic microshifts, and to update our mental models accordingly.

For children, the drive to learn is an adventure of discovery. Knowledge acquisition and sense-making come through observation, unrestricted experimentation propelled by boundless curiosity, and the wonder and delight of play. Children learn by inventing and practicing, entering into imagined places populated in their minds by people or mythical creatures and where, as in dreams, the constraints of reality don’t matter. Over time, we acquire a universe of relationships with people whose perspectives, knowledge, values, and personalities we hold in our minds. We learn in and through social connections and experiences.

As real-world examples, imagine an AI system in an elementary school playground. What’s to be made of the whirlwind of shouting, running, and jumping kids? The seemingly random groupings and games? Or at a theater where actors are portraying people in a scene depicting a character’s memory, imagination, or inner musing?

To grasp the magnitude of the chasm separating attempts to engineer machines that might make sense of such things from our own capacity to do so, we need to understand how that capacity develops.

Embodiment, Childhood, and Lived Experience

Embodiment is foundational to thinking.

There are two major fallacies in technology regarding embodiment. One is that embodiment is purely physical and refers solely to an object that can move in space. On this view, psychological, psychosocial, cultural, and experiential considerations are irrelevant or peripheral to solving technical problems relative to product development, design, and use. The primary focus in mechanized embodiment is on user interface—the extent to which a device is perceived as safe and relatable to people interacting with it. 

Ameca, a production model humanoid robot, is probably the closest at this point to effectively mimicking a humanlike artificial body (AB). Its marketing material touts “smooth, lifelike motion and advanced facial expression capabilities” that foster “instant rapport” in robot-human interactions. However, it cannot yet walk and is very far from replicating sci-fi movie androids that are indistinguishable from people. 

The second fallacy, a direct extension of the first, is that artificially recreating aspects of physicality and physical abilities qualifies as equivalent to being embodied. For instance, so-called embodied AI typically combines sensorimotor equipment like cameras, microphones, or other engineered proxies for human senses with machine learning or active inference (methods for optimizing action through predictions based on sensory data evaluated in relation to a generative model of potential outcomes) to respond to real-world data and continuously update a “world model” of dynamic environments. Autonomous drones, self-driving cars, factory automation, and robotic vacuum cleaners and lawn mowers are everyday examples of a simple form of embodied AI that can manipulate objects or navigate in the physical world. 
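
A hedged sketch may help fix the idea of that sense-update-act loop. Everything in the Python fragment below is a hypothetical stand-in rather than any vendor's actual system: the bump sensor is simulated with a random number, and the "world model" is a bare set of grid cells believed to be blocked. The point is only the shape of the cycle the paragraph above describes: plan against an internal model, perceive, update the model, act.

```python
# Simplified illustration of an embodied-AI control loop: plan against an
# internal "world model," sense the environment, update the model, act.
# The sensor and map are hypothetical stand-ins, not a real robot's API.

import random

class WorldModel:
    """Toy occupancy map: grid cells the robot currently believes are blocked."""
    def __init__(self):
        self.blocked = set()

class Robot:
    def __init__(self):
        self.model = WorldModel()
        self.position = (0, 0)

    def sense_obstacle(self, target):
        # Stand-in for a bump or lidar reading about the target cell; here an
        # obstacle simply turns up at random 20 percent of the time.
        return random.random() < 0.2

    def step(self):
        x, y = self.position
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        # Plan: prefer cells the current world model does not mark as blocked.
        options = [cell for cell in neighbors if cell not in self.model.blocked]
        if not options:
            return  # boxed in, at least according to the model
        target = random.choice(options)
        if self.sense_obstacle(target):
            # Perception contradicts the plan: revise the world model.
            self.model.blocked.add(target)
        else:
            # Act: move into the cell now believed, and sensed, to be clear.
            self.position = target

robot = Robot()
for _ in range(50):
    robot.step()
print("position:", robot.position, "| cells mapped as blocked:", len(robot.model.blocked))
```

A loop like this is enough to steer a device around a room, but nothing in it resembles living in a body in the sense developed below.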

Dynamically mobile robots like these can independently make functional decisions about taking physical action. But do they qualify as agentic? How are they similar to or different from being a person with a body? 

To better understand the psychobiological substrata of cognition, I think the more pertinent questions are, “What’s it like to be a person living in a body?” and “How does having a body affect who we are and how we think?”

Our bodies are central to forming who we become, how we experience ourselves and the world around us, and with specific reference to AI, how we think. 

We are born into a state of complete helplessness and dependency on caregivers, a period lasting for many years. At birth, our sense organs and major internal systems are underdeveloped. Custodial maintenance of the infant’s body only fosters physical survival; being psychologically cared for and emotionally cared about are crucial to thriving.

More than a century of research and clinical data validates our understanding that contemporary medical knowledge about our physical systems barely scratches the surface of the profundity of what it is to live in our bodies. We are embodied within our skin, a sensory organ vital to life. But the skin is also, as French psychoanalyst Didier Anzieu postulated, a psychical wrapping containing, defining, and consolidating the emerging self through touch. In addition, how we appear has immense significance to the formation of psychological interiority; we internalize how we are seen. 

Our biological existence is inextricably linked to time. Throughout a life cycle, our experience of time’s passing is a feature of psychological development and has a profound influence on how we think and make decisions. Different aspects of personality accompanied by wants, needs, and drives typically develop at phase-appropriate milestones, and problems occur when they don’t. We anticipate, enjoy, and mourn the physical, emotional, sexual, and maturational transformations we and others around us undergo, together with the important intimate attachments we make and cherish and whose absences we grieve. We continually orient choice-making to the awareness that in any given moment, what comes next or doesn’t could change the course of our life. 

Each of us is shaped by the totality of the unique subjective experiences that come to constitute our own life history. That history is only recallable in part. Much of what we remember or think we know is incomplete, inaccurate, distorted, and fragmented. Or it has been suppressed, repressed, or disavowed. Infancy and significant portions of early childhood and formative experience are neurologically and cognitively inaccessible. As a consequence, large swaths of personal history are subject to speculation, inference, extrapolation, and revision, as well as imagination, fantasy, fictionalization, and confabulation.

Everyone has a history. But to understand it, we must be historians and autobiographers, supplementing records, memories, and others’ narratives with our own questions and investigations of ourselves. In what circumstances did we arrive? What were the physical and emotional environments of early life? How were we treated? How were we touched and held? How were we looked at? Were we neglected? Ignored? Demeaned? Or admired? Delighted in? Did early life involve abrupt separations from or abandonments by primary caregivers? Were there emotional or physical boundary violations? Fear and danger? Or safety and trust, enabling us to establish more secure attachments? Was there laughter and humor? Or oppression and dourness? Did the people around us relate to each other with tenderness, care, and respect? Or were relationships dominated by physical or psychological abuse? Disregard? Self-dealing transactionalism, manipulation, denigration, coercion? Or empathy, compassion, protection, and reciprocity?

There is no universal optimal model of human development, emotional equilibrium, and cognitive capability. The encyclopedia of human emotions and the circumstances and experiences underpinning them are vast. Indeed, there is an astronomically large and diverse body of empirical research and clinical data detailing the unfathomable range and variability of situations and relationship dynamics between individuals and within families and cultures, as well as other constellations of factors that influence physical, emotional, and psychological development.

In psychological memory, nothing is ever truly forgotten even if many things cannot be remembered. Both physical and psychological trauma are indelibly imprinted in our minds and bodies. Traumatic experiences can literally reshape and restructure our brains. Some scars will be somatically observable—tangible body malfunctions and dysfunctions, eating disorders, addictions, uncontrollably dysregulated behavior, derelict self-care, self-harm, suicidality. Others may be less explicit—repetitive but disguisable struggles or suffering: depression, anxiety, compulsivity, obsessionality, torment, and problems with learning or focusing, experiencing contentment, sustaining intimacy and friendships, or managing anger, among many others.

These issues commonly present first in symptoms—a sign and an act of communication, presenting a complex diversity of possible meanings which may be overdetermined and multifunctional. To gain insight into their causes and purposes, symptoms must be deconstructed or reverse engineered. This is especially significant in matters psychological, where a visible sign can be a distorted representation (or misrepresentation) of its roots and functions. What’s apparent may be a fiction or a red herring, something unconsciously designed to disguise or transform to protect and conceal even as it also proclaims and communicates. Its purpose may be obscured in chains of nesting contradictions: to signal and censure, preserve and destroy, freeze and liberate.

Language—a key component in AI and the primary gateway to naturalistic machine-to-human communication—is contextually and culturally mediated and often ancillary to intent and meaning. Much of how we communicate occurs outside spoken language, not just nonverbally but through devices for obscuring or dislocating meaning such as metaphor, simile, irony, and symbolic displacement. Effectively navigating the world requires our ability to infer, extrapolate, and grasp context and subtext; to decode tone, prosody, silence, allusion, and references both intimate and social. While we take comprehending these aspects of social life for granted, all require astronomically complex mental processes to achieve; many are rooted in our bodies. British psychoanalyst Ella Freeman Sharpe observed in 1940 that “words both reveal and conceal thought and emotion … metaphor fuses sense experience and thought in language.” Almost half a century later, George Lakoff and Mark Johnson indirectly took up Sharpe’s work, proposing that metaphors are not just linguistic conventions but deeply grounded in bodily experience and fundamental to our understanding of the world.

We refract information through the prism of our own developmental history. Every microdecision we make or don’t make is, in some or other way and whether we’re cognizant of it or not, the telescoping byproduct of everything we’ve experienced.

As things stand, AI seems wholly incapable of reproducing or credibly emulating such complex and nuanced dynamics of human experience and thought. 

The Fantasy of Engineered Cognition 

The project to engineer intelligent machines has always been linked to an idealized fantasy of pure cognitive decision-making. We are, it seems, enduringly enthralled by the wish for a perfect brain-in-a-vat unburdened by the needs and challenges of the physical body and devoid of affect, irrationality, fallibility, vulnerability, pain, memory, or other vicissitudes of the human condition. Humanoid robots, cyborgs, and other synthetic super-intelligent entities have long been an appealing leitmotif in sci-fi. Once the stuff only of writers’ and filmmakers’ imaginations, such entities now exist in production-grade iterations that are commonplace in the real world.

Ambivalence and hostility toward our physical and emotional selves—the drive to eradicate or replace ourselves with an “improved” version—is woven into the history of civilization. Our self-loathing found canonic expression and codification centuries ago in mythologies and religious beliefs and rituals which ennobled doctrines, still popular today, for proscribing, stigmatizing, and punishing all manner of thoughts and impulses.

The contemporary apotheosis of this drive, to which many of the principal innovators, tech and business leaders, investors, and others in Silicon Valley holding power and influence subscribe, is a techno-utopian vision of the future known as TESCREAL, an acronym which denotes “transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism,” according to Timnit Gebru and Émile P. Torres.

The two authors of the paper on TESCREALism make the case that the driving motivation to build AGI is a set of ideologies they label the “second wave of Anglo-American eugenics”—discriminatory attitudes (racism, xenophobia, classism, ableism, and sexism) that “harm marginalized groups and centralize power, while using the language of ‘safety’ and ‘benefiting humanity’ to evade accountability.”

In his recent book, Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation, Greg Epstein, a theologian and ethicist-in-residence at Harvard and MIT, argues that “the tech world’s fixation on artificial intelligence has spawned beliefs and rituals that resemble religion, complete with digital deities, moral codes, and threats of damnation.”

As a human invention, AI could only reflect our wants, needs, and fears. But unlike technologies conceived for purely utilitarian purposes, AI is the product of a unique mission to build “intelligent machines that could perform the most advanced human thought activities” as the Dartmouth Group proposed in the mid-1950s. I described this in a 2019 Forbes article as a paradigm shift in the dominant principles governing human tool-making and innovation: “our tools evolved from mechanisms of necessity to those which can assist us and enhance our lives to now outsourcing self-awareness, self-knowledge, and self-agency.”

But the vast panoply of human emotions and experiences—the underpinnings of thought and intellect, as I’m contending—are not algorithmic rounding errors to be resolved. And although computer scientists are trying, the events and experiences of a life cannot be composited, synthesized, homogenized, or rendered as an equation.  

To counter this, increasing attention is being focused on the psychological dimensions of language, reasoning, thought, and child development and learning in the context of working to enhance computational systems. Tomer Ullman, an assistant professor in Harvard’s Department of Psychology and head of the Computation, Cognition, and Development lab, is examining common-sense reasoning and high-level cognitive processes. Murray Shanahan, a professor in cognitive robotics at Imperial College London and principal scientist at Google DeepMind, focuses on the principles that underlie sophisticated cognition in nature. Melanie Mitchell, the Davis Professor of Complexity at the Santa Fe Institute, researches conceptual abstraction and analogy making in humans and AI systems. And Alison Gopnik, a professor of psychology at the University of California, Berkeley, studies infants and preschoolers to better understand how children acquire information, consolidate generalizable understandings about themselves and the world, and develop capacities for adaptational change over time and across a variety of situations. Much of her current work is in collaboration with computer scientists and looks to revise thinking and practices around how computational systems are trained, replacing the ingestion of massive quantities of data with processes more like how children learn.

The importance of these researchers’ work, and others like them, cannot be overstated. 

But still less represented are psychoanalytic understandings of inner life—unconscious processes and the history of emotions and relationships—in the formation of mental architecture and cognitive capability. 

Steps toward redressing that imbalance are gradually being taken. For instance, Luca Possati, a trained philosopher whose research is primarily focused on the philosophy and psychology of technology, has written on what he calls the algorithmic unconscious. Fernando Castrillon, a trained and practicing psychoanalyst, and Leora Trub, director of the Pace University Digital Media and Psychology Lab, both study the intersections between psychoanalysis and emerging technologies. And The CAI Report, a new publication of the American Psychoanalytic Association’s Council on Artificial Intelligence (and of which I’m editor in chief), is dedicated to offering psychoanalytic perspectives on issues involving AI’s uses, applications, risks, and benefits to individuals and in society.

More is needed. I maintain that psychoanalytic knowledge and expertise are critical to improving the design, development, and safe and ethical uses of AI. Of course, the AI train, to bend a famous saying, has already left the station. But it’s not too late to shine bright, psychoanalytically informed lights on the blind spots, assess the technical debt (increasing future cost through prizing expedience over quality), and start to course-correct. 

Computation is Not Mentation 

Every day brings new white papers and press releases touting commercial-grade breakthroughs in artificial emotions and approximated empathy. Tech entrepreneurs, developers, and programmers are intent on commercializing such capabilities as more applications enter mainstream use where a familiar, trustworthy human-like interface is required.

This is technosolutionism—promising technological solutions to problems that are not solvable with technology—in its most egregious form.  

AI systems do not understand the real world and cannot reason, plan, or exercise common sense. No matter what their natural language outputs say, they cannot understand meaning, empathize, care, or comprehend or feel emotions. They have no past and no memory, and even if “memories” of a “history” were encoded, they would be fictive, disconnected from experience, and vacant of significance. They possess no intrinsic drive to do or not do anything. They want nothing.

This is a problem if a humanlike result is the goal. We’re tasking computational systems to intervene in aspects of human affairs where their performance deficiencies are substantial. AI lacks the subjective experience definitively constitutive of human thought and thus, in its current forms, irremediably falls short of attaining humanlike intelligence. 

I see this as a failure of imagination more than a technological limitation. Perhaps one day, some next-generation AI systems might demonstrate the capability for alternative forms of learning, understanding, and thinking—an agentic, aware, even sentient, nonbiologic intelligence. But it would be distinct from the pseudo-humanlike intelligence of today. That sort of computational intelligence might then begin to fulfill the worthwhile goal of augmenting and complementing human capabilities.

As for now, Generative AI systems and applications, gap-leaping advances and improvements notwithstanding, cannot and will not attain truly human-like thought or understanding.

Now What?

I see three main pathways forward: 

1. Clarify and re-scope commercial AI applications and services to more accurately reflect actual capabilities. 

2. Revise the goals for frontier AI away from emulating humanlike forms of intelligence and decision-making. R&D should instead focus on developing computational systems that can unequivocally leap innovation and problem-solving gaps that we cannot, and that can partner with or even lead us in addressing issues and scenarios where our biases, emotionality, cognitive limits, and subjective histories are impediments, not assets.

3. Take the further development and enhancement of frontier AI as a springboard for more sophisticated, destigmatized study of the human mind, not a super-charged sidestepping of it. Rather than attempting to outsource solutions to fundamentally human problems to machines, we could use computational systems to catalyze more effective engagement with widespread social and mental health issues through deep reflection on, and understanding of, ourselves.


Alexander Stein, PhD, is a psychoanalyst and the founder of Dolus Advisors, a strategic consultancy specializing in leadership, organizational governance and culture, fraud, corruption, and abuses of power, human risks in cybersecurity, and technologies that assume decision-making functions in human affairs.


Published March 2025