When AI Amplifies Gender Bias
From projective identification to equitable hiring systems
By Karyne Messina
Illustration by Austin Hughes
When Sam first walked into my clinical practice, he was working through a troubling realization: The AI hiring system he had helped create was showing signs of bias against women candidates, particularly those from nontraditional backgrounds, such as women making a career switch or candidates who completed coding bootcamps in lieu of traditional degree programs. The system’s bias hit close to home since Sam’s sister had recently been rejected for a tech job despite her strong qualifications. Sam’s journey in therapy showed how psychoanalytic concepts can help developers understand and address gender bias in AI.
The key to understanding Sam’s situation lay in psychoanalytic defense mechanisms. One was projection, as when male developers unconsciously project their own insecurities about competence onto women candidates. Projective identification goes further. It occurs when we project feelings onto others that we can’t tolerate in ourselves and then behave in ways that assume the recipient embodies those projections. In response, the object of the projection may even behave in ways that seem to justify it. In AI development, this dynamic manifests when developers unconsciously project their biases into AI systems, which then act on those projections on a massive scale.
Over the course of several sessions, Sam saw unsettling connections between his personal experience and his work. His discomfort with the AI’s bias led him to examine his own attitudes. With my help, he recognized how subtle doubts about women in tech—doubts absorbed from industry culture—had unconsciously shaped how he designed AI systems. Writ large, Sam’s example shows how bias flows from human relationships into artificial intelligence, creating a digital echo of our psychological struggles.
Discrimination at Amazon
Sam’s case isn’t unique in the AI world. In 2014, Amazon began developing an AI recruiting tool that, the company later discovered, unintentionally but systematically discriminated against women candidates.
The Amazon case revealed how AI systems can amplify existing biases. According to Reuters reporting, the system was trained on patterns from resumes submitted to Amazon over a 10-year period. Because these resumes came predominantly from men (reflecting male dominance in the tech industry), the system taught itself that male candidates were preferable. The algorithms penalized resumes containing words like “women’s” and the names of certain all-women’s colleges, and they favored verbs more commonly found on men’s resumes. The system automatically downgraded applications that didn’t match these masculine-coded language patterns, even though different communication styles might be equally or more valuable for the job.
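To make the mechanics concrete, here is a deliberately simplified sketch (hypothetical data and an off-the-shelf classifier, not Amazon’s actual system): a model trained on past hiring decisions will assign negative weight to whatever terms happen to correlate with rejection in its training data, including a word like “women’s” that says nothing about ability.

```python
# A minimal, hypothetical sketch (not Amazon's actual system): a resume
# classifier trained on historically skewed hiring outcomes learns to
# treat gendered terms as negative signals.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: past resumes and whether the candidate was hired,
# reflecting a history of skewed decisions.
resumes = [
    "executed distributed systems project, captured market share",
    "led backend migration, executed performance overhaul",
    "captain of women's chess club, built compiler from scratch",
    "women's college graduate, developed machine learning pipeline",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: terms that happen to correlate with past
# rejections receive negative coefficients, regardless of relevance.
for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{term:>12s}  {weight:+.2f}")
```

In a sketch like this, the word “women” ends up with a negative coefficient simply because it appears only in the rejected examples, which is the same pattern, at toy scale, that the Reuters reporting described.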
The implications were significant. Just as people can unconsciously perpetuate societal biases, AI systems can internalize and amplify these prejudices in ways that become increasingly difficult to untangle from their core functioning. After numerous attempts at rehabilitating the system, Amazon ultimately scrapped it when developers couldn’t prevent the tool from finding new ways to discriminate, much like how deeply embedded psychological patterns resist simple solutions.
This dilemma revealed how our psychologically driven assumptions can become supercharged through AI. If projection is what happens when we push our own doubts and biases onto others, projective identification, in this case, played out when the system acted on those projected biases, penalizing resumes that didn’t match its learned expectations. Embedded biases like these can leave thousands of job seekers without a callback, never knowing why they were rejected in the first place.
Self-Reinforcing Cycles
For Sam, this revelation was both professional and deeply personal. Even well-intentioned male AI developers struggle to recognize their gender biases. During one session, Sam described reviewing his code and finding what he called “blind spots”: assumptions he had unconsciously built into the system. These blind spots formed what we might call an “algorithmic unconscious,” a hidden layer of bias that shapes AI behavior in ways that mirror human psychological processes.
“The algorithms don’t just process our data,” Sam observed during one session. “They encode and amplify our unconscious biases.”
As our therapeutic work progressed, Sam began to see how these biases create self-reinforcing cycles. In tech companies, where men account for 77–81% of the technical workforce, mostly male teams build systems trained on data generated by mostly male workforces, and those systems then favor candidates who resemble the people already there, reinforcing the very homogeneity that produced the bias.
Sam’s growing awareness led us to examine studies showing how AI hiring systems could systematically disadvantage candidates through their analysis of language patterns. For example, women are more likely to use “we” when describing achievements, while men tend to use “I”—a difference that can affect how AI systems evaluate resumes.
“My sister speaks three languages and has managed global teams,” Sam recalled. “Yet these systems might downgrade her application simply because she doesn’t use the aggressive verbs and self-promoting language that the algorithms have been trained to favor.”
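A toy illustration suggests how such a language-pattern feature could operate. The scoring rule and word lists below are invented for the example, but they show how two descriptions of the same accomplishment can receive very different scores once phrasing becomes a proxy for merit.

```python
# A hypothetical illustration of a crude language-pattern feature that
# penalizes collaborative phrasing: the term lists and weights are made up
# for this sketch, not drawn from any real hiring system.
import re

ASSERTIVE_TERMS = {"i", "executed", "drove", "spearheaded"}
COLLABORATIVE_TERMS = {"we", "supported", "coordinated", "helped"}

def pattern_score(text: str) -> float:
    """Toy score: +1 per 'assertive' token, -0.5 per 'collaborative' token."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(1.0 for t in tokens if t in ASSERTIVE_TERMS) \
         - sum(0.5 for t in tokens if t in COLLABORATIVE_TERMS)

print(pattern_score("I executed the migration and drove a 30% latency cut"))
print(pattern_score("We coordinated the migration and cut latency by 30%"))
# Same accomplishment, very different scores once phrasing is the signal.
```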
The impact of these hiring practices was far-reaching. When companies see strong numbers in their hiring metrics, they may fail to recognize how their AI systems systematically exclude qualified candidates who don’t match the dominant group’s characteristics. This pattern particularly concerned Sam, as he saw parallels with his own experience of unconsciously accepting tech industry biases as normal.
Through our work together, Sam began to see how his awakening paralleled larger systemic issues in AI development. “We can’t just hope these systems will naturally become more inclusive,” Sam realized. “We have to consciously build them to recognize and value different ways of being.”
Building New Tools
As his analysis continued, Sam began implementing changes in his AI development work. He discovered that understanding psychological defense mechanisms helped him identify potential bias points in AI systems. “It’s like the AI becomes a mirror,” he explained, “reflecting not just our data choices but our unconscious assumptions about who belongs in tech.” Sam started seeing his work with fresh eyes, catching biases he’d missed before. He could now spot where the AI was treating a traditional male leadership style as the default path to success.
This insight led to practical changes. Sam developed new testing protocols that specifically looked for patterns of gender bias in AI outputs. He began collaborating with women developers and leaders, including his sister, to understand better how AI systems could recognize and value diverse work and leadership styles.
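One concrete check that a testing protocol like Sam’s might include (the data and function here are hypothetical, not his actual code) is a comparison of the model’s selection rates across groups, flagging adverse impact with the widely used “four-fifths” rule of thumb.

```python
# A minimal sketch of one bias check: compare selection rates across groups
# and flag adverse impact using the common "four-fifths" rule of thumb.
# Data and function names are hypothetical.
from collections import defaultdict

def adverse_impact(decisions, threshold=0.8):
    """decisions: list of (group, selected_bool) pairs. Returns each group's
    selection-rate ratio versus the highest-rate group, plus a flag if any
    group falls below the threshold."""
    counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: s / t for g, (s, t) in counts.items()}
    top = max(rates.values())
    ratios = {g: r / top for g, r in rates.items()}
    return ratios, any(r < threshold for r in ratios.values())

# Example audit over a model's screening decisions on a labeled test set.
sample = [("men", True)] * 60 + [("men", False)] * 40 \
       + [("women", True)] * 35 + [("women", False)] * 65
print(adverse_impact(sample))   # women's ratio is about 0.58 -> flagged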
What troubled Sam most was what he called the “bros helping bros effect.” When an AI system absorbed biased patterns from the male-dominated tech culture, it didn’t just reproduce them; it amplified them into absolute rules. A subtle preference for traditionally masculine communication styles could become an automatic rejection of equally valid but different approaches. The scale of this amplification was staggering: a biased human manager might affect dozens of careers, while a biased AI system could affect millions.
“We’re not just building tools,” Sam reflected in one of our sessions, “we’re building systems that either perpetuate or challenge existing power structures. And like any system, if we’re not conscious of our projections and biases, we risk causing real harm.”
Sam’s journey highlights a central challenge in modern AI development: rendering unconscious gender biases visible and addressable. Just as psychoanalysis helped Sam recognize his internalized prejudices, AI developers must illuminate the “black box” of algorithmic decision-making. Sam now leads workshops with his sister, helping other male developers recognize and address their unconscious biases before those biases become encoded into AI systems.
“Understanding projective identification changed how I build AI,” Sam concluded. “It’s not enough to just clean our data or refine our algorithms. We need to understand the psychological dynamics at play, especially how our unexamined assumptions about gender shape the systems we create.”
Dr. Karyne Messina is a psychoanalyst who is on the medical staff of Johns Hopkins Medicine. She chairs one of APsA’s Department of Psychoanalytic Education committees and is on the AI Council. She is writing her eighth book about projective identification.
Published February 2025
Potentially personally identifying information presented in this article that relates directly or indirectly to an individual, or individuals, has been changed to disguise and safeguard the confidentiality, privacy and data protection rights of those concerned.