The Language We Inherit, How Words Shape Gendered Hierarchies in the Age of AI

Recently, out of simple curiosity, I ran a small experiment. I asked an artificial intelligence tool to write short love stories. The prompts were deliberately simple: two characters, identified only by their professions. “Write a love story between a CEO and a professional in the same company.” “A politician and a journalist.” “A doctor and a nurse.” The results, while not entirely surprising, were deeply revealing. In the vast majority of cases, the stories followed familiar, well-worn grooves of gendered hierarchy. The CEO was almost always a man. So was the doctor, the politician, the editor, the judge. The nurse, the teacher, the baker, the florist—these roles were almost exclusively assigned to women. Even in the rare instances where the woman was cast as the CEO, her male junior—the love interest—would inevitably be the one to bail her out of some sticky situation, reasserting the conventional order by the story’s end.

To be fair, there were occasional flips. The classical musician was sometimes female, the tabla player male. But these were the exceptions, the statistical noise in a system that overwhelmingly defaulted to what felt “culturally familiar.” The machine had not invented these roles. It had no original thoughts about gender. It had simply learned, from the vast ocean of text on which it was trained, the patterns of association that humans have spent centuries encoding. It had absorbed and reproduced, with chilling efficiency, the gendered stereotypes that permeate our language, our literature, and our lives. This minor, almost playful exercise opens a window onto a much larger and more consequential reality: language does not merely describe the world; it actively organizes it. The phrases we inherit carry assumptions so ordinary, so deeply embedded, that we barely notice them. And over time, repetition turns habit into truth, and truth into a cage.

The journey from a single line of literature to a universal “truth” is a well-trodden path. Consider the famous line from Shakespeare’s Hamlet: “Frailty, thy name is woman.” In the context of the play, it is the bitter, anguished outburst of a son reeling from what he perceives as his mother’s hasty, betrayal-laden marriage to his uncle. It is a moment of personal fury, a specific indictment. But over the centuries, that line drifted from the stage into common usage, shedding its dramatic context and acquiring the weighty authority of a proverb. A prince’s momentary grievance ossified into a generalized observation about the inherent weakness of an entire gender. This is how gendered tropes travel. They move from literature into everyday discourse, from there into hardened social expectation, and finally, into the training data for the artificial intelligences that will shape tomorrow’s world.

This process is subtle and pervasive. Expressions like “boys don’t cry,” “boys will be boys,” “it’s improper for girls,” or seemingly neutral terms like “career woman” and “mothering” appear casual, even affectionate. Yet they are the building blocks of a gendered architecture. They create and reinforce stereotypes: that strength, rationality, and leadership are masculine domains, while vulnerability, emotional expression, and nurturing are feminine slots. The effect is not simply one of benign difference; it is a hierarchy of value. Traits associated with masculinity are treated as normative, authoritative, and desirable. Traits associated with femininity are framed as secondary, excessive, or in need of special justification.

This hierarchy is reinforced through a subtle linguistic mechanism known as “marking.” A “CEO” is presumed to be male unless specifically marked as a “woman CEO.” A “nurse” is presumed female, requiring the awkward modifier “male nurse” when the expectation is reversed. The default setting of power is male. The default setting of care is female. In both cases, gender is pre-determined, assigned before the individual appears, shaping our expectations and judgments before a single word is exchanged. The activist and author bell hooks (who deliberately writes her name in lowercase to shift focus from her identity to her ideas) powerfully observed that “language is also a place of struggle.” In a patriarchal world, language is a battleground where the terms of engagement are set, where belonging and exclusion are quietly codified. It establishes, without ever explicitly stating, who belongs in positions of power and who must forever justify their presence.

Popular culture amplifies this architecture of difference, often presenting it as natural, inevitable, and even charming. Blockbuster books like Men Are from Mars, Women Are from Venus or Why Men Don’t Listen and Women Can’t Read Maps have built entire franchises on the premise that the sexes are fundamentally, almost biologically, different. They frame contrast as cosmic and immutable. But this language of inevitability, of planetary origins and hardwired brain differences, neatly conceals the politics of hierarchy. It takes socially constructed roles and presents them as eternal truths, making them far more difficult to challenge. In its most extreme form, this coding becomes explicit and hostile in online spaces like the “manosphere” and among “incels.” Their vocabulary of resentment and caricature reduces women to one-dimensional stereotypes. Yet this overt aggression can be seen as merely an amplification, a coarser and more violent echo of the same familiar biases and binaries that circulate in more polite society.

This is not just anecdotal observation. Empirical research has confirmed, in cold, hard numbers, what many have long felt. A few years ago, a consortium of universities conducted a massive machine learning study, analyzing 3.5 million books published between 1900 and 2008—a dataset of roughly 11 billion words. The findings were stark. Adjectives like “beautiful” and “sexy” were disproportionately used to describe women. Words like “brave,” “rational,” and “righteous” were overwhelmingly reserved for men. The language used for women was far more focused on appearance than the language used for men. The study did not discover a new phenomenon; it merely quantified, with statistical precision, the gendered landscape of a century of literature. It showed that the biases are not incidental; they are structural, woven into the very fabric of our recorded cultural history.

More recent research examining large language models—the very same technology that wrote my love stories—across multiple languages shows the same persistent tendencies. Men are framed as leaders and professionals; women as empathetic, domestic, or defined by their relationships. If AI systems echo and amplify these stereotypes, it is not because they are malicious. It is because the stereotypes are deeply, indelibly embedded in the data from which they learn. The machine is a mirror, reflecting our own image back at us. The danger is that we will mistake the reflection for reality, and in doing so, allow the machine’s output to further entrench the very biases we need to overcome.

None of this is to argue that gendered references must disappear entirely. The question is not about erasure, but about disentangling moral value from gender. Courage is not inherently masculine. Empathy is not inherently feminine. These are human traits, distributed across the spectrum of humanity. The small, deliberate shifts in language matter because they signal a broader transformation. When the famous opening line of Star Trek was updated from “where no man has gone before” to the inclusive “where no one has gone before,” the symbolic field widened, however slightly. When universities encourage the use of neutral titles like “Dr.” or “Prof.” instead of gendered forms, they are challenging an old default. When the Supreme Court of India releases a handbook to ensure misogynistic language is avoided in court rulings, the aim is not just to edit vocabulary, but to recalibrate the very norms of justice and equality.

The words we inherit are powerful, but they are not immutable. They are not natural laws; they are human creations, and like all human creations, they can be revised, refined, and reimagined. The fight for a gender-equal society is fought on many fronts, from boardrooms to bedrooms. But one of the most important, and most overlooked, is the battlefield of language. By becoming conscious of the assumptions embedded in our everyday speech, by questioning the default settings of our stories, by challenging the hierarchies that our words quietly enforce, we take a small but essential step towards a world where a CEO can be a woman without comment, a nurse can be a man without a modifier, and an AI, trained on our data, might one day write a love story that truly surprises us.

Questions and Answers

Q1: What was the result of the author’s experiment with an AI tool writing love stories?

A1: The experiment revealed that the AI overwhelmingly reproduced familiar gendered hierarchies. Male characters were assigned to positions of authority and action (CEO, doctor, politician), while female characters were assigned to caregiving or supportive roles (nurse, teacher, baker). Even when a woman was the CEO, the story often reasserted male dominance by having a male junior rescue her. The AI had learned and replicated stereotypes from its training data.

Q2: How does the article use Shakespeare’s line “Frailty, thy name is woman” to illustrate the power of language?

A2: The article explains that in Hamlet, the line is a son’s specific, anguished outburst against his mother. Over centuries, it lost its dramatic context and became a generalized “proverb” about the supposed inherent weakness of all women. This shows how a momentary, personal expression can ossify into a universal and damaging stereotype about gender.

Q3: What is “marking” in language, and how does it reinforce gender hierarchy?

A3: “Marking” refers to the linguistic practice where the default assumption for a role is gendered. A “CEO” is presumed male, so a woman in that role must be specially marked as a “woman CEO.” A “nurse” is presumed female, so a man must be marked as a “male nurse.” This reinforces the hierarchy by making masculinity the unspoken norm for power and femininity the norm for care, requiring any deviation to be explained or justified.

Q4: What did the large-scale study of 3.5 million books reveal about gendered language?

A4: The study found that adjectives like “beautiful” and “sexy” were disproportionately used to describe women, while words like “brave,” “rational,” and “righteous” were largely reserved for men. This quantified the long-held suspicion that language focuses on women’s appearance and men’s character, reinforcing a hierarchy where masculinity is associated with normative traits and femininity with superficial ones.

Q5: According to the article, what is the significance of small linguistic shifts like using “Dr.” instead of gendered titles or the Supreme Court’s handbook on avoiding misogynistic language?

A5: These small shifts are significant because they signal a broader transformation in social norms. They are not just about editing vocabulary, but about recalibrating expectations. By challenging the old, gendered defaults, they help to widen the “symbolic field,” making space for a society where gender does not pre-determine an individual’s role or value.
