Stanford University: Researchers uncover AI bias against older working women
- Global Research Partnerships
- Oct 16
- 1 min read

Just how deeply embedded are social biases about gender and age? A new study published in Nature finds that inaccurate stereotypes about older women are not only pervasive in online images and videos but are perpetuated and amplified by large language models (LLMs).
While previous research has focused on age-related gender bias in specific settings, this research aims to "characterize a culture-wide trend," explains Douglas Guilbeault, an assistant professor of organizational behavior at Stanford Graduate School of Business. In a series of large-scale studies conducted with Solène Delecourt, PhD '20, of the University of California, Berkeley, Haas School of Business, and Bhargav Srinivasa Desikan of the University of Oxford/Autonomy Institute, Guilbeault found widespread evidence of bias against older women on popular image and video sites and in the algorithms that power popular AI tools such as ChatGPT.
The study explored how gendered expectations shape our mental picture of women at work – including a tendency to see women in certain jobs as younger than they are. Previous research into this bias has looked at "value-based judgments that it is 'bad' to be an older woman," Guilbeault says. This new research took a broader view, exploring how assumptions about gender and age shape depictions of women in particular roles. "We were first looking at the statistical relationship," Guilbeault says. "Before we even talk about bias – do people simply perceive women in some jobs as younger, period?"