Gender bias in visual categorization: When DiCaprio is an actor but Jolie is a woman

Poster Presentation 16.335: Friday, May 15, 2026, 3:45 – 6:00 pm, Banyan Breezeway
Session: Face and Body Perception: Social cognition 1

Rui Zhe Goh1, Jiayi Li2, Ian Phillips1, Chaz Firestone1; 1Johns Hopkins University, 2Cambridge University

Objects can be categorized at differing levels of abstraction; for example, the same image may be classified as a blue jay, a bird, an animal, or a living thing. This pattern also arises for people; for example, the same image may be classified as Gordon Ramsay, a chef, a Scot, or a person. But which categories are most visually salient—and is such salience biased by gender? We investigated this question using a visual categorization task. Subjects saw images of objects and people, and were asked simply to name any category corresponding to the image. In addition to objects such as vegetables and buildings, each subject saw one woman and one man, drawn from one of the following classes: 1) famous people from various professions (e.g., Angelina Jolie and Leonardo DiCaprio, Serena Williams and LeBron James); 2) stock photographs of anonymous people from minimally-gendered categories (e.g., tourist, pedestrian); 3) AI-generated images of these same minimally-gendered categories, created to be identical except for gender. Every image was part of a matched pair of men and women from the same image class, but no subject saw a man and woman from the same matched pair. (For example, a subject could see Angelina Jolie and LeBron James, but never Angelina Jolie and Leonardo DiCaprio.) Remarkably, subjects were more likely to categorize women according to their gender than men. Put differently, they were biased to say “actor” for DiCaprio but “woman” for Jolie, and “pedestrian” for male pedestrians but “woman” for female pedestrians. This trend arose for all three image classes, suggesting that it was not driven by idiosyncratic image differences or knowledge of the people depicted. Our results reveal a gender bias in visual categorization that may have pervasive cognitive and social implications.