Efficient Eyes: Face Recognition Ability Shapes How Much Information Is Needed—and How Much the Eyes Are Used—for Identity and Gender Processing
Poster Presentation 43.334: Monday, May 18, 2026, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Face and Body Perception: Individual differences
Laurianne Côté1, Jérémy Lamontagne1, Mélodie Potvin-Poirier1, Caroline Blais1, Daniel Fiset1; 1Université du Québec en Outaouais
Previous studies have documented a tight link between face-recognition ability and the amount of visual information required to perform a face-recognition task (Royer et al., 2015). Better recognizers not only succeed with less information but also rely more on the eye region (Royer et al., 2018), a feature strongly associated with recognition success (e.g., Butler et al., 2010). However, it remains unclear whether the best face recognizers are more efficient only when processing identity, or also in other eye-based tasks such as gender categorization (e.g., Dupuis-Roy et al., 2009). In the present study, 97 participants completed two tests of face-recognition ability (CFMT and GFMT), an identity-matching task, and a gender-categorization task. To quantify the information required for task performance, we combined the Bubbles technique, which randomly samples face regions, with QUEST (Watson & Pelli, 1983) to estimate the minimum information needed to reach a 75% accuracy threshold. Our findings replicated previous work: individuals with superior face-recognition ability required significantly less visual information to perform the identity-matching task (r=–.58, p<.001). Face-recognition scores were likewise associated with needing less visual information to perform the gender-categorization task (r=–.26, p=.009), indicating that superior face recognizers require less visual information whether judging identity or gender. Furthermore, the amount of visual information each participant required in the two tasks was positively correlated (r=.47, p<.001), suggesting a stable, cross-task signature in the quantity of information each individual needs to process faces. Importantly, this link extended to which information was used: replicating prior work, better face-recognition abilities predicted greater reliance on the eye region in both the identity-matching (r=.27, p=.008) and gender-categorization tasks (r=.25, p=.012).
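The adaptive logic described above, in which the amount of revealed information is adjusted trial by trial until performance settles at 75% correct, can be illustrated with a simplified sketch. The code below uses a weighted 1-up/1-down staircase as a stand-in for the actual QUEST procedure, and a hypothetical simulated observer whose accuracy grows logistically with the number of Gaussian apertures ("bubbles"); the observer parameters and step sizes are illustrative assumptions, not values from the study.

```python
import math
import random

random.seed(1)

def simulated_observer(n_bubbles, threshold=30.0, slope=0.15):
    """Hypothetical observer: accuracy rises with the number of bubbles
    revealing the face, following a logistic curve that passes through
    75% correct exactly at `threshold` bubbles (illustrative values)."""
    p_correct = 0.5 + 0.5 / (1.0 + math.exp(-slope * (n_bubbles - threshold)))
    return random.random() < p_correct

def staircase_threshold(n_trials=400, start=60.0):
    """Weighted 1-up/1-down staircase (a simplified stand-in for QUEST):
    decrease the bubble count after a correct response, increase it after
    an error. With step_up/step_down = 3, the track converges where
    p * step_down = (1 - p) * step_up, i.e. p = 0.75 correct."""
    n = start
    step_up, step_down = 3.0, 1.0
    history = []
    for _ in range(n_trials):
        correct = simulated_observer(round(n))
        n += -step_down if correct else step_up
        n = max(1.0, n)
        history.append(n)
    # Average the second half of the track as the threshold estimate.
    return sum(history[n_trials // 2:]) / (n_trials // 2)

est = staircase_threshold()
print(f"Estimated bubble count for ~75% accuracy: {est:.1f}")
```

Because the simulated observer's 75%-correct point is set at 30 bubbles, the staircase estimate should hover near that value; in the actual study, QUEST plays this role, and the converged quantity of sampled information per participant is the efficiency measure correlated with CFMT/GFMT scores.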
Together, these findings extend previous results, showing that face-recognition ability predicts the use of diagnostic eye information across tasks where this feature is relevant.
Acknowledgements: Natural Sciences and Engineering Research Council of Canada (NSERC) and Fonds de recherche du Québec - Nature et technologies