Unraveling the Neural Code for Real-Life Facial Expression Perception

Poster Presentation 36.407: Sunday, May 19, 2024, 2:45 – 6:45 pm, Pavilion
Session: Face and Body Perception: Neural mechanisms 1

Arish Alreja1, Michael Ward2, Taylor Abel3, Mark Richardson4, Louis-Philippe Morency1, Avniel Ghuman3; 1Carnegie Mellon University, 2University of California, Los Angeles, 3University of Pittsburgh, 4Massachusetts General Hospital and Harvard University

We study face perception to understand how our brains process the identity, expressions, and facial movements of friends, family, coworkers, and others in real life. Controlled experiments have revealed many aspects of how the brain codes for faces, but little is known about how the brain codes for the naturalistic expressions, and their intensities, that occur during real-life interactions. We collected intracranial recordings from epilepsy patient-participants who wore eye-tracking glasses that captured everything they saw on a moment-to-moment basis during hours of natural, unscripted interactions with friends, family, and experimenters. Face pose, identity, expressions, and motion were parameterized using computer vision, deep learning, face AI, and state-space models. Fixation-locked facial features and brain activity were related using a bidirectional model that maximized the correlation between them in a jointly learned latent neuro-perceptual space. The model predicted brain and face dynamics from each other accurately (d' of approximately 1.8, 2.47, and 1.02 for overall, between-identity, and within-identity comparisons, respectively). Reconstructed brain activity revealed an important role for the recently proposed putative social vision pathway, alongside traditional face areas in ventral temporal cortex. Probing the representational space for facial expression and motion revealed that a person's resting facial expression serves as an important anchor point and that neural populations were more sharply tuned to changes in expression than to changes in expression intensity. Lastly, the brain exhibited greater sensitivity to small changes from a person's resting face, such as a coy smile, than to comparable differences between a big and a slightly bigger smile, a potential analog of the Weber-Fechner law for facial expressions. Together, these results demonstrate that during real-world interactions, individual fixations on a person's face are coded within "oval"-shaped tuning spaces, with the narrow end of the oval anchored at the resting expression (the norm) and tuning broadening farther from that expression.
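To make the modeling approach concrete, the sketch below illustrates the general idea of a bidirectional, correlation-maximizing latent model relating neural and facial features, together with a d'-style matched-versus-mismatched evaluation. This is a minimal illustration only, assuming plain CCA on synthetic data; it is not the authors' implementation, which used intracranial recordings and deep/state-space face parameterizations, and all variable names and dimensions here are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a CCA-based bidirectional
# model relating fixation-locked neural activity (X) to face-feature vectors
# (Y) in a jointly learned latent space, evaluated with a d' that compares
# matched vs. mismatched fixation pairs. All data and dimensions are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_fix, n_elec, n_face, k = 600, 80, 40, 10  # fixations, channels, face params, latent dims

# Build synthetic data with shared latent structure so X and Y are correlated.
Z = rng.standard_normal((n_fix, k))
X = Z @ rng.standard_normal((k, n_elec)) + 0.5 * rng.standard_normal((n_fix, n_elec))
Y = Z @ rng.standard_normal((k, n_face)) + 0.5 * rng.standard_normal((n_fix, n_face))

# Learn the joint latent space on a training split, then embed held-out fixations.
train, test = np.arange(400), np.arange(400, n_fix)
cca = CCA(n_components=k)
cca.fit(X[train], Y[train])
Xl, Yl = cca.transform(X[test], Y[test])  # paired neural / face embeddings

def row_corr(a, b):
    """Pearson correlation between corresponding rows of a and b."""
    a = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    b = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return (a * b).mean(1)

# Matched: each fixation's neural embedding vs. its own face embedding.
# Mismatched: vs. a different fixation's face embedding (circular shift).
matched = row_corr(Xl, Yl)
mismatched = row_corr(Xl, np.roll(Yl, 1, axis=0))

# d' = separation between matched and mismatched similarity distributions.
d_prime = (matched.mean() - mismatched.mean()) / np.sqrt(
    0.5 * (matched.var() + mismatched.var()))
print(f"d' = {d_prime:.2f}")
```

Because CCA is symmetric in X and Y, the same fitted model supports prediction in both directions (brain from face, face from brain), which is the sense in which the abstract's model is bidirectional.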

Acknowledgements: NSF (1734907) and NIH (R01MH132225, R01MH107797)