Idiosyncratic Fixation Patterns Generalize Across Dynamic and Static Facial Expression Recognition

Poster Presentation 63.334: Wednesday, May 22, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Eye Movements: Perception, cognition and memory

Anita Paparelli1, Nayla Sokhn1, Lisa Stacchi1, Antoine Coutrot2, Anne-Raphaëlle Richoz1, Roberto Caldara1; 1Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland, 2Laboratoire d’Informatique en Image et Systèmes d’information, French Centre National de la Recherche Scientifique, University of Lyon, Lyon, France

Facial expression recognition (FER) is crucial for understanding the emotional state of others during social interactions. It has been assumed that all humans share universal visual sampling strategies to achieve this feat. While several recent studies have revealed striking idiosyncratic fixation patterns during face identification, very little is known about whether such idiosyncrasies extend to the recognition of static and of more ecologically valid dynamic facial expressions of emotion (FEEs). To this aim, we tracked observers’ eye movements while they categorized static and dynamic faces displaying the six basic FEEs, all normalized for presentation time (1 s), contrast, luminance, and overall sampled energy. We used robust data-driven analyses combining statistical fixation maps (iMap) with hidden Markov models of eye movements (EMHMM). Then, by dividing our subjects’ fixations into 12 conditions (2 visual modalities × 6 basic expressions), we assessed the generalizability of their EMHMM groupings across conditions. Incorporating both the spatial and temporal dimensions of eye movements provides powerful, well-suited measures for assessing the presence of reliable individual differences in face-scanning strategies. With these comprehensive statistical computational tools, our data revealed marked idiosyncratic fixation patterns. Interestingly, these individual visual sampling strategies generalized across the decoding of both static and dynamic FEEs. Moreover, the fixation patterns varied with the expression at hand. Importantly, altogether our data show that spatiotemporal idiosyncratic gaze strategies also occur for the biologically relevant recognition of emotions, further questioning the universality of this process.

Acknowledgements: This work was supported by funding from the Swiss National Science Foundation awarded to RC (10001C_201145).