Neural representations of dynamic facial expressions in rSTS reflect perceived emotional and social similarity
Poster Presentation 56.329: Tuesday, May 19, 2026, 2:45 – 6:45 pm, Banyan Breezeway
Session: Face and Body Perception: Neural mechanisms
Hilal Nizamoğlu1, Katharina Dobs1; 1Justus Liebig University Giessen
The right superior temporal sulcus (rSTS) is critically involved in processing dynamic social cues, yet how it encodes complex facial expressions remains unclear. Here, we investigated how well both stimulus-derived features and perceptual similarity judgments predict neural representations of dynamic facial expressions in this region. We recorded fMRI data while participants viewed 48 video clips (4 actors × 12 expressions) depicting emotional and conversational facial expressions. As model predictors, we used perceptual similarity judgments from a behavioral experiment and independent ratings of emotional (valence, arousal, affectiveness), social (communicativeness, friendliness), and motion-based properties for the same stimuli. Using representational similarity analysis (RSA), we tested whether these models could explain multivoxel response patterns in rSTS. Preliminary results (N=10) showed that rSTS activity patterns were best predicted by arousal, affectiveness, and communicativeness, and, critically, by behavioral similarity judgments. Among all predictors, only the behavioral model uniquely explained variance in rSTS, suggesting that this region encodes a perceptually grounded, multidimensional representation organized by social-emotional meaning. Our findings reveal an alignment between neural and perceptual representations of dynamic facial expressions, bridging behavior and brain.
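For readers unfamiliar with RSA, the sketch below illustrates the general logic of the analysis: build a neural representational dissimilarity matrix (RDM) from condition-by-voxel response patterns, then rank-correlate it with candidate model RDMs. This is a minimal, generic illustration of the technique; the simulated data, ROI size, and placeholder model RDMs are hypothetical and do not reproduce the study's actual stimuli, ratings, or pipeline.

```python
"""Minimal RSA sketch: compare a neural RDM to candidate model RDMs.
All data here are random placeholders, not the study's actual data."""
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_conditions = 48   # e.g., 4 actors x 12 expressions (per the abstract)
n_voxels = 200      # hypothetical ROI size

# Simulated condition-by-voxel response patterns (stand-in for rSTS data).
patterns = rng.normal(size=(n_conditions, n_voxels))

# Neural RDM: correlation distance (1 - Pearson r) between every pair
# of condition patterns, returned as a condensed vector of pair values.
neural_rdm = pdist(patterns, metric="correlation")

# Hypothetical model RDMs, e.g., derived from behavioral similarity
# judgments or stimulus ratings (here: random placeholders).
model_rdms = {
    "behavioral_similarity": rng.normal(size=neural_rdm.shape),
    "arousal": rng.normal(size=neural_rdm.shape),
}

# Compare each model RDM to the neural RDM with Spearman rank correlation,
# a common choice because RDM units are not assumed to be comparable.
for name, model_rdm in model_rdms.items():
    rho, p = spearmanr(neural_rdm, model_rdm)
    print(f"{name}: Spearman rho = {rho:.3f} (p = {p:.3f})")
```

In practice, the unique variance attributed to one model (as in the behavioral model here) is typically assessed with variance partitioning or partial correlations among the model RDMs, controlling each model for the others.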