Face Perception: Disorders, individual differences, and social cognition
Talk Session: Saturday, May 20, 2023, 8:15 – 9:45 am, Talk Room 1
Moderator: Brad Duchaine, Dartmouth
Talk 1, 8:15 am, 21.11
Strong modulation of face distortions in prosopometamorphopsia by color
Antônio Mello1, Daniel Stehr1, Krzysztof Bujarski1, Viola Störmer1, Brad Duchaine1; 1Dartmouth College
Prosopometamorphopsia (PMO) is a rare perceptual condition in which people see distortions to faces. Individuals with PMO often find the distortions disturbing, so developing interventions that reduce them would be valuable. Here, we report the case of VS, a 58-year-old man with full-face PMO who sees remarkably similar distortions on every face he encounters in daily life. These distortions include severely stretched eyes, ears, and mouths, widened noses, three deep grooves on the cheeks, two on the forehead, and one shallow groove on the chin. Two characteristics of VS’s case make his PMO unique: he does not see distortions on normally colored faces on screens or paper, and his distortions are strongly modulated by color. To visualize his distortions, we asked VS to compare simultaneously presented real-world faces and photos of them taken under the same conditions. Using image editing software and VS’s real-time feedback, we produced the first photorealistic visualizations of PMO distortions. To measure the effect of color on his distortions, VS described and rated, on a scale from zero (“no distortion”) to 10 (“maximum distortion”), what he saw when looking at real-world stimuli with and without Roscolux colored plastic filters in front of his eyes. The intensity and nature of the face distortions were not affected by view (frontal, profile), orientation (0°, 90°, 180°, 270°), or the proportion of the face visible. Distortions were strongly and consistently affected, however, by color, with green filters decreasing (median = 1.50) and red filters increasing (median = 7.00) the distortions relative to the no-filter baseline (median = 5.00). Aside from paragraphs and grid patterns, no other stimuli were perceived as distorted. The results demonstrate that color filters can substantially reduce face distortions in PMO and suggest that color may play a role in the conscious perception of face shape.
Talk 2, 8:30 am, 21.12
Tracking the emergence of hyperfamiliarity for faces: Late covert discrimination followed by hyperfamiliarity due to disrupted post-perceptual processes
Marie-Luise Kieseler1, Katie Fisher2, Rebecca Nako2, Kira Noad3, David Watson3, Timothy Andrews3, Martin Eimer2, Brad Duchaine1; 1Dartmouth College, 2Birkbeck College, University of London, 3University of York
Nell is a 49-year-old woman who had a severe migraine in August 2020. Since then, every face she has looked at has felt familiar. In an old-new test, she performed perfectly with old faces but miscategorized 63% of the new faces. She scored at chance when selecting the celebrity from face pairs consisting of a celebrity and the celebrity’s doppelganger. When presented with 318 famous and 314 non-famous faces during an EEG experiment, Nell categorized every face as “probably familiar” or “definitively familiar”. However, like typical participants, her N250 and P600 were stronger for famous than non-famous faces. While judging whether two sequentially presented faces showed the same person, Nell inaccurately reported that similar-looking different-identity pairs matched on 91% of trials, yet the N250r over the left hemisphere distinguished between same-identity and different-identity pairs. Nell’s ERP results indicate that her visual identity face matching is intact and that her hyperfamiliarity arises at post-perceptual stages. To identify neural correlates underlying Nell’s hyperfamiliarity, she participated in an fMRI experiment in which she viewed a compilation of scenes from Game of Thrones (GoT). Nell had not watched GoT before, but every face felt familiar to her. Nell’s results were compared with control groups who had (N=23) or had not (N=22) previously watched GoT. Bilateral regions of anterior temporal cortex and hippocampus showed significantly greater inter-subject correlations between Nell and the familiar group than between Nell and the unfamiliar group. Functional connectivity between face-selective areas in Nell was more strongly correlated with the connectivity of the familiar than the unfamiliar group.
Together, Nell’s results indicate that 1) covert discrimination between familiar and unfamiliar faces occurs in hyperfamiliarity, 2) early (N250) and even late discrimination (P600) between familiar and unfamiliar faces can fail to reach awareness, and 3) post-perceptual processes modulate information about familiarity in face-selective areas.
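For readers unfamiliar with the inter-subject correlation (ISC) measure used in the GoT experiment, a minimal sketch follows. The function name and the simple group-average reference signal are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def intersubject_correlation(subject_ts, group_ts):
    """
    Pearson correlation between one subject's regional time course and
    the average time course of a reference group (a common ISC scheme).
    subject_ts: (n_timepoints,), group_ts: (n_subjects, n_timepoints)
    """
    reference = group_ts.mean(axis=0)  # group-average reference signal
    return np.corrcoef(subject_ts, reference)[0, 1]
```

Comparing `intersubject_correlation(nell_ts, familiar_group)` against `intersubject_correlation(nell_ts, unfamiliar_group)` in each region is the kind of contrast the abstract describes.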
Talk 3, 8:45 am, 21.13
Weaker face recognition in adults with autism arises from perceptually based alterations
Marissa Hartston1, Yoni Pertzov2, Galia Avidan3, Bat-Sheva Hadad1; 1University of Haifa, 2The Hebrew University of Jerusalem, 3Ben-Gurion University of the Negev
Face recognition has been shown to be impaired in autism spectrum disorder (ASD). However, it is still debated whether face-processing deficits in autism arise from perceptually based alterations. We tested individuals with ASD and matched typically developing (TD) individuals using a delayed estimation task in which a single target face was shown either upright or inverted. Participants selected the face that best resembled the target from a cyclic space of morphed faces. To disentangle visual from mnemonic processing, reports were required either following a 1- or 6-second retention interval, or simultaneously, while the target face was still visible. Individuals with ASD made significantly more errors than TD individuals in both the simultaneous and delayed conditions, indicating that face recognition deficits in autism are also perceptual rather than strictly memory based. Moreover, individuals with ASD exhibited a weaker inversion effect than TD individuals at all retention intervals. This finding, which was mostly evident in precision errors, suggests that, in contrast to the more precise representations of upright faces in TD individuals, individuals with ASD exhibit similar levels of precision for both inverted and upright faces. These results suggest that the weakened memory for faces reported in ASD may be secondary to an underlying deficit in face processing.
Acknowledgements: This research was funded by the Israel Science Foundation (ISF), grant #882/19 to BH.
Talk 4, 9:00 am, 21.14
In the face of diversity: Face ethnicity influences the use of face features for social trait perception
Valentina Gosetti1, Laura B. Hensel1, Robin A. A. Ince1, Oliver G. B. Garrod1, Philippe G. Schyns1, Rachael E. Jack1; 1University of Glasgow
Psychological science is constrained by a lack of diversity (e.g. Cook & Over, 2021). One notable example is social trait face perception research (e.g., trustworthiness, dominance; e.g., Oosterhof & Todorov, 2008) which is based primarily on White faces (but see e.g., Sutherland et al., 2018). Recent work suggests that face ethnicity influences these social judgements (e.g., Freeman & Johnson, 2016; Xie et al., 2021) though it remains unknown how this affects the causal features that subtend social perception. To examine this, we modelled the 3D face features (shape/complexion) that drive perceptions of trustworthiness and dominance in Black, East Asian, and White faces using reverse-correlation (Zhan et al., 2019). In a between-subjects design, we generated 2400 face identities per face ethnicity by adding randomly sampled principal components representing individual identity variance to an average face (Black, East Asian, or White; gender-balanced). Participants (N=60, White Western, gender-balanced) rated each face on trustworthiness and dominance (e.g., very submissive-very dominant) in separate tasks. To model the 3D face features that drive these perceptions, we linearly regressed the stimulus features presented on each trial with each participant’s responses. We then compared the resulting 3D face models across face ethnicity (N=20 per ethnicity, social trait, sex of face) using a combination of data-reduction and machine learning techniques. Results revealed that social trait perception is driven by a core set of facial features (e.g., affect-related cues: frowning/smiling) plus ethnicity-specific variations, including the mouth in Black faces and the eyes in East Asian faces. Our results provide new insights into how demographic facial cues influence social trait perception with direct implications for current theoretical accounts, highlighting the importance of diversity in psychological science (Jack et al., 2018).
Acknowledgements: We thank the Engineering and Physical Sciences Research Council [EP/T021136/1] awarded to LBH; the Wellcome Trust [Senior Investigator Award, UK; 107802] awarded to PGS; and the European Research Council under the European Union’s Horizon 2020 research and innovation program [759796] awarded to REJ.
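The reverse-correlation step described above, linearly regressing trial-wise stimulus features against a participant's ratings, can be sketched as follows. This is a simplified illustration (the function name and ordinary least squares are assumptions; the authors' full pipeline also involves 3D face models, data reduction, and machine learning):

```python
import numpy as np

def reverse_correlate(stim_features, responses):
    """
    Estimate per-feature weights linking randomly sampled face features
    to a participant's trait ratings via least-squares regression.
    stim_features: (n_trials, n_features), responses: (n_trials,)
    """
    X = np.column_stack([np.ones(len(responses)), stim_features])  # add intercept
    beta, *_ = np.linalg.lstsq(X, responses, rcond=None)
    return beta[1:]  # drop intercept; one weight per face feature
```

The resulting weight vector indicates which randomly varied features (shape/complexion components) pushed ratings toward, say, "very dominant".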
Talk 5, 9:15 am, 21.15
The face of mischief: A stereotyped signal of norm violation within a Magic Circle
Loren Matelsky1, Hong B. Nguyen1, Colleen Macklin1, Benjamin van Buren1; 1The New School for Social Research
To navigate the social world, we must recognize and communicate about rules and norms through a variety of channels, including language, gestures, and facial expressions. This is quite a feat, because social rules are often highly intricate. For example, during play, a general norm (e.g. not hitting one another) may not apply within a specific context (e.g. a pillow fight) — a concept known in game studies as the ‘magic circle’. Could the presence of such a hierarchically embedded rule system be communicated by a single facial expression? In other words, is there a ‘face of mischief’? In Study 1, we used a reverse correlation approach to determine whether there is a stereotyped facial expression signaling mischievous intent. Subjects viewed pairs of faces with opposite noise patterns superimposed, and reported which of the two faces looked more like someone plotting to do something mischievous. The average of their selected faces had an expression which looked distinctly mischievous, and this was confirmed by an independent sample of raters. In Study 2, we investigated whether the face of mischief signals the intent to violate a norm within the bounds of a magic circle. Each subject read one of three types of social scenarios — Magic Circle + Harm (e.g. having a pillow fight); Magic Circle + No Harm (e.g. building a pillow fort); or No Magic Circle + Harm (e.g. stealing pillows from a hotel) — and for several pairs of faces, judged which one better matched the described behavior. An independent group of observers rated the average selected face for the Magic Circle + Harm scenarios as much more mischievous than the averages for the other scenarios. These results show that there is a distinct face of mischief, and that this expression communicates nuanced meaning about playful norm violations.
Talk 6, 9:30 am, 21.16
The spatiotemporal dynamics of social scene perception in the human brain
Emalie McMahon1, Taylor Abel2, Jorge Gonzalez-Martinez2, Michael F. Bonner1, Avniel Ghuman2, Leyla Isik1; 1Johns Hopkins University, 2University of Pittsburgh
Social perception is an important part of everyday life that develops early and is shared with non-human primates. To understand the spatiotemporal dynamics of naturalistic social perception in the human brain, we first curated a dataset of 250 video clips (500 ms each) of two people performing everyday actions. We densely labeled these videos with features of the visual social scene, including scene and object features, visual social primitives, and higher-level social/affective features. To investigate when and where these features are represented in the brain, patients with implanted stereoelectroencephalography electrodes viewed the videos. We used time-resolved encoding models in individual channels to investigate the time course of representations across the human brain. We find that an encoding model based on all of our social scene features predicts responses in a subset of channels around 400 ms after video onset. The channels best predicted by the social scene model are mostly non-overlapping with those best predicted by a model of early visual responses (the second convolutional layer of an ImageNet-trained AlexNet). Future analyses will investigate when and where individual features of the social scene model predict neural responses, and how these interact with visually selective channels to extract high-level social information from visual input.
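The time-resolved encoding approach described here amounts to fitting one regression per time point, mapping stimulus features to a channel's response. Below is a minimal sketch (ridge regression, the function names, and the regularization strength are assumptions, not the authors' exact method):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ y)

def time_resolved_encoding(features, neural, alpha=1.0):
    """
    Fit one encoding model per time point for a single channel.
    features: (n_videos, n_features) social-scene annotations
    neural:   (n_videos, n_timepoints) channel responses per video
    Returns a (n_timepoints, n_features) weight matrix.
    """
    return np.stack([ridge_fit(features, neural[:, t], alpha)
                     for t in range(neural.shape[1])])
```

Evaluating held-out prediction accuracy of such models at each time point is what lets one say a feature set "predicts responses around 400 ms after video onset".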