V-VSS, June 1-2

Face Perception

Talk Session: Thursday, June 2, 2022, 6:30 – 8:00 pm EDT, Zoom Session


Talk 1, 6:30 pm, 85.61

Gaze behaviour as a visual cue to animacy

Colin Palmer1, Peter Kim1, Colin Clifford1; 1UNSW Sydney

A characteristic that distinguishes biological agents from inanimate objects is that the former can have a direction of attention. While it is natural to associate a person’s direction of attention with the appearance of their face, attentional behaviours are also a kind of relational motion, in which an entity rotates a specific axis of its form in relation to an independent feature of its environment. Here, we examined whether gaze-like motion behaviours provide a visual cue to animacy independent of the human form. We generated animations in which the rotation of a geometric object (the agent) was dependent on the movement of a target. Participants made judgements about how creature-like the objects appeared, and these judgements were highly sensitive to the correspondence between objects over and above their individual motion. We varied the dependency between agent rotation and target motion in terms of temporal synchrony, temporal order, cross-correlation, and the complexity of their shared trajectory. These manipulations affected the perceived animacy of the agent to differing extents. When the behaviour of the agent was driven by a model of predictive tracking that incorporates a sensory sampling delay, perceived animacy was broadly tuned across changes in rotational behaviour induced by the sampling delay of the agent. Overall, the tracking relationship provides a salient cue to animacy independent of biological form, provided that temporal synchrony between objects is within a certain range. This motion relationship may be one to which the human visual system is highly attuned, due to its association with attentional behaviour and the presence of other minds in our environment.

Acknowledgements: This work was supported by an Australian Research Council Discovery Project grant (DP200100003). CP is also supported by an Australian Research Council Discovery Early Career Researcher Award (DE190100459).

Talk 2, 6:45 pm, 85.62

Integrating faces and bodies in social trait perception

Ying Hu1,2, Alice O'Toole3; 1State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China, 2University of Chinese Academy of Sciences, Beijing, China, 3The University of Texas at Dallas, USA

Faces and bodies spontaneously elicit social trait judgments such as trustworthiness and laziness (Oosterhof & Todorov, 2008; Hu et al., 2018). We examined how first impressions formed by viewing the face and body contribute to the overall impression formed by seeing the whole person (Experiment 1), and how seeing the whole person affects first impressions of the face and body (Experiment 2). First, participants assigned personality traits to images of faces, bodies, and whole persons. Multivariate analyses (correspondence analysis and linear regression) showed that the relative contribution of faces and bodies to whole-person perception depended on the specific trait being judged. Within the Big Five framework, faces primarily informed judgments about traits related to agreeableness (e.g., warm, aggressive; 48% explained variance, EV); bodies informed conscientiousness traits (e.g., dependable, careless; 23% EV); and whole persons informed extraversion traits (e.g., dominant, quiet; 15% EV). A control experiment showed that both clothing and body shape contribute to whole-person first impressions. These results highlight the need to understand face and body perception in the context of the whole person. Second, participants rated the personalities of the same faces (or bodies) in isolation and, subsequently, in the whole-person context. When the trait ratings assigned to the face and body differed, the ratings of a contextualized face (body) were biased towards the ratings of the body (face) in the whole-person context (p < .001 for both face and body), supporting the face-body integration theory (Hu et al., 2020). Finally, to understand face-body integration in trait perception, we propose a framework that incorporates the processes of visual perception, stereotyping, and trait inference/integration with predictive factors (raters, ratees, and situations). This study offers a first investigation of the relationship among trait perceptions of faces, bodies, and whole persons and lays the groundwork for understanding trait inferences in person perception.

Talk 3, 7:00 pm, 85.63

Race categories modulated the perceived lightness of faces

Linlin Yan1, Yiwen Zhu1, Yang Shen1, Yajie Liang1, Zhe Wang1, Yu-Hao P. Sun1, Naiqi Xiao2; 1Zhejiang Sci-Tech University, 2McMaster University

The perception of lightness is context-dependent, as revealed by the influence of face race on perceived face lightness: participants perceive Black faces as darker than White faces. However, some findings indicated that this distorted lightness perception remained even when face-race information was undetectable. This discrepancy challenges the role of race categories: is the distorted lightness perception induced by knowledge about face race? To address this question, we recruited 123 Asian participants, who saw two rapidly and sequentially presented faces (500 ms). The two faces within each trial were of the same race: Black or White. While the luminance of the first face was matched across the Black-face and White-face trials, the luminance of the second face was decreased or increased by 4 levels (-20, -12, -8, -4, +4, +8, +12, or +20). Participants were asked to report whether the second face was lighter or darker than the first face. To examine the role of face-race knowledge, as opposed to low-level perceptual cues, we further manipulated the orientation of the two faces across participants: Upright-Upright, Inverted-Inverted, Upright-Inverted, and Inverted-Upright. Overall, Black faces were perceived as significantly darker than White faces, even when their luminance was matched (p < 0.05). Moreover, this face-race distortion effect was found only when faces were upright, not when they were inverted. Lastly, the distortion effect was found only when the first face was upright, regardless of the orientation of the second face. These findings of orientation specificity suggest that the effect was not due to low-level perceptual cues but was driven by conceptual knowledge of face race. Together, these findings indicate a role for race categories in the perception of lightness and highlight a special mechanism whereby face-race conceptual information modulates the perception of low-level facial information, such as lightness.
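As a concrete illustration, the factorial stimulus design described in this abstract (face race × luminance offset, with orientation varied between participants) can be sketched as a trial list. This is a minimal sketch: the function and variable names are illustrative assumptions, not the authors' code.

```python
import itertools

# Luminance offsets applied to the second face (from the abstract: four
# decrements and four increments, no zero offset).
OFFSETS = [-20, -12, -8, -4, +4, +8, +12, +20]
RACES = ["Black", "White"]
# Between-participant orientation conditions for the (first, second) face.
ORIENTATIONS = [("upright", "upright"), ("inverted", "inverted"),
                ("upright", "inverted"), ("inverted", "upright")]

def trial_types(orientation):
    """Cross face race with luminance offset for one orientation group."""
    first_ori, second_ori = orientation
    return [{"race": race, "offset": offset,
             "first_orientation": first_ori, "second_orientation": second_ori}
            for race, offset in itertools.product(RACES, OFFSETS)]

trials = trial_types(ORIENTATIONS[0])  # 2 races x 8 offsets = 16 trial types
```

Each participant would see one orientation condition, with the lighter/darker judgment collected for every race × offset combination.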

Acknowledgements: This research was supported by grants from the Fundamental Research Funds of Zhejiang Sci-Tech University (2019Q075) and Zhejiang Provincial Natural Science Foundation of China (LY20C090010, LY19C090006).

Talk 4, 7:15 pm, 85.64

Happy face advantage disappears in Chinese context: The constancy of holistic processing for emotional faces

Dongyan Ren1, Guomei Zhou1; 1Sun Yat-sen University

It is well documented that facial identity is processed in a holistic manner. Chen & Cheung (2021) recently found that happy faces evoked stronger holistic face processing than angry faces in a Caucasian context. Here we adopted the complete composite face paradigm to investigate how facial emotion influences the holistic processing of facial identity in a Chinese context. In Experiment 1, we used Chinese faces with happy, neutral, and surprised expressions. Participants judged whether the top halves of two successively presented composite faces were the same or different; facial emotion was always consistent within a trial. We found equal composite face effects (CFE) for the three emotional faces. In Experiment 2, either the top halves or the bottom halves were judged, and angry, happy, fearful, and neutral faces were compared. Results showed that although judging the top half induced a larger CFE than judging the bottom half, the CFE was equal among the four emotional faces. In Experiment 3, angry and happy faces with high or low emotional intensity were further compared. We again found equal CFEs for angry and happy faces, not modulated by emotional intensity. These results indicate that the holistic processing of facial identity is robust and constant regardless of emotion category and emotion intensity in a Chinese context. The relation between cultural differences and the holistic processing of emotional faces should be considered in future work.

Acknowledgements: This research was supported by the National Natural Science Foundation of China (31771208)

Talk 5, 7:30 pm, 85.65

My group is more important than yours in the Cheerleader Effect of facial attractiveness perception

Ruoying Zheng1, Guomei Zhou1; 1Sun Yat-sen University, Guangzhou, China

Our judgment of a target is influenced by its context, and so it is for judgments of facial attractiveness. The current research explored how, when multiple social groups are present in the context, faces of different social groups affect the attractiveness of an individual face, resulting in varying degrees of the cheerleader effect. We presented the target face either alone or in a context containing two social groups, the target face's own group and the other group, and manipulated the context across five conditions: HOHT (high-attractive own group and high-attractive other group), HOLT (high-attractive own group and low-attractive other group), LOHT (low-attractive own group and high-attractive other group), LOLT (low-attractive own group and low-attractive other group), and ALONE (no surrounding faces). Target faces were nine faces of continuously increasing attractiveness. We used Black female faces and White female faces as the two social groups in Experiments 1a and 2a, and Asian female faces and White female faces in Experiments 1b and 2b. The task in Experiment 1 was to judge whether the target face in each context was attractive or unattractive; the task in Experiment 2 was to rate the attractiveness of the target face in each context. Both experiments showed a contrast effect: the attractiveness of target faces increased significantly in LOLT, with ALONE as the baseline. In addition, Experiment 2 showed that the attractiveness increment of the target face in LOHT and HOLT was also significant, with the effect in LOLT greater than that in LOHT, which in turn was greater than that in HOLT. Our results indicate that low-attractive surrounding faces increase the attractiveness of a target face, and that the weight of the own group is greater than that of the other group.

Acknowledgements: This research was supported by the grant from the National Natural Science Foundation of China (32071048) to Guomei Zhou.

Talk 6, 7:45 pm, 85.66

Synthetic Faces Are More Trustworthy Than Real Faces

Sophie Nightingale1, Hany Farid2; 1Lancaster University, 2University of California, Berkeley

The photorealism of synthetic media (deepfakes) continues to amaze and entertain, as well as alarm those concerned about abuses in the form of non-consensual pornography, fraud, and disinformation campaigns. We have previously shown that synthetic faces are visually indistinguishable from real faces. Because faces elicit implicit inferences about traits such as trustworthiness within just milliseconds, we wondered whether synthetic and real faces elicit different trustworthiness responses. We synthesized 400 faces using StyleGAN2, ensuring diversity across gender, age, and race. A convolutional neural network descriptor was used to extract a perceptually meaningful representation of each face, from which a matching real face was selected from the Flickr-Faces-HQ dataset. Mechanical Turk participants (N=223) read a brief introduction explaining that the purpose of the study was to assess face trustworthiness on a scale of 1 (very untrustworthy) to 7 (very trustworthy). Each participant then saw 128 faces, one at a time, and rated their trustworthiness, with an unlimited amount of time to respond. The average trustworthiness rating for synthetic faces, 4.82, was higher than the rating of 4.48 for real faces. Although synthetic faces were only 7.7% more trustworthy, this difference is significant (t(222) = 14.6, p < 0.001, d = 0.49). Although a small effect, Black faces were rated as more trustworthy than South Asian faces, but otherwise there was no effect of race. Women were rated as significantly more trustworthy than men, 4.94 as compared to 4.36 (t(222) = 19.5, p < 0.001, d = 0.82). Synthetically generated faces are not just photorealistic; they are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces, which themselves are deemed more trustworthy. Regardless of the underlying reason, and ready or not, synthetically generated faces have emerged on the other side of the uncanny valley.
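The matching step in this abstract, selecting, for each synthetic face, the most similar real face in a descriptor space, is a nearest-neighbor search over embeddings. The sketch below illustrates that step only; the cosine-similarity metric and all function and variable names are illustrative assumptions, since the abstract specifies only that a convolutional neural network descriptor was used.

```python
import numpy as np

def match_real_faces(synthetic_emb, real_emb):
    """For each synthetic-face embedding (one row), return the index of the
    most similar real-face embedding, using cosine similarity."""
    # Normalize rows to unit length so dot products equal cosine similarities.
    s = synthetic_emb / np.linalg.norm(synthetic_emb, axis=1, keepdims=True)
    r = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
    sim = s @ r.T              # (n_synthetic, n_real) similarity matrix
    return sim.argmax(axis=1)  # index of the best-matching real face

# Toy usage with random 128-D embeddings standing in for CNN descriptors.
rng = np.random.default_rng(0)
matches = match_real_faces(rng.normal(size=(5, 128)),
                           rng.normal(size=(50, 128)))
```

In practice the real-face embeddings would come from a large dataset such as Flickr-Faces-HQ, and each matched real face would then serve as the paired stimulus for its synthetic counterpart.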