Talk 1, 2:30 pm
Perceived age is distorted in visual memory: A phenomenon of “forward” and “backward” aging for faces
When we meet someone, we quickly make judgments about them based on how old they look (e.g., about their physical abilities, cognitive abilities, and personality traits). But how is a person’s age represented in the mind in the first place? Do we remember certain people as younger, or as older, than they actually were? One possibility is that representations of facial age exhibit ‘representational momentum’, such that observers remember a face as older than it actually was. Another possibility is that our memory for facial age is biased towards the average of the faces that we have seen previously, in which case observers might misremember faces as closer to middle age. To explore these possibilities, we ran three experiments which tested participants’ memory for the age of a briefly presented face. Participants saw a target face which was either young (30 years old) or old (60 years old). Subsequently, they saw two new decoy faces – one 10 years younger and another 10 years older than the target – and selected the face that matched the target. Contrary to our initial predictions, we did not find a bias to remember faces as older, or as closer to middle age. Instead, a distinct pattern emerged: observers were biased to remember young targets as younger (i.e., ‘backward aging’) and old targets as older (i.e., ‘forward aging’). Remarkably, these biases held across the sexes (male, female) and races (Asian, Black, White) of the target face, across artificially-aged and real faces, and regardless of the observers’ own age. Further, the results persisted even when the decoys’ identities differed from that of the target face, suggesting that this bias operates over abstract representations of age. Thus, social categories of ‘young’ and ‘old’ shape and distort our visual memories of faces.
Talk 2, 2:45 pm
Face-specific identification impairments following sight-providing treatment may be alleviated by an initial period of low visual acuity
Sharon Gilad-Gutnick1, Fengping Hu2, Kirsten A. Dalrymple3, Priti Gupta4, Pragya Shah5, Chetan Ralekar1, Dhun Verma6, Kashish Tiwari7, Piyush Swami8, Suma Ganesh6, Umang Mathur6, Pawan Sinha1; 1Massachusetts Institute of Technology, 2New York University, 3HealthPartners Institute, 4Institute of Technology, New Delhi, 5Institute of Human Behavior and Allied Sciences, New Delhi, 6Dr. Shroff’s Charity Eye Hospital, New Delhi, 7Dr. Rajendra Prasad Center for Ophthalmology, All India Institute of Medical Sciences, New Delhi, 8Technical University of Denmark
The ability to identify individual faces is critical for social and cognitive functioning, and as such, the human brain has evolved to perform this task quickly and accurately. However, many questions remain about how this skill emerges in early development, and specifically about how early visual experience impacts skill acquisition later in life. In our previously published work, we proposed that the poor visual acuity that newborns experience in the first year of life may play a facilitatory role in scaffolding the processes needed to develop face identification later in life. Indeed, our computational simulations demonstrated the potential downsides of ‘high initial acuity’ for the development of face identification. Motivated by this, we predicted that children who are treated for congenital cataracts late in life, and who therefore begin their visual journey with higher-than-newborn acuity, will exhibit persistent impairments in face- but not object-identification. We tested this prediction by assessing the development of face-identification skill in three subject groups: children treated for congenital cataracts whose pre-treatment visual acuity was worse than that of a newborn, those treated for congenital cataracts whose pre-treatment visual acuity was better than that of a newborn, and age-matched controls. As predicted, we found that children with pre-operative acuity worse than a newborn’s did not show any improvement on the face-identification tasks despite years of visual experience, even though they improved on the object-identification tasks. In contrast, those with pre-treatment acuity better than a newborn’s showed improvements on both the object- and face-identification tasks.
Overall, our data are consistent with the idea that beginning one’s visual journey with a period of low-resolution visual input followed by high-resolution input can be facilitatory for acquiring face-identification skill later in life, whereas high-resolution input right at the outset of vision can be detrimental to face- but not object-identification.
Acknowledgements: NIH R01EY020517
Talk 3, 3:00 pm
What’s left in face processing? Evidence from hemispheric differences in developmental prosopagnosia
Alison Campbell1,2, Xian Li1,4, Michael Esterman1,3, Joseph DeGutis1,4; 1Boston Attention and Learning Laboratory, VA Boston Healthcare System, Boston, MA, 2Department of Psychiatry, Boston University Chobanian and Avedisian School of Medicine, Boston MA, 3National Center for PTSD, VA Boston Healthcare System, Boston, MA, 4Department of Psychiatry, Harvard Medical School, Boston MA
According to classic models, face processing is right-lateralized with little involvement of the left hemisphere. This is challenged by fMRI findings that developmental prosopagnosics (DPs) consistently show reduced face-selective responses in the left OFA/FFA and less consistent differences in the right OFA/FFA. To account for this, we hypothesized that right hemisphere regions primarily subserve processes for face perception and, as perceptual ability is highly variable in DP, we predicted that right-sided regions would only be implicated in those with greater perceptual impairment. In our sample, DPs with low performance (>1 SD below controls) on at least two perceptual tests (Benton Face Recognition Test, Cambridge Face Perception Test, and face matching) were classified as perceptually impaired (N=17) and the remaining were classified as perceptually unimpaired (N=18). No controls were impaired (N=22). Using a face localizer (Faces>Objects), we found that perceptually-impaired DPs had reduced face-selective activation in both the left and right OFA, whereas perceptually-unimpaired DPs had reduced activation only in the left OFA. Both groups had reduced activation in the left but not the right FFA. Furthermore, resting-state functional connectivity between the left and right OFA was significantly reduced in perceptually-impaired but not perceptually-unimpaired DPs, consistent with neural abnormalities spanning both hemispheres in the presence of perceptual deficits. The results support the hypothesis that right hemisphere abnormalities (especially in the OFA) reflect a perceptual processing deficit that is variable in DP, which explains why not all studies observe right hemisphere differences at the group level. Critically, our results suggest that left hemisphere abnormalities are common to all DPs.
Although future work is needed to clarify the functional roles of the left OFA/FFA, their implication in DP suggests that they are essential for normal face recognition and are required for a complete neural model of face processing.
Acknowledgements: This work was supported by a grant to JD from the National Eye Institute (R01 EY032510-02).
Talk 4, 3:15 pm
Comparing face viewpoint, expression and identity selectivity in fMRI-defined face patches of macaque frontal cortex
Eline Mergan1,2, Qi Zhu1,2,3, Wim Vanduffel1,2,4,5; 1KU Leuven, 2Leuven Brain Institute, 3CEA DRF/JOLIOT/NEUROSPIN, Univ. Paris-Saclay, 4Radiology, Harvard Medical School, 5Martinos Centre for Biomedical Imaging
Perceiving and interpreting facial information such as identity, expression, and head orientation is essential for primates, as these features provide important social communication cues. To gain insight into the neural mechanisms processing these facial features in prefrontal cortex, we conducted single- and multi-unit recordings in four fMRI-defined face patches in three macaques: POa and POp in orbitofrontal cortex and PA in ventrolateral prefrontal cortex, as well as face patch AM in anterior inferotemporal cortex. In each face patch, we found face-selective neurons tuned to identity, expression, and head orientation. A large fraction of these neurons was sensitive to head orientation irrespective of identity or expression. While face-selective neurons preferentially tuned to expressions were mostly present in POa, most face-selective neurons tuned to identity resided in AM. Surprisingly, not only face-selective but also non-face-selective neurons carried similar information about these facial features. Most neurons within each face patch exhibited visual response latencies that were comparable across face features. At the population level, visual response latencies for faces were similar (~70 ms) in the orbitofrontal (POa and POp) and anterior IT (AM) face patches. In prefrontal face patch PA, however, most cells responded much faster to faces, with latencies as short as 30 ms. Neurons generally exhibited the fastest face-selective responses (faces contrasted with objects), followed by selectivity to head orientation, and later still to different expressions and identities. While the range of visual response latencies in each face patch was relatively small (interquartile range (IQR): 30–80 ms), the latencies of selective responses to the various face features varied considerably (IQR: 80–190 ms).
These findings reveal complex prefrontal face-processing signals potentially involving multiple and parallel feedback loops with different areas, prompting a reconsideration of the role of the face-processing system in representing face viewpoint, expression and identity.
Acknowledgements: FWO G0C1920N, The European Union's Horizon 2020 Framework Program for Research and Innovation under Grant Agreement No 945539 (Human Brain Project SGA3), KU Leuven C14/21/111, CEA PE bottom up 2020 (20P28), ANR-20-CE37-0005
Talk 5, 3:30 pm
From Divergence to Convergence: A Model-Guided Synthesis of Findings in the Human and Macaque Face Processing Networks
Fernando M. Ramirez1,2; 1Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, 2Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health
Recognizing faces regardless of viewpoint is critical for social interactions. Evidence from single-neuron electrophysiological recordings in macaques suggests a three-step architecture, revealing a sharp transition from a strictly view-tuned representation in the macaque middle-lateral/middle-fundus (ML/MF) face patches to a mirror-symmetric representation in the anterior-lateral (AL) face patch, before achieving viewpoint invariance in the anterior-medial (AM) face patch at the highest level of the hierarchy. However, human studies combining functional magnetic resonance imaging (fMRI) and Representational Similarity Analysis (RSA) have led to divergent conclusions in all core face-selective areas, including the Fusiform Face Area (FFA). This makes it hard to relate observations within and across species. We previously proposed a geometric configuration in multivariate space that accounts for divergent observations in human FFA. Here, by considering the impact on RSA of signal imbalances across conditions and of measurement scale, we show that this geometric configuration is compatible with observations in macaque area ML/MF, but not AL. Our account shows that key assumptions of RSA sometimes break down. Specifically, we show that inferences about neuronal coding with RSA are influenced by translation and rotation of the data. We also show that abstracting away from the measurement process and relying directly on the rank order of entries of dissimilarity matrices to relate representations across species and techniques leads to errors when marked signal imbalances are present across conditions. We demonstrate with biologically motivated network models, forward models, as well as previously published empirical fMRI data and single-cell monkey electrophysiological recordings that it is necessary to consider details of the measurement process to validly relate measurements across species and techniques.
These findings suggest limitations in RSA, urging a nuanced approach for cross-species comparisons, and support the idea that human FFA is view-tuned like macaque area ML/MF, rather than mirror-symmetrically tuned like area AL.
Acknowledgements: This work was supported by the Intramural Research Program at NIMH (ZIAMH002783)
Talk 6, 3:45 pm
Intracerebral recordings evidence that unfamiliar face-identity recognition is supported by face-selective neural populations in the human ventral occipito-temporal cortex
Simen Hagen1, Corentin Jacques1, Louis Maillard1,2, Sophie Colnat-Coulbois1,3, Jacques Jonas1,2, Bruno Rossion1,2; 1Université de Lorraine, CNRS, Nancy, France, 2Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France, 3Université de Lorraine, CHRU-Nancy, Service de Neurochirurgie, Nancy, France
In humans, the recognition of a visual stimulus as a face – generic face recognition (GFR) – and of its specific identity – face identity recognition (FIR) – are intricately linked, and both functions are supported by specialized neural regions in the human ventral occipito-temporal cortex (VOTC). However, whether they are instantiated by the same or different neural populations remains unclear. On the one hand, FIR could rely largely on “different” neural populations that receive input from neural populations involved in GFR. On the other hand, FIR could rely on “shared” neural populations that support both functions, potentially at different time scales. Here, we directly compared the spatio-temporal profiles of the two recognition functions in a large group of epileptic patients (N=109) implanted with intracerebral electrodes in the gray matter of the VOTC. Neural activity reflecting both GFR (i.e., significantly different responses to faces vs. non-face visual objects; Jonas et al., 2016) and FIR (i.e., significantly different responses to different unfamiliar face identities; Jacques et al., 2020) was isolated with separate frequency-tagging protocols within patients. This approach provides an objective measure of the two recognition functions, parcels out general visual responses, and offers high spatial and temporal resolution. Across all the significant FIR recording contacts, we found that ~85% also showed significant GFR responses (i.e., were face-selective). This high spatial overlap was found along the posterior-anterior axis and within all core face regions. Moreover, in the overlapping contacts, the amplitudes of the two functions correlated (r>.8), and the temporal onsets of the GFR and FIR responses were strikingly similar regardless of posterior-anterior location, although the FIR amplitudes showed a relatively slower build-up.
Overall, this original dataset suggests that unfamiliar FIR is essentially supported by face-selective neural populations in the human VOTC, with GFR signals potentially transmitted faster than FIR signals.
Acknowledgements: Funded by: ANR IGBDEV ANR-22-CE28-0028; ERC AdG HUMANFACE 101055175
Talk 7, 4:00 pm
The Hidden Details: Effects of Partial Occlusion on Response Dynamics in the Primate Inferotemporal Cortex
The primate brain can recognize objects even when they are partially concealed by occluders. To investigate the effect of occlusion on the temporal dynamics of neuronal responses, we conducted experiments in two male macaques, recording single units in body-responsive regions of the posterior and anterior inferotemporal cortex (PIT & AIT) during fixation. Seven levels of occlusion, ranging from 5 to 60 percent, were applied to static bodies. In both monkeys and regions, three key findings emerged: 1) average response strength decreased, and 2) response onset and peak latencies gradually increased by ~70 ms with the degree of occlusion, with PIT responses consistently preceding AIT; 3) the first response peak was followed by a trough and a stronger second peak under occlusion. To examine the role of visual information loss in the latency shifts, reduced responses, and response peaks, we presented, in addition to the partially occluded bodies, the same stimuli on top of the occluding pattern, and with an invisible occluding pattern, creating bodies with cut-outs. Interestingly, onset latency shifted only ~20 ms for the highest cut-out levels and remained unaffected by the background occluding pattern. Thus, onset latency shifts with occlusion may result from bottom-up occluder-related processing. Despite cut-out-induced response weakening, cut-outs with 60% information loss maintained selectivity similar to that observed during occlusion. However, the trough formation was pronounced when bodies were presented on top of the occluder. Intriguingly, the second peak did not align with response onset shifts but maintained the latency differences between regions, occurring earlier in PIT. Thus, the second response peak in PIT is unlikely to arise from recurrent processing within the region or feedback from AIT. If generated by top-down feedback, one would expect it to appear earlier in AIT and to show better body selectivity.
Yet, based on neural decoding, the selectivity of the second peak never surpassed that of the early response.
Acknowledgements: This research was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 856495).