Characterizing idiosyncrasies in perception and neural representation of real-world scenes

Poster Presentation 43.435: Monday, May 20, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Scene Perception: Categorization

Gongting Wang1,2, Matthew Foxwell4, Lixiang Chen1, David Pitcher4, Radoslaw Martin Cichy1, Daniel Kaiser2,3; 1Department of Education and Psychology, Freie Universität Berlin, 2Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, 3Center for Mind, Brain and Behavior, Justus Liebig University Gießen and Philipps University Marburg, 4Department of Psychology, University of York

The efficiency of visual perception is not solely determined by the structure of the visual input. It also depends on our expectations, derived from internal models of the world. Given individual differences in visual experience and brain architecture, it is likely that such internal models differ systematically across the population. Yet, we have no clear understanding of how such differences shape the individual nature of perception. Here, we present a novel approach that uses drawing to directly access the contents of internal models in individual participants. Participants were first asked to draw typical versions of different scene categories (e.g., a kitchen or a living room), taken as descriptors of their internal models. These drawings were converted into standardized 3D renders to control for differences in drawing ability and style. During the subsequent experiments, participants viewed renders that were either based on their own drawings (and thus similar to their internal models), based on other people’s drawings, or based on arbitrary scenes they were asked to copy (thereby controlling for memory effects). In a series of behavioral experiments, we show that participants more accurately categorize briefly presented scene renders when these renders are more similar to their personal internal models. This suggests that the efficiency of scene categorization is determined by how well the inputs resemble individual participants’ internal scene models. Using multivariate decoding on EEG data, we further demonstrate that similarity to internal models enhances the cortical representation of scenes, starting from perceptual processing at around 200 ms. A deep neural network modelling analysis on the EEG data suggests that scenes more similar to participants’ internal models are processed in more idiosyncratic ways, rendering representations less faithful to visual features. Together, our results demonstrate that differences in internal models determine the personal nature of perception and neural representation.
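The abstract does not include analysis code, but the time-resolved multivariate decoding it mentions can be illustrated with a minimal sketch. The sketch below trains a classifier separately at each time point of epoched EEG data; all data, shapes, and variable names are placeholders, not the authors' pipeline.

```python
# Hypothetical sketch of time-resolved multivariate EEG decoding.
# Assumes epoched data X of shape (n_trials, n_channels, n_times) and
# integer labels y; here both are random placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder EEG epochs
y = rng.integers(0, 2, n_trials)  # e.g. own-model vs. other-model renders

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Decode the condition separately at every time point; above-chance accuracy
# at a latency indicates the information is present in the EEG signal then.
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
```

Comparing such decoding time courses between own-model and other-model renders would show when similarity to the internal model begins to enhance the cortical representation (around 200 ms, per the abstract).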
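The deep neural network modelling analysis is likewise unspecified in the abstract; one plausible instantiation is representational similarity analysis (RSA) relating EEG response patterns to DNN feature activations. The sketch below uses random placeholder data and illustrative names, and should not be read as the authors' method.

```python
# Hypothetical RSA-style comparison of EEG and DNN representations.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_scenes = 40
eeg_patterns = rng.standard_normal((n_scenes, 64))   # EEG pattern per scene at one latency
dnn_features = rng.standard_normal((n_scenes, 512))  # placeholder DNN activations per scene

# Representational dissimilarity matrices (condensed form): pairwise
# correlation distances between scene-specific patterns.
eeg_rdm = pdist(eeg_patterns, metric="correlation")
dnn_rdm = pdist(dnn_features, metric="correlation")

# Spearman correlation between RDMs indexes how faithfully the neural
# representation tracks the DNN's visual features; lower values for own-model
# scenes would indicate more idiosyncratic, less feature-faithful processing.
rho, _ = spearmanr(eeg_rdm, dnn_rdm)
print(f"EEG-DNN representational similarity: rho = {rho:.3f}")
```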

Acknowledgements: European Research Council (ERC) starting grant (ERC-2022-STG 101076057); “The Adaptive Mind”, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art; China Scholarship Council (CSC)