Idiosyncrasies in Internal Models Predict Individual Differences in Spatiotemporal Neural Processing of Natural Scenes

Talk Presentation 25.17: Saturday, May 16, 2026, 5:15 – 7:00 pm, Talk Room 1
Session: Scene Perception

Micha Engeser1,2 (micha.engeser@math.uni-giessen.de), Thea Schmitt1, Daniel Kaiser1,2,3; 1Neural Computation Group, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Giessen, 35392 Giessen, Germany, 2Center for Mind, Brain and Behavior (CMBB), Philipps University Marburg, Justus Liebig University Giessen, and Technical University Darmstadt, 35032 Marburg, Germany, 3Cluster of Excellence “The Adaptive Mind”, Justus Liebig University Giessen, Philipps University Marburg, and Technical University Darmstadt, 35392 Giessen, Germany

Why do humans differ in how they perceive the world around them? Traditionally, this question has received limited attention, with variability between participants often dismissed as noise. Building on predictive processing theories, we propose that idiosyncrasies in internal models—expectations about what the world should look like—are a key source of such perceptual variability. Using inter-subject representational similarity analysis (IS-RSA), we tested whether inter-individual similarities in internal models of natural scene categories predict similarities in fMRI and EEG responses to scenes from these categories. To characterize internal models, participants drew what they considered the most typical version of specific scene categories. We then used deep-learning tools to transform these drawings into photorealistic images and quantified inter-individual similarities between the resulting images. Relating these similarities in internal models to inter-individual similarities in neural responses yielded two key insights. First, participants with more similar internal models showed greater alignment in fMRI BOLD time courses within lateral occipital and lateral prefrontal cortices. Second, participants with more similar internal models exhibited more similar scene representations in EEG signals, emerging around 400 ms after stimulus onset. Together, these findings demonstrate that individual priors about the structure of the world offer a parsimonious explanation for why spatiotemporal processing in the visual system varies across individuals.
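The core of an IS-RSA of this kind can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: the feature vectors, variable names, and the use of Pearson correlation are all assumptions, and a full analysis would typically use rank correlations and permutation-based inference.

```python
import numpy as np
from itertools import combinations

def pairwise_similarity(features):
    """Inter-subject similarity matrix: correlation between each pair of
    participants' feature vectors (e.g., embeddings of their internal-model
    images, or vectorized neural responses). Features and metric are
    illustrative assumptions, not the study's actual pipeline."""
    n = len(features)
    sim = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        r = np.corrcoef(features[i], features[j])[0, 1]
        sim[i, j] = sim[j, i] = r
    return sim

def is_rsa(model_features, neural_features):
    """Relate inter-subject similarity in internal models to inter-subject
    similarity in neural responses by correlating the lower triangles of
    the two similarity matrices (a second-order, Mantel-style correlation)."""
    model_sim = pairwise_similarity(model_features)
    neural_sim = pairwise_similarity(neural_features)
    tril = np.tril_indices(len(model_features), k=-1)  # unique subject pairs
    return np.corrcoef(model_sim[tril], neural_sim[tril])[0, 1]
```

A positive `is_rsa` value would indicate that pairs of participants with more similar internal models also show more similar neural responses; in practice this is computed per region (fMRI) or per time point (EEG) to localize the effect in space and time.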