Visual Memory: Individual differences, imagery
Talk Session: Tuesday, May 19, 2026, 10:45 am – 12:15 pm, Talk Room 1
Moderator: Joan Ongchoco, University of British Columbia
Talk 1, 10:45 am, 52.11
A unique neural signature of individual differences in long-term memory from EEG inter-electrode correlation with event-related potentials
Chong Zhao1, Edward K. Vogel1, Monica Rosenberg1; 1University of Chicago
Classic memory models suggest that the processes supporting visual working memory (VWM) play a central role in determining how effectively information is encoded into long-term memory (LTM). Consistent with this view, VWM and LTM performance are typically correlated, but it remains an open question whether LTM performance relies on neural mechanisms distinct from those supporting VWM encoding. To address this, we recorded EEG activity while participants performed recognition memory tasks with large set sizes (32 and 128 items), far exceeding typical VWM capacity. Using inter-electrode correlation (IC) analyses, we found that IC patterns robustly predicted individual differences in LTM performance across both set sizes, indicating a stable, domain-general neural signature of LTM formation. Crucially, the predictive power of our IC model persisted even after statistically controlling for VWM capacity and attentional control, demonstrating that the model captures variance specifically related to individual differences in LTM performance rather than general cognitive performance. Temporally, IC-based predictive signals emerged shortly after stimulus onset and remained significant for approximately 500–600 ms. Distinct correlation patterns characterized early versus late encoding windows, suggesting that multiple, dynamically shifting neural processes contribute to individual differences in LTM. Together, these results identify a reliable and temporally dynamic neural signature during LTM encoding that tracks individual differences in LTM performance, independent of VWM and attentional control abilities.
We acknowledge funding from the National Institute of Mental Health (grant R01MH087214) and the Office of Naval Research (grant N00014-12-1-0972).
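The abstract does not detail how the inter-electrode correlation (IC) features were computed; as a rough illustration of the general idea, here is a minimal numpy sketch in which electrode time courses within an encoding window are correlated pairwise and the unique pairs serve as a feature vector (the function name, window, and toy data are all illustrative, not the authors' pipeline):

```python
import numpy as np

def interelectrode_correlation(eeg, tmin, tmax):
    """Correlation between electrode time courses in a time window.

    eeg : array of shape (n_electrodes, n_timepoints)
    Returns the upper triangle of the electrode-by-electrode
    correlation matrix as a feature vector (one value per pair).
    """
    window = eeg[:, tmin:tmax]
    ic = np.corrcoef(window)              # (n_elec, n_elec) correlation matrix
    iu = np.triu_indices_from(ic, k=1)    # unique electrode pairs only
    return ic[iu]

# Toy data: 8 electrodes, 300 samples of simulated activity
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 300))
features = interelectrode_correlation(eeg, 50, 250)
print(features.shape)  # (28,) = 8*7/2 electrode pairs
```

A vector like this, computed per participant, could then be related to memory performance across individuals, in the spirit of the analysis described above.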
Talk 2, 11:00 am, 52.12
Hippocampal volume and functional patterns in early childhood reflect the development of adult-like visual memory patterns
Xiaohan (Hannah) Guo1, Yuan Chang Leong1, Wilma A. Bainbridge1; 1The University of Chicago
The memorability effect refers to the phenomenon that adults reliably remember and forget the same visual stimuli as each other. This adult-like consistency emerges in childhood: 4-year-olds show adult-like memory patterns, whereas 3-year-olds remember and forget the same stimuli as other 3-year-olds but differ from adults (Guo & Bainbridge, 2024). This study tests whether neural development—particularly within the hippocampus—drives the emergence of adult-like visual memory by age 4, given that adults’ sensitivity to memorability is decodable in the ventral visual stream and medial temporal lobe, including the hippocampus (Bainbridge & Rissman, 2018). We ask: (1) whether brain activity tracks the emergence of adult-like visual memory in children by age 4, and (2) whether hippocampal volume and functional activity explain the degree to which children show adult-like memory patterns. Using an existing fMRI movie-watching dataset (ages 3–adult; N = 155; Richardson et al., 2023), we computed a memorability score for each movie timepoint using a deep neural network (ResMem) trained to predict memorability (Needell & Bainbridge, 2022). A cross-validated model trained on adults’ brain data successfully predicted movie-frame memorability scores from children’s neural data beginning at age 4, but not at age 3, suggesting that adult-like neural sensitivity to memorability emerges by age 4, replicating the behavioral findings. Structural analyses indicated that age and hippocampal volume measures—but not overall brain volume—significantly predicted the adult-likeness of children’s visual memory patterns. Functional searchlight analyses further revealed that the inferior temporal gyrus reflects movie-frame memorability from age 3 onward, while the hippocampus shows this sensitivity starting at age 5. Together, these results suggest that the emergence of adult-like visual memory patterns is driven by structural and functional development in the hippocampus and IT cortex around age 4.
Talk 3, 11:15 am, 52.13
Adaptive Deployment of Working Memory in Older Adulthood
Nir Shalev1, Liz Atias1; 1University of Haifa, Israel
Working memory (WM) supports the maintenance of goal-relevant information during interaction with the environment. Standard laboratory tests consistently report age-related declines in WM, yet they often fail to capture how WM is used in the real world. In everyday situations, we rarely rely on our maximum WM capacity; instead, we balance between forming internal representations in WM and resampling information from the environment. For example, when following a cake recipe, we can either hold several ingredients in mind while searching the pantry or repeatedly look up each ingredient in the recipe. Understanding this adaptive allocation of WM resources is essential for explaining how memory decline affects people in everyday life and for linking lab-based findings with real-world performance. We tested whether younger (20–30 years) and older (65–75 years) adults differ in their flexible use of WM in an immersive Virtual Reality (VR) object copying task that approximates real-world activity. Participants (N=100) encoded the identity and location of items on a simulated model board, searched for targets among distractors, and placed them in matching slots. We manipulated movement demands to examine how reliance on WM changes as information sampling becomes more effortful. Older adults were generally slower and relied on fewer items in memory, yet greater movement demands prompted both groups to adapt and use more memory. Adaptation patterns diverged: younger adults showed sharper increases in encoding time and memory use as movement demands rose, whereas older adults incurred larger costs in completion time. VR-based performance measures correlated with behaviour outside the task, supporting ecological validity: memory use was associated with cognitive ability, and completion time with functional mobility. 
These findings demonstrate adaptive WM use across adulthood and show how ecologically valid VR assessments can reveal age-related differences that conventional tests overlook.
Supported by an Israel Science Foundation research grant (1073/24).
Talk 4, 11:30 am, 52.14
Pain effects on visual working memory are modulated by the interaction between individual differences and attentional priority
Phivos Phylactou1, Angdi Chu2, Mathieu Piché3, Siobhan Schabrun2, David Seminowicz2; 1University of Nevada, Reno, 2Western University, 3Université du Québec à Trois-Rivières
The effects of pain on cognition are often mixed, with evidence showing bidirectional pain effects on cognitive performance. This has led the field to propose a two-group categorization: individuals who perform better in cognitive tasks under pain (A-types) and individuals who perform worse (P-types). Alternatively, the bidirectional effects may instead be due to an interaction of pain with attentional priority rather than individual differences. To explore this alternative, we performed four experiments (total n = 110) that manipulated attentional priority via retro-cues in an orientation change-detection visual working memory (VWM) task, while participants experienced thermal pain. Experiment 1 (n = 45) indicated worse VWM performance when pain was introduced after compared to before attentional prioritization (BF = 4.22). However, when accounting for individual differences, evidence showed that pain effects were evident when introduced prior to attentional prioritization for A-types (BF = 353.93), but after attentional prioritization for P-types (BF = 28.82). These effects were absent during an auditory distractor control condition. To further explore whether the dissociable effects were better explained through attentional mechanisms or individual differences, Experiment 2 (n = 20) modeled VWM performance as a function of thermal pain intensity. Model-fitting comparisons indicated that accounting for individual differences provides a better fit than attentional mechanisms, but also showed better model fits for P-types versus A-types (BF = 3.49). We investigated this further in Experiments 3 (n = 25) and 4 (n = 20), which showed that P-types are more susceptible to VWM pain effects, both during prioritized (Experiment 3: BF = 4.76) and unprioritized (Experiment 4: BF = 24.81) attentional states.
Contrary to our predictions, these findings illustrate that the relationship between pain and VWM is better understood through an interaction between individual differences and attentional prioritization, rather than through either factor alone.
This work was partly supported through the 2023 Postdoctoral Associate Recruitment Award of the Faculty of Health Sciences at Western University, awarded to PP and SMS
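The model comparison described in Experiment 2 (does adding individual differences improve the fit of VWM performance as a function of pain intensity?) can be illustrated with an approximate Bayes factor from the BIC difference of two nested linear models. This is a standard approximation (Wagenmakers, 2007), not necessarily the authors' method, and the simulated data and group coding are purely illustrative:

```python
import numpy as np

def bic_linear(X, y):
    """BIC of an ordinary least-squares fit (Gaussian errors)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
n = 110
pain = rng.uniform(0, 10, n)        # thermal pain intensity (arbitrary units)
group = rng.integers(0, 2, n)       # 0 = A-type, 1 = P-type (illustrative)
# Simulated VWM accuracy: only the P-type group declines with pain
vwm = 0.8 - 0.03 * pain * group + rng.normal(0, 0.1, n)

ones = np.ones(n)
X0 = np.column_stack([ones, pain])                        # pain only
X1 = np.column_stack([ones, pain, group, pain * group])   # + individual differences

# BF ~ exp((BIC0 - BIC1) / 2) favours the richer model when > 1
bf10 = np.exp((bic_linear(X0, vwm) - bic_linear(X1, vwm)) / 2)
print(bf10 > 1)
```

With these simulated data the richer model wins because the pain effect genuinely differs by group, mirroring the kind of evidence the abstract reports.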
Talk 5, 11:45 am, 52.15
Can you imagine in colour?: Memory confusion as an index for colour imagery vividness
Zenith A. Zyn1, Joan Danielle K. Ongchoco1; 1The University of British Columbia
Our perceptual experiences are full of colour—the red of an apple or the blue of the sky—that involve a dynamic interplay between surface reflectance, illumination, and viewing conditions. But consider the colour of your mental images. Some people report their mental images to be rich in colour, whereas others experience colours as dim, even nearly black-and-white. The present study develops an objective property-specific measure of colour imagery using a source memory confusion paradigm. Observers first completed a baseline imagery calibration in which they adjusted the lightness (i.e., L in HSL colour space) of four canonical colours (pink, cyan, orange, green) to match their mental images. These settings were used for abstract shapes that were presented either fully coloured (in the lightness provided by the observer) or merely outlined. In coloured trials, observers studied the shape for a period of time; in outline trials, they imagined “filling in” the shape with a cued colour for the same duration. At test, all shapes appeared coloured, and observers reported whether each shape was old or new—and, for recognized shapes, whether its colour had been seen or imagined. Finally, all observers completed the Vividness of Visual Imagery Questionnaire (VVIQ). Shape recognition was robust, confirming successful encoding for coloured and outlined shapes. But the critical question was whether observers confused outlined shapes as having been seen in colour, where successful imagery would predict greater source-memory confusion. Individual differences in source-memory confusion were predicted by colour-specific VVIQ items (with greater confusion in observers who gave the maximum rating of 5), but not by overall VVIQ scores or lightness settings.
These results expand our notion of “vivid” mental imagery: imagery vividness may be better captured by independent visual features, and successful “vivid” imagery may blur (in memory) the line between what was perceived versus imagined.
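The calibration step above adjusts only the L channel of HSL while holding hue fixed. Using Python's standard library (where the module is `colorsys` and the argument order is HLS), a minimal sketch of generating a canonical colour at an observer-adjusted lightness might look like this; the hue values and function name are illustrative, not the study's stimuli:

```python
import colorsys

# Canonical hues (in degrees) for the four colours; values are illustrative.
CANONICAL_HUES = {"pink": 330, "cyan": 180, "orange": 30, "green": 120}

def colour_at_lightness(name, lightness, saturation=1.0):
    """RGB triple for a canonical hue at an observer-adjusted lightness.

    lightness and saturation are in [0, 1]; note that Python's
    colorsys takes hue, lightness, saturation in HLS order.
    """
    h = CANONICAL_HUES[name] / 360.0
    return colorsys.hls_to_rgb(h, lightness, saturation)

# An observer who imagines a dim cyan might settle on L = 0.3
print(colour_at_lightness("cyan", 0.3))
```

Each observer's saved lightness settings could then be applied when rendering the "fully coloured" study shapes.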
Talk 6, 12:00 pm, 52.16
The Role of V1 Fovea in Visual Mental Imagery
Yangjianyi Cao1, Yanna Mao2, Shuai Chang3, Li Zhaoping4, Ming Meng1; 1University of Alabama at Birmingham, United States, 2South China Normal University, China, 3Zhongshan Ophthalmic Center of Sun Yat-Sen University, China, 4University of Tübingen, Max Planck Institute for Biological Cybernetics, Germany
Prior research on visual perception suggests that the foveal region of V1 exhibits strong top-down feedback (Zhaoping, 2023, Trends Cog. Sci.) and contains high-level information about objects located in the peripheral visual field (Williams et al., 2008, Nat. Neurosci.). Although visual mental imagery is the cognitive ability to voluntarily generate internal representations without sensory inputs, it activates brain regions that overlap remarkably with those engaged by visual perception, particularly the primary visual cortex (V1). However, it remains unknown whether and how the foveal region is involved in representing imagined targets located in the periphery. Using functional magnetic resonance imaging (fMRI), we measured neural activity in individuals with aphantasia and typical imagers during a voluntary imagery task. Participants were instructed to imagine a green-vertical or red-horizontal Gabor patch in their left or right visual field. We employed univariate analysis and multivariate pattern analysis (MVPA) within retinotopically defined V1 to assess BOLD signal amplitude and decode the combined spatial and feature information of the imagined targets. Results revealed that, unlike typical imagers, who showed no significant activation in the foveal region of V1, individuals with aphantasia exhibited significant negative BOLD signal changes compared with the no-imagery period. Crucially, this signal reduction did not compromise multivariate pattern decoding in the foveal region, as its accuracy remained significantly above chance level. This suggests that, despite the absence of subjective visual experience, aphantasic individuals may recruit alternative neural strategies to engage the foveal region when attempting to generate the specific internal representation. Collectively, these results advance our understanding of visual imagery generation by identifying the foveal region of V1 as a critical hub for top-down feedback modulation.
Furthermore, this finding supports the argument that V1 does not merely passively receive external input but also actively shapes subjective experience, thereby serving as a necessary substrate for visual awareness.
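The above-chance MVPA decoding described in this abstract can be illustrated with a hand-rolled leave-one-out nearest-centroid classifier over voxel patterns. This is a generic sketch of the technique, not the authors' classifier; the simulated patterns, effect size, and condition labels are all hypothetical:

```python
import numpy as np

def loo_nearest_centroid(patterns, labels):
    """Leave-one-out nearest-centroid decoding accuracy.

    patterns : (n_trials, n_voxels) activity patterns
    labels   : (n_trials,) condition labels
    """
    n = len(labels)
    correct = 0
    for i in range(n):
        train = np.ones(n, dtype=bool)
        train[i] = False                           # hold out trial i
        classes = np.unique(labels[train])
        centroids = np.stack([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(centroids - patterns[i], axis=1)
        correct += classes[np.argmin(dists)] == labels[i]
    return correct / n

rng = np.random.default_rng(2)
n_per, n_vox = 20, 50
# Two imagined conditions whose mean voxel patterns differ slightly
a = rng.normal(0.0, 1.0, (n_per, n_vox))
b = rng.normal(0.8, 1.0, (n_per, n_vox))
patterns = np.vstack([a, b])
labels = np.array([0] * n_per + [1] * n_per)
acc = loo_nearest_centroid(patterns, labels)
print(acc > 0.5)  # above the 50% chance level for two conditions
```

The key point mirrored here is that condition information can remain decodable from a region's pattern of activity even when the region's mean signal change is negative or absent.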