Learning shapes neural similarity of visual working memory representations
Poster Presentation 23.315: Saturday, May 16, 2026, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Working Memory: Interactions with long-term memory
Vivien Chopurian1, Jacob Miller2, Lana Gaspariani1, Janna Wennberg1, Arielle Tambini3,4, Anastasia Kiyonaga1; 1UC San Diego, 2University of Miami, 3Nathan Kline Institute for Psychiatric Research, 4New York University Grossman School of Medicine
The transient maintenance of a detailed visual memory relies on representations in distributed brain regions. These representations can change over a range of time scales, within a single trial or across days or even months. For instance, representations in visual areas may be transformed across a trial from a sensory-like code into a more abstract, goal-directed format. We have previously shown that repeated exposure to complex visual stimuli over months of learning increased the expression of item-specific and categorical neural representations in prefrontal cortex. Here, we reanalyze this dataset to examine how the representation of different visual properties evolves across learning to support behavior. This study takes a dense sampling approach, in which 3 human subjects underwent repeated fMRI scanning over 3 months. During each session, they completed a series of tasks that incorporated a set of 18 unique fractal stimuli. On each trial of the working memory task, subjects had to remember either one of these trained fractals or a similar novel fractal over a jittered delay. Some fractals were remembered well from Day 1, whereas others were initially remembered poorly but were learned over time. How might these behavioral patterns track with neural differentiation in visual areas? We specified hypothetical models of visual similarity based on output from different layers of a neural network (VGG-16). Then we tested how well those models explain the neural patterns across visual and parietal regions over time. Across all sessions, earlier model layers better explained patterns in earlier visual regions (e.g., V1, V2), while later model layers better explained patterns in later visual regions (e.g., V4). These patterns also changed on different timescales, illustrating how learning unfolds for various levels of visual representation in distributed cortical areas.
Evolving neural differentiation in visual areas may support discrimination between confusable working memory stimuli with learning.
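The layer-wise model comparison described above can be sketched as a representational similarity analysis (RSA): build a dissimilarity matrix from each network layer's stimulus features, build one from the ROI's voxel patterns, and correlate the two. The sketch below uses random placeholder arrays in place of real VGG-16 activations and fMRI data, and the layer names and ROI size are illustrative assumptions, not details from the study.

```python
# Minimal RSA sketch, assuming random placeholders stand in for
# VGG-16 layer activations and fMRI voxel patterns (both hypothetical).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 18  # 18 unique fractal stimuli, as in the study

# Hypothetical per-layer feature matrices (stimuli x features)
layer_features = {
    "early_layer": rng.normal(size=(n_stimuli, 64)),
    "late_layer": rng.normal(size=(n_stimuli, 512)),
}
# Hypothetical voxel patterns for one ROI, e.g. V1 (stimuli x voxels)
roi_patterns = rng.normal(size=(n_stimuli, 200))

# Representational dissimilarity matrices in condensed form:
# correlation distance (1 - Pearson r) between all stimulus pairs
neural_rdm = pdist(roi_patterns, metric="correlation")

for layer, feats in layer_features.items():
    model_rdm = pdist(feats, metric="correlation")
    # Spearman correlation between model and neural RDMs quantifies
    # how well this layer explains the ROI's representational geometry
    rho, _ = spearmanr(model_rdm, neural_rdm)
    print(f"{layer}: rho = {rho:.3f}")
```

Repeating this per session, per ROI, and per layer would yield the learning-related trajectories the abstract describes, with early layers expected to fit early visual areas and later layers to fit later areas.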