Comparing representational structures for simple and complex stimuli in visual working memory

Poster Presentation 36.459: Sunday, May 19, 2024, 2:45 – 6:45 pm, Pavilion
Session: Visual Memory: Working memory and objects, features

Isabella Longoria-Valenzuela1, Timothy Brady1, John Serences1; 1University of California, San Diego

Different stimulus spaces have different representational structures: Gabor representations in perception and memory may depend primarily on tuning functions in early visual areas, while faces may depend on a whole hierarchy of face-selective and non-selective visual representations across the ventral stream. How do these differing representational structures affect memory errors when people are asked to remember these stimuli? To test this, we examined working memory error distributions and neural representations for 3 stimulus spaces: Gabors and two face “wheels.” To create the face wheels, we used a generative adversarial network, picking a random plane in its latent space and generating a circle with either a small or a large radius; this gave us one wheel with more similar faces and one with much more distinct faces. Target stimuli for the memory task were then sampled uniformly from these wheels every 10 degrees, creating 90 target stimuli in total across the 3 wheels. In a behavioral experiment, participants memorized either 2 or 4 stimuli from these wheels and then reported a target stimulus’s location on the corresponding wheel. Memory error on each trial was calculated as the angular difference between the target stimulus and the reported wheel location. Across all wheels, remembering 4 stimuli produced larger memory errors than remembering 2, and there were large differences in the memory error distributions for the three stimulus wheels, allowing us to model the representational structure of simple and more complex stimulus spaces as revealed by behavior.
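The two methods steps above — sampling a circle on a random plane in a GAN's latent space, and scoring recall as a signed angular error on the wheel — can be sketched as follows. This is a minimal illustration, not the authors' code: the latent dimensionality (512), the radius value, and the GAN generator itself are all assumptions, and the generator call is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent dimensionality and wheel radius (not stated in the abstract).
latent_dim = 512
radius = 1.0

# Pick a random 2D plane in latent space via two orthonormal directions.
u = rng.standard_normal(latent_dim)
u /= np.linalg.norm(u)
v = rng.standard_normal(latent_dim)
v -= (v @ u) * u          # Gram-Schmidt: make v orthogonal to u
v /= np.linalg.norm(v)

# Walk around the circle, one latent vector every 10 degrees.
angles_deg = np.arange(0, 360, 10)
angles_rad = np.deg2rad(angles_deg)
latents = radius * (np.cos(angles_rad)[:, None] * u
                    + np.sin(angles_rad)[:, None] * v)
# Each row of `latents` would be fed to the GAN generator to render one face;
# a smaller `radius` yields a wheel of more similar faces, a larger one more
# distinct faces.

def circular_error(target_deg, response_deg):
    """Signed memory error on the wheel, wrapped to [-180, 180)."""
    return (response_deg - target_deg + 180) % 360 - 180
```

For example, `circular_error(350, 10)` is 20 rather than -340, so errors that cross the wheel's wrap-around point are scored by the short way around the circle.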

Acknowledgements: NSF BCS-2146988 to TFB & R01-EY025872 to JTS