Probing the structure of working memory representations in vision and audition
Poster Presentation 26.437: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Visual Memory: Objects, features
Abigail Noyce1, Raina Alam1, Will (Yuhang) Li2, Eli Bulger1, Stephanie Bronfman1,3, Kammiee Ardo1, Barbara Shinn-Cunningham1; 1Carnegie Mellon University, 2University of Wisconsin, 3University of Pittsburgh
Working memory (WM) is a capacity-limited resource. Grouping simultaneously presented features into coherent objects provides a scaffold that supports WM efficacy, but less is known about how to-be-remembered items are grouped together over time. In hearing, the organization of sound information into auditory objects is intrinsically temporal and dramatically impacts WM storage and retrieval. To ask whether the same is true for vision, we manipulated temporal grouping and WM retrieval demands in visual and auditory WM in separate experiments. In visual WM, the memoranda were a series of locations occupied by a stimulus disc: in the temporally grouped condition, the disc moved smoothly between locations; in the temporally ungrouped condition, it jumped discontinuously. In auditory WM, the memoranda were a series of sound stimuli: in the temporally grouped condition, these were complex tones with identical timbre; in the temporally ungrouped condition, they were snippets of diverse environmental sounds. WM retrieval of individual items was probed via a Sternberg-type old/new judgment of a single item; WM retrieval of the temporally grouped object was probed via sequence change detection. Our preliminary data suggest that temporal grouping operates differently in the two senses: visual WM performance was similar regardless of temporal grouping, whereas auditory WM performance replicated our prior result, varying with temporal grouping. WM capacity thus depends not only on stimulus features and task demands, but also on fundamental characteristics of the underlying perceptual processes, which differ across sensory modalities.
Acknowledgements: Supported in part by ONR MURI N00014-19-12332; NIH R25 DC020922-02; CMU HURAY.