The coding of multiple visual features in visual working memory

Poster Presentation 56.311: Tuesday, May 19, 2026, 2:45 – 6:45 pm, Banyan Breezeway
Session: Visual Memory: Mechanisms, models, individual differences

Yaoda Xu¹, Marvin Chun¹; ¹Yale University

Compared to visual perception, visual working memory (VWM) has a limited storage capacity and, in general, evokes much lower neural responses. How do these limits affect the coding of multiple visual features in VWM? Here, we examined the coding of shape features from different objects and the coding of color and shape from the same object in VWM. In visual perception, the neural response to a pair of objects shown together can be approximated by the linear average of the responses to each object shown alone. Such response averaging is considered an effective mechanism for combating representational distortion due to neuronal response saturation. Given that response amplitudes are much lower in VWM than in perception, does such averaging also exist in VWM? Across two experiments, we found that, after accounting for task factors such as load, VWM representations of two objects can likewise be approximated by the linear average of those of each component object in both human occipitotemporal cortex (OTC) and posterior parietal cortex (PPC). In contrast, although an object’s color and shape are coded largely independently during visual perception in OTC, we found in a third experiment that these features form a partially integrated representation in VWM. Specifically, a linear classifier trained to decode a pair of objects in one color showed a significant decoding drop when tested on the same pair of objects in a different color during the VWM delay, but not during VWM encoding. This indicates interactive feature coding in VWM, unlike the independent coding seen in perception. Together, our results show that features from different objects maintain their representational separation in VWM as they do in perception, despite a drop in response amplitude; meanwhile, features from the same object lose the representational independence seen in perception and become partially integrated in VWM.
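To make the averaging test concrete, here is a minimal sketch in Python of the comparison logic, run on simulated voxel patterns rather than the authors’ actual data or analysis code; the array names, ROI size, and noise level are all hypothetical.

```python
# Minimal sketch of the response-averaging test: does the pattern for a
# remembered pair match the average of the single-object patterns better
# than either single-object pattern alone? Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200  # hypothetical ROI size (e.g., voxels in OTC or PPC)

# Hypothetical delay-period patterns for each object remembered alone.
pattern_A = rng.normal(size=n_voxels)
pattern_B = rng.normal(size=n_voxels)

# Simulate the two-object pattern under the averaging hypothesis
# (linear average of the single-object patterns, plus noise).
pattern_AB = 0.5 * (pattern_A + pattern_B) + rng.normal(scale=0.3, size=n_voxels)

predicted = 0.5 * (pattern_A + pattern_B)
r_avg = np.corrcoef(pattern_AB, predicted)[0, 1]   # high if averaging holds
r_A   = np.corrcoef(pattern_AB, pattern_A)[0, 1]   # lower: one object only
r_B   = np.corrcoef(pattern_AB, pattern_B)[0, 1]
print(f"pair vs. average r={r_avg:.2f}, vs. A r={r_A:.2f}, vs. B r={r_B:.2f}")
```

On real data, this comparison would presumably be applied to delay-period fMRI patterns with cross-validation across runs; the sketch only shows the shape of the test.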
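The cross-color generalization test can be sketched similarly. Below is a hypothetical simulation, assuming scikit-learn’s LinearSVC as the linear classifier; the `interaction` parameter, which warps the shape signal by color to mimic integrated feature coding, is an invention of this sketch, not a quantity from the study.

```python
# Minimal sketch of the cross-color decoding test: train a linear classifier
# on shape pairs in one color, test on the same pairs in another color.
# Simulated data; illustrative only, not the authors' analysis pipeline.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 50, 100  # hypothetical trials per condition, ROI size

# Fixed "shape-pair" templates and "color" patterns (all hypothetical).
shape = {0: rng.normal(size=n_voxels), 1: rng.normal(size=n_voxels)}
color = {"red": rng.normal(size=n_voxels), "green": rng.normal(size=n_voxels)}

def simulate(s, c, interaction):
    """Noisy trial patterns for shape pair `s` in color `c`. A nonzero
    `interaction` warps the shape signal by color, mimicking the partially
    integrated feature coding inferred for the VWM delay period."""
    template = shape[s] + color[c] + interaction * shape[s] * color[c]
    return template + rng.normal(scale=3.0, size=(n_trials, n_voxels))

def decode(interaction):
    y = np.repeat([0, 1], n_trials)
    train = np.vstack([simulate(s, "red", interaction) for s in (0, 1)])
    clf = LinearSVC(max_iter=10000).fit(train, y)
    same = np.vstack([simulate(s, "red", interaction) for s in (0, 1)])
    diff = np.vstack([simulate(s, "green", interaction) for s in (0, 1)])
    return clf.score(same, y), clf.score(diff, y)

# Independent color/shape coding (encoding-like): little cross-color drop.
print("independent:", decode(0.0))
# Interactive coding (delay-like): accuracy drops for the untrained color,
# while remaining above chance (partial, not complete, integration).
print("integrated:", decode(2.0))
```

The qualitative pattern of the delay-period result corresponds to the second case: a significant cross-color decoding drop that nonetheless stays above chance.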

Acknowledgements: Supported by NIH Grant R01EY030854 to YX.