Where’s Waldo in the mind: Accessing perceptual and semantic attributes in perception and working memory.

Poster Presentation 43.322: Monday, May 20, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Memory: Encoding, retrieval

Edyta Sasin1, Ying Zhou1, Aytac Karabay1, Sulav Shrestha1, Daryl Fougnie1; 1New York University Abu Dhabi

During perception, low-level features (such as color) are processed faster than high-level features (such as semantic properties). But what about accessing information from working memory? Recent work (Kong & Fougnie, 2021) has shown that search in working memory may differ from visual search in terms of which features are accessed most efficiently. Further, research on long-term memory (Linde-Domingo, Treder, Kerrén, & Wimber, 2019) has shown that semantic information is retrieved more rapidly than perceptual information. However, it is not yet known whether semantic attributes are accessed faster from working memory than perceptual attributes. In two experiments, participants were shown four images of animate or inanimate objects (semantic property), each presented as either a photograph or a drawing (perceptual property). Participants were pre-cued (perception, Experiment 1) or post-cued (working memory, Experiment 2) to the location of one of these objects. The cue was accompanied by either a semantic (“animate or inanimate?”) or a perceptual (“drawing or photograph?”) question. Unsurprisingly, perceptual attributes were discriminated faster than semantic attributes when the information was available to visual perception. However, when the task required accessing information from working memory that was no longer presented, participants responded faster to semantic than to perceptual queries. Together with other recent findings, these experiments point to a reversal of the processing hierarchy between perception and memory. Whereas visual perception proceeds feed-forward from low-level to high-level features, retrieving information from memory may first involve accessing high-level properties such as semantic category, followed by access to lower-level visual properties.
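To make the 2 (attribute probed: semantic vs. perceptual) x 2 (cue timing: pre-cue vs. post-cue, varied across experiments) design concrete, the sketch below generates example trials in the spirit of the procedure described above. It is a minimal Python illustration using hypothetical names (make_trial, build_block); it is not the authors' experiment code, and stimulus presentation, timing, and response collection are omitted.

```python
# Illustrative sketch of the trial structure described in the abstract
# (hypothetical names and parameters; not the authors' actual experiment code).
import itertools
import random

ANIMACY = ["animate", "inanimate"]        # semantic property
FORMAT = ["photograph", "drawing"]        # perceptual property
QUESTIONS = ["semantic", "perceptual"]    # which attribute the probe asks about

def make_trial(cue_timing, question, rng):
    """Build one trial: four objects, one cued location, one question."""
    # Each of the four display/memory items combines a semantic attribute
    # (animacy) with a perceptual attribute (image format).
    items = [
        {"animacy": rng.choice(ANIMACY), "format": rng.choice(FORMAT)}
        for _ in range(4)
    ]
    cued_location = rng.randrange(4)
    target = items[cued_location]
    # The correct answer depends on which attribute the question probes.
    answer = target["animacy"] if question == "semantic" else target["format"]
    return {
        "cue_timing": cue_timing,   # "pre" = Experiment 1, "post" = Experiment 2
        "question": question,
        "items": items,
        "cued_location": cued_location,
        "correct_answer": answer,
    }

def build_block(cue_timing, n_per_condition=2, seed=0):
    """Create a shuffled block crossing question type with repetitions."""
    rng = random.Random(seed)
    trials = [
        make_trial(cue_timing, question, rng)
        for question, _ in itertools.product(QUESTIONS, range(n_per_condition))
    ]
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    # Experiment 2 analogue: the cue and question follow stimulus offset ("post").
    for trial in build_block(cue_timing="post"):
        print(trial["question"], "->", trial["correct_answer"])
```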

Acknowledgements: NYUAD Research Institute Grant CG012