Visual Memory: Working memory and behavior

Talk Session: Sunday, May 19, 2024, 10:45 am – 12:30 pm, Talk Room 1
Moderator: Sven Ohl, Humboldt-Universität zu Berlin

Talk 1, 10:45 am, 32.11

Signal intrusion reconciles divergent effects of perceptual distraction on working memory

Ziyao Zhang1, Jarrod Lewis-Peacock1; 1The University of Texas at Austin

Perceptual distraction distorts visual working memories. Recent research has shown divergent effects of distraction on memory performance, including attractive or repulsive biases in memory reports, improving or impairing memory precision, and increasing or decreasing guess rates. These effects are sensitive to target-distractor similarity and thus have been attributed to sensory interference according to the sensory recruitment hypothesis of working memory. Here, we propose a novel Distractor Intrusion Model (DIM), an extension of the Target Confusability Competition (TCC) framework, to reconcile the discrepant results of perceptual distraction. We hypothesized that sensory interference, in all instances, is driven by the integration of a target memory signal and an intrusive distractor signal. We tested this model against the classical mixture model and other candidate models. Model comparisons showed that TCC-DIM had a superior fit to memory error distributions across six delay-estimation tasks with distraction (N = 220). Both passive and active distraction tasks were examined, and target-distractor similarity was varied between 18° and 153°. According to the model, distractor intrusions decreased with decreasing target-distractor similarity, in accordance with the sensory recruitment hypothesis. Moreover, we found that TCC-DIM successfully replicated divergent effects of distraction on memory bias, precision, and guesses using only this one intrusion mechanism. This model also makes a novel, and somewhat surprising, prediction that low-fidelity memories are likely to benefit from distractor intrusions, whereas high-fidelity memories are likely to become impaired. Our data support this prediction: participants (N = 49) with lower memory precision benefited from distraction, showing a reduction in memory errors relative to no-distraction trials, whereas those with higher memory precision showed greater errors following distraction. These results collectively suggest that perceptual distractors affect working memories through signal intrusions, thus providing a unified mechanism to explain diverse and divergent effects of distraction on working memory performance.
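
For readers who want to see the shape of such a signal-intrusion account, the sketch below simulates a TCC-style observer in which the distractor contributes a weighted familiarity signal on top of the target signal. The exponential similarity kernel, parameter values, and function names are illustrative assumptions, not the authors' implementation of TCC-DIM.

```python
# Illustrative simulation of a TCC-style observer with a distractor-intrusion term.
# Kernel shape, parameter values, and function names are assumptions for exposition,
# not the authors' implementation of TCC-DIM.
import numpy as np

rng = np.random.default_rng(0)
space = np.arange(-180, 180)                 # candidate feature values (deg)

def circ_dist(a, b):
    """Signed circular distance in degrees, in [-180, 180)."""
    return (a - b + 180) % 360 - 180

def similarity(x, ref, tau=30.0):
    """Assumed exponential psychophysical similarity kernel."""
    return np.exp(-np.abs(circ_dist(x, ref)) / tau)

def simulate_errors(dprime, intrusion_w, sep, n_trials=5000):
    """Report errors when a distractor sits `sep` degrees away from the target."""
    target, distractor = 0.0, float(sep)
    errors = np.empty(n_trials)
    for t in range(n_trials):
        signal = (dprime * similarity(space, target)            # target memory signal
                  + intrusion_w * similarity(space, distractor)  # intrusive distractor signal
                  + rng.standard_normal(space.size))             # trial-by-trial noise
        errors[t] = circ_dist(space[np.argmax(signal)], target)
    return errors

# Low- vs high-fidelity memories under the same intrusion strength:
for dprime in (1.0, 3.0):
    err = simulate_errors(dprime, intrusion_w=0.8, sep=40)
    print(f"d'={dprime}: mean abs error {np.abs(err).mean():.1f} deg")
```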

Acknowledgements: This work was completed with support from National Institutes of Health Grant R01EY028746 awarded to J.A.L.-P.

Talk 2, 11:00 am, 32.12

Action planning biases interactions between visual working memory representations

Caterina Trentin1, Christian N.L. Olivers1, Heleen A. Slagter1; 1Vrije Universiteit Amsterdam

Recent studies suggest that planning an action on an object in visual working memory (VWM) can modulate its sensory representations. In this study, we investigated how planning an action on objects in VWM influences the way in which VWM representations interact – specifically whether different associated action plans also lead to more differentiated mnemonic representations of sensory input. We hypothesized that associating two visual orientations with different action plans in VWM would make them appear more dissimilar in memory than two orientations linked to the same action plan. Participants (n=32) memorized the orientation of two bars, sequentially presented on a touch screen. Following a delay, they manually reproduced each of the orientations. Each bar was followed by an action cue informing participants which action had to be performed at test to reproduce the memorized orientations. In the different action condition, the bars were associated with different action plans, i.e., a grip and a slide action. In the same action condition, they were linked to the same action plan, namely both grip or both slide actions. Our results show that similarly oriented bars repelled each other in both conditions (the well-known repulsion effect), but more so when associated with different action plans. Preliminary results from a control experiment indicate that the observed repulsion effect cannot be explained by differential motor biases, but is driven by action planning-induced changes in the mnemonic representations themselves (i.e., is perceptual in nature). Thus, not only visual features, but also action attributes modulate the way VWM representations interact: planned actions on objects in VWM can influence the extent to which their VWM representations appear more or less similar in the mind’s eye.
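
As a rough illustration of how such an inter-item repulsion bias can be quantified, the sketch below folds signed report errors by the sign of the orientation difference to the other memorized bar, so that positive values mean "pushed away from the other item". The variable names and toy data are assumptions for exposition, not the authors' analysis pipeline.

```python
# Rough illustration of quantifying an inter-item repulsion bias with folded errors.
# Variable names and the toy data are assumptions, not the authors' analysis pipeline.
import numpy as np

def circ_diff(a, b, period=180.0):
    """Signed orientation difference in degrees, in [-period/2, period/2)."""
    return (np.asarray(a) - np.asarray(b) + period / 2) % period - period / 2

def repulsion_index(target, other, response):
    """Mean folded error; positive = responses biased away from the other bar."""
    rel = circ_diff(other, target)            # where the other bar sits relative to the target
    err = circ_diff(response, target)         # signed report error
    return np.mean(err * -np.sign(rel))       # flip sign so 'away from other item' counts as +

# Toy data: reports pushed 3 deg away from the other bar in one condition, 6 deg in the other.
rng = np.random.default_rng(0)
target = rng.uniform(0, 180, 200)
other = target + rng.uniform(10, 40, 200)     # similarly oriented second bar
for label, push in (("same action", 3.0), ("different action", 6.0)):
    response = target - push * np.sign(circ_diff(other, target)) + rng.normal(0, 5, 200)
    print(label, round(repulsion_index(target, other, response), 2))
```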

Talk 3, 11:15 am, 32.13

Latent memory traces for prospective items in visual working memory

Luzi Xu1, Andre Sahakian1, Surya Gayet1, Chris Paffen1, Stefan Van der Stigchel1; 1Utrecht University

Visual working memory is a capacity-limited cognitive system that allows for keeping task-relevant information available for goal-directed actions. When selecting a subset of items for encoding in working memory (e.g., pears, pasta, and yogurt from a shopping list), observers can be simultaneously exposed to other items (e.g., tomatoes and eggs, on the same list) that are not selected for imminent action (hereafter: ‘prospective items’). Here, we asked whether prior exposure to such prospective items facilitates subsequent visual working memory encoding of these items, when they are selected for imminent action later. We used a so-called ‘copy task’, in which participants reproduced an arrangement of colored polygons (the ‘model grid’), in an adjacent empty grid. During placement, prospective items (i.e., hitherto unplaced items) in the model grid either remained at a fixed position or were swapped. The latter condition hampered the buildup of memory traces for prospective items. In three experiments, using different approaches to manipulate the stability of prospective items, we consistently observed that, when prospective items remained stable, participants took less time inspecting the model when encoding these items at a later stage (compared to when they were swapped). This reduced inspection duration was not accompanied by a higher number of inspections or an increase in errors. We conclude that the memory system gradually builds up latent memory traces of items that are not selected for imminent action, thus increasing the efficiency of subsequent visual working memory encoding. The present work reveals one way in which the mnemonic system circumvents its capacity limitations to efficiently operate in a complex visual world.

Talk 4, 11:30 am, 32.14

Storage in working memory recruits a modality-independent pointer system

Henry Jones1,2, Darius Suplica1, William Thyer1,2, Edward Awh1,2; 1Department of Psychology, University of Chicago, 2Institute for Mind and Biology, University of Chicago

Prominent theories of working memory (WM) have proposed that distinct working memory systems may support the storage of different types of information. For example, distinct dorsal and ventral stream brain regions are activated during the storage of spatial and object information in visual WM. Although feature-specific activity is likely critical to WM storage, we hypothesize that a content-independent indexing process may also play a role. Specifically, spatiotemporal pointers may be required for the sustained indexing and tracking of items in space and time, even while features change, within an unfolding event. Past evidence for such a content-independent pointer operation includes the finding that signals tracking the number of individuated representations in WM (load) generalize across colors, orientations and conjunctions of those features. However, overlapping orientation and color codes in early visual cortices may mimic a generalizable signal. Here, we provide a stronger demonstration of content-independence by using pairs of features that are as cortically disparate as possible. Study 1 (n=16) used color and motion coherence stimuli, and showed that load decoding models generalized across these disparate features. In addition, we used representational similarity analysis (RSA) to document “pure” load signals that tracked the number of items stored regardless of attended feature, while simultaneously documenting and controlling for feature-specific neural activity. Extending these observations, in Study 2 (n=24; n=16) we applied similar analytic approaches to demonstrate a common load signature between auditory and visual sensory modalities, while controlling for modality-specific neural activity and the spatial extent of covert attention. Our findings suggest that content-independent pointers may play a fundamental role in the storage of information in working memory, and may contribute to its overall limited capacity.
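
To make the cross-feature generalization logic concrete, here is a minimal decoding sketch on synthetic data: a load (set-size) classifier is trained on trials of one stimulus type and tested on the other. The synthetic features, classifier choice, and names are assumptions, not the authors' recording or analysis pipeline.

```python
# Minimal sketch of cross-feature generalization of load decoding: train a set-size
# classifier on trials of one stimulus type and test it on the other. Synthetic
# features stand in for real neural data; nothing here reproduces the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_channels = 400, 30

def synth_trials(load_effect=0.6):
    """Fake single-trial neural features carrying a shared 'load' signal."""
    load = rng.integers(1, 3, n_trials)              # set size 1 or 2
    X = rng.standard_normal((n_trials, n_channels))
    X[:, :5] += load_effect * load[:, None]          # a few channels track load
    return X, load

X_color, y_color = synth_trials()                    # e.g., color trials
X_motion, y_motion = synth_trials()                  # e.g., motion-coherence trials

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_color, y_color)                            # train on one feature type...
print("cross-feature load decoding accuracy:",
      round(clf.score(X_motion, y_motion), 3))       # ...generalize to the other
```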

Talk 5, 11:45 am, 32.15

The relative dominance of visual and semantic information when visual stimuli are retrieved from memory based on images or words

Adva Shoham1, Itay Yaron1, Liad Mudrik1, Galit Yovel1; 1Tel Aviv University

Familiar concepts can be described by their visual and semantic features. These types of information are hard to dissociate in mental representations. In a recent study, we used visual and language DNNs to disentangle and quantify the unique contributions of visual and semantic information in human mental representations of familiar stimuli. We revealed a larger contribution of visual than semantic information during stimulus presentation in perception, but a reversed pattern when stimuli were recalled from memory based on their names. Here we adopt the same methodology to ask how long after stimulus offset visual dominance shifts to semantic dominance. The duration for which visual information is retained following stimulus offset has been debated. To that end, across two studies, we manipulated the delay between stimulus offset and its recall from memory. In Study 1, participants rated the visual similarity of pairs of familiar faces in simultaneous presentation and in sequential presentation with 2-, 5-, or 10-second delays. We extracted representations of the faces from a face-trained DNN, and of their Wikipedia descriptions from a language model. In Study 2, we used data collected by Bainbridge et al. (2019), in which participants were presented with an image of a scene and were asked to copy it while looking at the scene, 1 second or 10 minutes after it was removed, or to draw the scene based on its name with no prior exposure. We extracted representations of the drawings from an object-trained DNN fine-tuned for drawings, and of their Wikipedia descriptions from a language model. Both experiments revealed visual dominance after stimulus offset across all delays, and semantic dominance when stimuli were retrieved from memory based on their names. We conclude that visual information remains dominant even 10 minutes after visual stimulus offset, whereas semantic information dominates the representation when a stimulus is recalled based on verbal information.
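
As a schematic of the kind of analysis used to separate these contributions, the sketch below regresses (synthetic) human pairwise similarity on standardized similarity predictors derived from a vision model and a language model. The data and variable names are placeholders, not the authors' embeddings or ratings.

```python
# Schematic of separating visual from semantic contributions: regress human pairwise
# similarity on standardized similarity predictors from a vision DNN and a language
# model. The synthetic data and variable names are placeholders only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_pairs = 500

# Stand-ins for per-pair similarities computed from DNN and language-model embeddings.
visual_sim = rng.standard_normal(n_pairs)
semantic_sim = 0.3 * visual_sim + rng.standard_normal(n_pairs)

# Stand-in for human similarity ratings, here built mostly from the visual predictor.
human_sim = 0.7 * visual_sim + 0.2 * semantic_sim + rng.standard_normal(n_pairs)

# Standardized multiple regression: the betas index each predictor's unique contribution.
design = np.column_stack([np.ones(n_pairs),
                          stats.zscore(visual_sim),
                          stats.zscore(semantic_sim)])
beta, *_ = np.linalg.lstsq(design, stats.zscore(human_sim), rcond=None)
print(f"visual beta = {beta[1]:.2f}, semantic beta = {beta[2]:.2f}")
```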

Acknowledgements: ISF 917/21

Talk 6, 12:00 pm, 32.16

Further evidence that the speed of working memory consolidation is a structural limit

Benjamin J. Tamber-Rosenau1, Lindsay A. Santacroce1,2, Brandon J. Carlos1,3; 1University of Houston, 2Toronto Metropolitan University, 3Ball State University

It has been proposed that the typically slow consolidation of information from vision to working memory (WM) is under flexible control, and thus can be speeded based on task demands. Recently (Carlos et al., 2023, doi: 10.3758/s13414-023-02757-7), we showed that consolidation is not speeded even when it is prioritized over a subsequent competing decision task (T2). However, other research (Nieuwenstein et al., 2015, doi: 10.1167/15.12.739; Woytaszek, 2020) has manipulated the proportion of trials with T2 present and suggested that anticipated interference from competing tasks can lead to speeding of consolidation. Here, we present evidence against speeding of consolidation even when interference can be anticipated, providing an additional line of evidence against flexible control of WM consolidation. In a within-subjects manipulation, participants completed blocks of a WM task with T2 presented at varying delays from the WM sample, on either 50% or 100% of trials. Retroactive interference from T2 onto WM was similar regardless of block (i.e., T2 probability). In another manipulation, we also varied the delay from T2 response to WM probe and found that this second delay’s duration had no effect on WM reports. Importantly, this suggests that changes in WM performance with sample-T2 delay measure only the interruption of WM consolidation and are not contaminated by proactive interference from T2 onto the report of information from WM. In sum, the present results are consistent with the transfer of information from vision to WM being a slow process that is not under flexible control, either by explicit volitional prioritization or by implicit demands to counter anticipated interference.

Acknowledgements: This material was supported by the United States National Science Foundation under grant number 2127822.

Talk 7, 12:15 pm, 32.17

Probing bidirectional serial dependence in an N-back orientation estimation task

Jongmin Moon1, Hoyeon Yoon1, Oh-Sang Kwon1; 1Department of Biomedical Engineering, Ulsan National Institute of Science and Technology

Vision is continuously shaped by a phenomenon known as serial dependence, wherein the estimation of stimulus features, such as orientation, is systematically biased by past visual input. This bias is believed to leverage the temporal autocorrelation in visual scenes, enhancing perceptual stability and sensitivity to change. To harness the full potential of temporal continuity, observers should consider not only the preceding stimulus but also the following one, when estimating a remembered stimulus feature embedded in a sequence of stimuli. Here, we used an N-back orientation estimation task to investigate whether serial dependence extends to memorized stimuli, with the preceding and/or following stimuli inducing the effect. Subjects were presented with a sequence of randomly oriented Gabor stimuli. The sequence terminated with a constant hazard rate, prompting subjects to recall the orientation of the 1-back stimulus (i.e., the target). Therefore, subjects had to keep both the target and the following stimulus in mind until prompted to recall the target. A probabilistic mixture model was employed to quantify contributions of different sources of error, excluding trials where subjects mistakenly reported the preceding or following stimulus instead of the target. Results revealed a highly consistent pattern of repulsive bias in the forward direction (preceding stimulus biases target estimation) and a weak trend of repulsive bias in the backward direction (following stimulus biases target estimation). Intriguingly, the strength of the repulsive bias was more pronounced for the preceding stimulus, despite the more recent presentation of the following stimulus, which would intuitively be expected to have a stronger working memory trace. These results underscore that our memory of visual scenes is influenced by both preceding and following stimuli, with the bias in the forward direction prevailing in bidirectional serial dependence. Overall, our findings contribute to a deeper understanding of mechanisms underlying serial dependence in visual working memory.
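
For readers unfamiliar with this class of model, the sketch below fits a simple swap-aware mixture (target, preceding-item, following-item, and uniform-guess components) to orientation report errors by maximum likelihood. The parameterization, starting values, and toy data are illustrative assumptions, not the authors' model.

```python
# Sketch of a swap-aware mixture model of the kind described: report errors are
# modeled as a mixture of von Mises components centered on the target, the preceding
# item, and the following item, plus uniform guessing. The parameterization, starting
# values, and toy data are illustrative assumptions, not the authors' model.
import numpy as np
from scipy import optimize, stats

def to_circ(deg):
    """Map 180-deg orientation space onto the full circle, in radians."""
    return np.deg2rad(2.0 * np.asarray(deg))

def neg_log_lik(params, resp, target, prev, nxt):
    kappa, p_prev, p_next, p_guess = params
    p_target = 1.0 - p_prev - p_next - p_guess
    if kappa <= 0 or min(p_target, p_prev, p_next, p_guess) < 0:
        return np.inf                                    # reject invalid parameters
    lik = (p_target * stats.vonmises.pdf(resp - target, kappa)
           + p_prev * stats.vonmises.pdf(resp - prev, kappa)
           + p_next * stats.vonmises.pdf(resp - nxt, kappa)
           + p_guess / (2.0 * np.pi))
    return -np.sum(np.log(lik))

def fit_mixture(resp, target, prev, nxt):
    x0 = np.array([5.0, 0.05, 0.05, 0.05])              # kappa, swap rates, guess rate
    res = optimize.minimize(neg_log_lik, x0, args=(resp, target, prev, nxt),
                            method="Nelder-Mead")
    return res.x

# Toy usage: mostly accurate reports of the target orientation (degrees, 180-deg space).
rng = np.random.default_rng(3)
target = rng.uniform(0, 180, 300)
prev, nxt = rng.uniform(0, 180, 300), rng.uniform(0, 180, 300)
resp = target + rng.normal(0, 8, 300)
print(fit_mixture(to_circ(resp), to_circ(target), to_circ(prev), to_circ(nxt)))
```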

Acknowledgements: This research was supported by the National Research Foundation of Korea (NRF‐2020S1A3A2A02097375).