VSS, May 13-18

Visual Memory: Capacity, encoding

Talk Session: Wednesday, May 18, 2022, 10:45 am – 12:30 pm EDT, Talk Room 1
Moderator: Wilma Bainbridge, University of Chicago

Talk 1, 10:45 am, 62.11

Evidence of perceptual history propagation from decoding of visual evoked potentials

Giacomo Ranieri1, Alessandro Benedetto2, Hao Tam Ho2, David C. Burr1, Maria Concetta Morrone2; 1University of Florence, 2University of Pisa

It is well known that recent sensory experience influences the perception of new stimuli. However, the neural mechanisms mediating this influence are poorly understood. We measured ERP responses to pairs of grating stimuli presented randomly to the left or right hemifield. Seventeen participants judged whether the upper or lower half of each grating had the higher spatial frequency, independently of its horizontal position. This design allowed us to trace both the memory signal modulating the task and the implicit memory signal associated with hemifield position. Using classification techniques, we decoded the position of the current and previous response from the scalp voltage distribution of the current trial. The representation of previous stimuli was not activated before onset of the current stimulus, and its classification reached full significance only 500 ms later, suggesting retrieval of an activity-silent memory trace for both the task-relevant and the task-irrelevant characteristics of the stimuli. Overall, our data provide evidence for a framework wherein recent experience is reactivated concurrently with present neural activity to facilitate serial integration.
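
A minimal sketch of the kind of time-resolved decoding analysis described above, assuming scikit-learn and simulated data (array shapes, channel counts, and the classifier choice are illustrative assumptions, not the authors' pipeline):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated EEG: trials x channels x time points (illustrative shapes)
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 400, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))  # scalp voltages
side = rng.integers(0, 2, n_trials)                       # 0=left, 1=right hemifield

# Decode the PREVIOUS trial's hemifield from the CURRENT trial's topography
X_curr, prev_side = X[1:], side[:-1]

acc = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    acc[t] = cross_val_score(clf, X_curr[:, :, t], prev_side, cv=5).mean()

# Above-chance accuracy that emerges only well after stimulus onset, rather
# than before it, is the signature of a reactivated, activity-silent trace.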

Acknowledgements: This research was funded by the European Research Council (grant agreement no. 832813), ERC Advanced Grant "Spatio-temporal mechanisms of generative perception (GenPercept)".

Talk 2, 11:00 am, 62.12

Lest we forget: Does remembering new information help us forget?

Edyta Sasin1, Yuri Markov2, Daryl Fougnie1; 1New York University Abu Dhabi, 2HSE University, Russia

Information that was once relevant may cease to be important. How do we forget irrelevant information, and how is this affected by remembering new information? We explored this by embedding a directed-forgetting task in a visual long-term memory paradigm. Participants were shown a series of images. Each image was followed either by a cue to Remember or Forget the previous image, or by a new image with the instruction to Remember-Both images or to Replace the previous image with the new one. In a later recognition test, all images were tested. Directed forgetting was effective: memory was worse for Forget than for Remember images. However, we found no evidence that new information helped in forgetting old information: there was no difference between Forget and Replace images, and no difference between Remember and Remember-Both images. Experiment 2 demonstrated that recognition was worse for images replaced by subcategory-related than by unrelated images. Experiment 3 found no difference in memory between images replaced by high- and low-memorability images. Similarly, Experiment 4 found no difference in recognition between images replaced by low- and high-value images (value determined by points). Across all studies, to-be-forgotten or replaced images were remembered less well than to-be-remembered images, demonstrating that information is indeed removed from memory. However, we see very little evidence that remembering new information improves control over forgetting. Strikingly, forgetting of no-longer-relevant information is not influenced by instructions to replace that information with something else (unless the new information is conceptually related, which may reflect interference at memory retrieval). Further, memory was not influenced by the memorability of the new information, nor by whether it was associated with high or low value. While we have many cognitive tools to influence how well we remember, control over how well we forget is quite limited.

Talk 3, 11:15 am, 62.13

Spatial Massive Memory

Jeremy Wolfe1,2, Wanyi Lyu1; 1Brigham and Women's Hospital, 2Harvard Medical School

After viewing hundreds of objects, observers can discriminate old from new items with accuracy of ~80%. Does this massive memory extend to WHERE the objects were seen? Thirteen observers saw arrays of 15 items, randomly located in jiggled 7x7 arrays. Arrays were shown for 30 seconds, with each item highlighted for 2 seconds. After each array, observers were tested with 15 old and 15 new images. If an item was deemed "old", observers clicked on the screen location where they thought the item had been located. Observers completed 20 screens (300 objects). After those 20 screens, observers were retested with all 300 old objects and 300 new objects, repeating the old/new and localization tasks. Old/new discrimination was good (test: d'=2.6; retest: d'=1.8). To measure spatial memory, we tabulated the number of localization clicks that fell within +/- 1 cell of the target location in the 7x7 array, and from this subtracted the number that could have fallen in that ROI by chance. By this conservative measure, observers correctly localized 6.6 of 15 items per screen at test. On retest, they correctly localized 76(!) of the 300 old objects. This is surprisingly robust memory for items presented for 2 seconds in random arrays with no semantic cues. We repeated the experiment at array sizes of 5, 25, and 49 items (300 total old items). Observers correctly localized 50-100 items at retest, though a substantial fraction of online observers seemed to guess about location at retest after viewing 25- and 49-item arrays. This "spatial massive memory" is surprisingly good, obviously far in excess of any working memory capacity. Spatial massive memory appears to be smaller than massive memory for identity, showing that it is possible to recognize that you have seen an item without remembering where it was.
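
A minimal sketch of the chance-corrected localization measure described above, assuming the ROI is the 3x3 neighborhood of the target cell clipped at the grid edges (the authors' exact chance correction may differ):

import numpy as np

GRID = 7  # 7x7 array

def roi_cells(target):
    # Cells within +/-1 row and column of the target (3x3 ROI, clipped at edges)
    r, c = target
    return {(i, j)
            for i in range(max(0, r - 1), min(GRID, r + 2))
            for j in range(max(0, c - 1), min(GRID, c + 2))}

def chance_corrected_hits(clicks, targets):
    hits, chance = 0, 0.0
    for click, target in zip(clicks, targets):
        roi = roi_cells(target)
        hits += click in roi
        chance += len(roi) / GRID ** 2  # expected hits if the click were random
    return hits - chance

# Example with 15 random clicks/targets (one test screen)
rng = np.random.default_rng(1)
targets = [tuple(rng.integers(0, GRID, 2)) for _ in range(15)]
clicks = [tuple(rng.integers(0, GRID, 2)) for _ in range(15)]
print(chance_corrected_hits(clicks, targets))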

Acknowledgements: NEI EY017001, NSF 1848783, and the Mind, Brain, and Behavior program, Harvard College

Talk 4, 11:30 am, 62.14

Semantics, not Atypicality, Reflects Memorability Across Concrete Objects

Max A. Kramer1, Martin N. Hebart2, Chris I. Baker3, Wilma A. Bainbridge1; 1University of Chicago, 2Max Planck Institute, 3National Institute of Mental Health

Why do we remember some things while forgetting others? Prior work has demonstrated remarkable consistency in which stimuli people will later remember, a quantifiable and robust property known as memorability. To explain the memorability of faces and scenes, prior research has used single image features or linear combinations of image features, with limited success. To provide a richer account of memorability, we used a spatial framework in which individual images are represented as points in a multidimensional space generated from a broad, general stimulus set. Specifically, we leveraged THINGS, a naturalistic object image database of 26,107 images that representatively samples concrete objects, to examine which image features most strongly influence memorability and whether it is the most prototypical or the most atypical items that are best remembered. We focus on the role of semantic and visual features and their distribution in determining which images are best remembered. We collected memory performance data from 13,946 participants and related stimulus typicality to memorability using three complementary measures of typicality: human typicality judgments, similarity across object-space dimensions, and similarity across deep neural network features. Our results suggest that semantic information has a stronger influence on memorability than visual information, with a slight bias toward the most prototypical items being the most memorable. These findings run counter to the predominant view that semantic information is not required to explain what is memorable and that the most atypical images are best remembered. Our findings shed new light on the determinants of what makes something memorable, determinants that could only be found using a large, representative dataset.
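
A minimal sketch of one plausible typicality measure of the kind described above (an item's mean similarity to other items in an embedding space) and its relation to memorability; the embeddings and memory scores below are simulated, and the specific estimator is an assumption, not the authors' code:

import numpy as np

def typicality(embeddings):
    # Mean cosine similarity of each item to all other items
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    return (sims.sum(axis=1) - 1.0) / (len(X) - 1)  # exclude self-similarity

rng = np.random.default_rng(2)
emb = rng.standard_normal((100, 49))    # e.g., 49 object-space dimensions
mem = rng.uniform(0.4, 1.0, 100)        # per-image memory scores (hit rates)

# A positive correlation would favor prototypical items being more memorable
print(np.corrcoef(typicality(emb), mem)[0, 1])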

Acknowledgements: Intramural Research Program of the National Institutes of Health (ZIA-MH-002909)

Talk 5, 11:45 am, 62.15

Worse remembering of a dog when viewed in a sequence of dogs is dominated by changes in memory mechanisms as opposed to sensory adaptation

Catrina M. Hacker1, Barnes G.L. Jannuzi1, Travis Meyer1, Madison L. Hay1, Nicole C. Rust1; 1University of Pennsylvania

The categorical context in which an image is viewed influences how well it will be remembered. For example, an image of a dog presented in a sequence of other dog images is typically less memorable than the same image viewed in a random image sequence. To determine the neural correlates of these contextual influences on image memorability, we recorded from large populations of isolated neurons in inferotemporal cortex (ITC) while a rhesus monkey performed a single-exposure visual memory task. The monkey viewed one image per trial and reported whether it was novel or repeated. Images were presented in one of two types of blocks: 1) categorical blocks, in which 80% of the images came from a single category (expected images) and the other 20% came from random categories (oddballs), and 2) random blocks, in which images came from several categories. Like humans, the monkey showed worse remembering for expected images than for oddball and random images. This pattern of behavior was at least qualitatively predicted by neural responses in ITC. To identify the neural mechanisms that shaped these ITC responses, we explored two non-mutually exclusive hypotheses: first, that these effects result from sensory adaptation, characterized by context-dependent changes in the robustness of ITC visual representations; second, that they result from changes in memory mechanisms, reflected in altered ITC memory signals. In the expected condition, we found minimal evidence for sensory adaptation (a 1.1% reduction in firing rates to novel images) but strong reductions in memory signals (a 20% reduction in repetition suppression). These results suggest that contextual modulations of image memorability are dominated by changes in memory mechanisms as opposed to sensory adaptation.
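
An illustrative sketch of the two quantities contrasted above, sensory adaptation and repetition suppression, computed from simulated firing rates (the numbers are chosen only to mimic the reported effect sizes; the authors' estimators may differ):

import numpy as np

rng = np.random.default_rng(3)
# Mean firing rates (Hz) per condition, simulated
novel_random = rng.normal(20.0, 2.0, 500)     # novel images, random blocks
novel_expected = rng.normal(19.8, 2.0, 500)   # novel images, categorical blocks
rep_random = rng.normal(14.0, 2.0, 500)       # repeated images, random blocks
rep_expected = rng.normal(15.1, 2.0, 500)     # repeated images, categorical blocks

# Sensory adaptation: reduced response to NOVEL images in categorical blocks
adaptation = 1 - novel_expected.mean() / novel_random.mean()

# Memory signal: repetition suppression (novel minus repeated), per block type
rs_random = novel_random.mean() - rep_random.mean()
rs_expected = novel_expected.mean() - rep_expected.mean()
rs_loss = 1 - rs_expected / rs_random

print(f"adaptation: {adaptation:.1%}, repetition-suppression loss: {rs_loss:.1%}")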

Acknowledgements: NIH R01EY032878; Simons Collaboration on the Global Brain 543033

Talk 6, 12:00 pm, 62.16

Images that are harder to reconstruct are more memorable and benefit more from additional encoding time

Qi Lin1, Zifan Li1, John Lafferty1, Ilker Yildirim1; 1Yale University

Decades of work have established the reconstructive nature of memory, yet this work focuses almost exclusively on retrieval. Here we ask whether the brain recruits reconstructive processing, in a faster and more automatic form, even at the earliest stage of memory formation: perceptual encoding. Image memorability, the finding that some images are systematically better remembered than others, provides an opportunity to investigate what bridges perception and memory. Existing accounts of memorability evaluate only bottom-up feature hierarchies optimized for image classification (standard deep convolutional neural networks [DCNNs]) as the underlying mechanism, and fail to consider the role of reconstructive processes during visual encoding, which can support generative functions and compression during memory formation. In computational models, such reconstructive processes are often implemented in non-feedforward modules (e.g., top-down, lateral), and we hypothesized that they should express a distinct temporal signature in behavior. To test this hypothesis, we used two models to capture reconstructive processing during visual encoding: a sparse coding model and a generative adversarial network. In Study 1, using a scene memorability dataset of over 2,000 images, we found that images with larger reconstruction error are more memorable (ps<.001) and that reconstruction error captures additional variance in memorability (ps<.001) beyond what can be explained by distinctiveness, a measure derived from the feature space of a DCNN trained for classification. To demonstrate that the variance captured by distinctiveness and reconstruction error reflects functionally distinct processes, we ran a pre-registered Study 2 (N=45) using the rapid serial visual presentation (RSVP) paradigm with varying encoding times (34 to 167 ms). We found that images with large reconstruction error benefit more from longer encoding times than images with small reconstruction error, controlling for distinctiveness (all ps<.05). These results reveal reconstruction error as a previously unrecognized source of image memorability.
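
A minimal sketch of the sparse-coding reconstruction-error measure, one of the two models named above, using scikit-learn's SparseCoder with a random dictionary (in the study the dictionary would be learned from natural images; all shapes and parameters here are illustrative assumptions):

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(4)
D = rng.standard_normal((256, 1024))           # dictionary atoms x pixels
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms for OMP

images = rng.standard_normal((10, 1024))       # flattened images (simulated)
coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                    transform_n_nonzero_coefs=20)
codes = coder.transform(images)                # sparse code per image
recon_error = np.linalg.norm(images - codes @ D, axis=1)

# Hypothesis tested above: larger recon_error -> higher memorability,
# beyond DCNN-based distinctiveness.
print(recon_error)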

Acknowledgements: This project was funded by an AFOSR Young Investigator Program award to IY.

Talk 7, 12:15 pm, 62.17

Serial dependence to prior stimuli and past responses

Timothy Sheehan1, Ben Carfano1, John Serences1,2; 1UC San Diego, 2Kavli Institute for Brain and Mind

Previous work on serial dependence has centered on whether attractive biases emerge during early sensory processing (Fischer & Whitney, 2014; Cicchini et al., 2017) or are driven by decisional or response-production processes (Pascucci et al., 2019; Sadil et al., 2021; Sheehan & Serences, 2021). In Experiment 1 (n=13), we sought to isolate the effects of the stimulus on serial bias in a spatial delayed-report task where the stimulus was visible on some trials and imagined on others. We found systematic attraction to the previous stimulus irrespective of stimulus visibility (p<.0005), suggesting that sensory processing is not necessary to induce attractive biases. However, on one-third of trials the previous stimulus did not require a response ("drop trials") but still induced an attractive bias (p<.05), suggesting that overt report is also not essential. Thus, Experiment 1 suggests that neither sensory processing nor an explicit response is required to induce serial dependence. In Experiment 2 (n=20), we further disentangled the contributions of stimulus and response by adding trials where subjects responded 180 degrees from the actual stimulus location ("flip-response") or responded at a new random location ("random-response"). In this paradigm, we again found attractive serial biases regardless of whether the current or previous stimulus was visible (ps<.001). However, attraction was stronger when the current stimulus was visible (p<.05), suggesting that both sensory and non-sensory components contribute. Critically, responses were attracted to the previously responded-to location following both "flip-response" and "random-response" trials (p<.0001) but showed no residual attraction to the previously remembered location. Together, these studies suggest that serial dependence simultaneously reflects contributions from stimulus processing, memory maintenance, and response production, and argue against a unitary account.
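
Serial-dependence studies of this kind commonly quantify attractive bias by fitting a derivative-of-Gaussian (DoG) to response errors as a function of the previous minus current stimulus location; a generic sketch under that assumption (not necessarily the authors' exact analysis):

import numpy as np
from scipy.optimize import curve_fit

def dog(delta, a, w):
    # Derivative-of-Gaussian: amplitude a, width parameter w, delta in degrees
    return a * delta * np.exp(-(w * delta) ** 2)

rng = np.random.default_rng(5)
delta = rng.uniform(-90, 90, 2000)   # previous minus current stimulus location
error = dog(delta, a=0.08, w=0.02) + rng.normal(0, 3, delta.size)  # simulated

(a_hat, w_hat), _ = curve_fit(dog, delta, error, p0=[0.05, 0.02])
print(f"attraction amplitude: {a_hat:.3f}")  # a_hat > 0 indicates attraction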

Acknowledgements: NEI R01-EY025872 to JTS; NIMH Training Grant in Cognitive Neuroscience (T32-MH020002) to TCS