Visual Memory: Space, time, features, objects
Talk Session: Tuesday, May 23, 2023, 10:45 am – 12:30 pm, Talk Room 1
Moderator: Brian Scholl, Yale University
Talk 1, 10:45 am, 52.11
A signal-detection model evaluates feature dependence in visual long-term memory for real-world objects
Igor Utochkin1, Daniil Grigorev2; 1University of Chicago, 2HSE University
Whether complex real-world objects are represented in visual long-term memory as bound units or as sets of independent features is debated. We apply a signal-detection approach to this question. Four groups of observers (n=100 per group) memorized the same set of 120 objects; three groups then performed 2-AFC recognition and the fourth performed 4-AFC recognition. For any given target, the foil in one group was a different exemplar (backpack A versus backpack B); in another group, the foil was the same exemplar in a changed state (open backpack A versus closed backpack A); in a third group, the foil was a different exemplar in a changed state (open backpack A versus closed backpack B). The fourth group performed 4-AFC with all three foil types. We calculated SDT discriminability (d′) for each target-foil combination and recovered a 2D signal-detection space for each target and its three foils. d′ values for discriminations based on exemplar (d′_exemplar) or state (d′_state) alone were set as the centers of target familiarity distributions on the corresponding feature dimensions. Discriminations based on both features were determined by the separability of the bivariate signal and noise distributions: d′_exemplar+state = f(d′_exemplar, d′_state, ρ), where ρ is the noise correlation between the dimensions. We found that the majority of discrimination spaces showed relative feature independence (ρ close to 0; median ρ = 0.26), while a fraction of spaces showed strong dependence (ρ close to 1). We also found that d′ estimates from the 2-AFC tasks yielded precise and specific predictions for 4-AFC performance (hits and false alarms for each individual foil), and incorporating the dependence measure ρ further increased the precision of these predictions. We conclude that the feature unity/separability of memory representations is not all-or-none but a continuous property that depends on the noise correlation in the underlying discrimination spaces.
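To make the combination rule concrete, here is a minimal numerical sketch (not the authors' code), assuming unit-variance Gaussian familiarity distributions so that the combined d′ is the Mahalanobis separation of the bivariate signal and noise distributions; the function names and example accuracies are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def dprime_2afc(p_correct):
    """Convert 2-AFC proportion correct to d' (d' = sqrt(2) * z(pc))."""
    return np.sqrt(2) * norm.ppf(p_correct)

def dprime_combined(d_exemplar, d_state, rho):
    """Mahalanobis separation of two bivariate Gaussian familiarity
    distributions with unit variances and noise correlation rho, whose
    means differ by (d_exemplar, d_state)."""
    num = d_exemplar**2 - 2 * rho * d_exemplar * d_state + d_state**2
    return np.sqrt(num / (1 - rho**2))

# With rho = 0 the two dimensions combine Euclidean-style; as rho -> 1
# (with equal d's), the combined d' collapses toward a single dimension.
d_ex, d_st = dprime_2afc(0.80), dprime_2afc(0.72)
print(dprime_combined(d_ex, d_st, 0.0))
print(dprime_combined(d_ex, d_st, 0.26))  # median rho reported above
```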
Talk 2, 11:00 am, 52.12
Deriving the Representational Space and Memorability of Object Concepts and Features
Meng-Chien Lee1, Marc G. Berman1, Wilma A. Bainbridge1, Andrew J. Stier1; 1University of Chicago
Why are some object concepts (e.g., windshield vs. toothpaste) more memorable than others? Prior studies have examined, with mixed success, whether the visual and semantic features (e.g., color) and typicality (e.g., for birds: robin vs. penguin) of object images influence the likelihood of their being remembered (Kramer et al., 2022). One reason for these modest effects may be that visual and memory spaces have predominantly been modeled using Euclidean geometry, which may not reflect the true structure of the space. In this study, we examined whether an entirely different geometry, such as a continuous hyperbolic space that approximates discrete hierarchies, explains differences in memorability. Specifically, we hypothesized that image concepts would be geometrically arranged in hierarchical structures and that memorability would be explained by a concept’s depth in these hierarchical trees (with deeper concepts less well remembered). To test this hypothesis, we constructed a hyperbolic representation space of object concepts (N=1,854) from the THINGS database (Hebart et al., 2019), which consists of naturalistic images of concrete objects, and a space of 49 feature dimensions (e.g., red, tall) derived from data-driven models. Using ALBATROSS (Stier et al., in prep.), a stochastic topological data analysis technique that detects underlying structures in data, we demonstrated that hyperbolic geometry captures the organization of object concepts and their memorability better than Euclidean geometry does. Specifically, we found that concepts closer to the center of the hyperbolic representational space are more prototypical and more memorable; in contrast, there was no consistent geometric organization of memorability and typicality in the Euclidean space. Taken together, these results show that concept typicality and depth in the hierarchical structure of image concepts contribute to how likely a concept is to be remembered across people.
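For readers unfamiliar with hyperbolic embeddings, the sketch below shows how a concept's "depth" can be read off as its distance from the center, assuming a Poincaré-ball model of continuous hyperbolic space (one standard choice); the embedding itself and the ALBATROSS analysis are not reproduced here.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u**2)) * (1 - np.sum(v**2)) + eps
    return np.arccosh(1 + 2 * sq / denom)

def depth_from_center(x):
    """Hyperbolic distance from the origin; larger = deeper in the hierarchy,
    which the abstract links to lower prototypicality and memorability."""
    return poincare_distance(np.zeros_like(x), x)

# A point near the ball's boundary is hyperbolically far from the center:
print(depth_from_center(np.array([0.10, 0.0])))  # shallow / prototypical
print(depth_from_center(np.array([0.95, 0.0])))  # deep / atypical
```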
Talk 3, 11:15 am, 52.13
TALK CANCELLED: Individual preferences for space or time in visual working memory are related to spatial and temporal abilities and persist over months
Anna Heuer1, Martin Rolfs1; 1Department of Psychology, Humboldt-Universität zu Berlin, Germany
Both space and time serve as scaffolding dimensions that support visual working memory (VWM). While the relative importance of space versus time depends on the distribution of items along either dimension, individuals have also been observed to differ in their weightings of spatial and temporal information, and these weightings remain stable across testing sessions (Heuer & Rolfs, 2021). Here, we followed up on this finding in a longitudinal study with repeated measurements after two weeks and six months to determine (a) whether such individual preferences are stable over extended periods of time, and (b) whether they are related to spatial and temporal abilities more generally. Each measurement consisted of two testing days. On day one, participants performed a colour change detection VWM task, which we used to assess individual preferences as the difference in performance between temporal and spatial retrieval contexts. As spatiotemporal information was entirely task-irrelevant, this task measured the incidental encoding of space and time without encouraging reliance on either dimension. On day two, participants performed six independent but analogously designed spatial and temporal tasks to assess spatial/temporal abilities: (1) discrimination and classification of line lengths/durations, (2) reproduction of a single location/duration, and (3) reproduction of a spatial configuration/temporal sequence. Individual weightings of space and time in VWM were positively correlated across all measurement time points, indicating that such preferences persist over extended periods. Moreover, they were related to performance in one pair of spatial/temporal tasks: participants with a temporal preference in VWM reproduced temporal sequences more precisely than spatial configurations, whereas participants with a spatial preference exhibited the opposite pattern. Thus, individuals have stable, trait-like preferences for coding VWM contents spatially or temporally, and these preferences are not an isolated characteristic of VWM but are associated with specific spatial and temporal abilities.
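A minimal sketch of how such a preference index and its test-retest stability might be computed; the data here are simulated and all names are hypothetical, not the authors' pipeline.

```python
import numpy as np
from scipy.stats import pearsonr

def preference_index(acc_temporal, acc_spatial):
    """Positive = temporal preference, negative = spatial preference:
    difference in change-detection accuracy between retrieval contexts."""
    return acc_temporal - acc_spatial

# Hypothetical per-participant accuracies at two measurement time points.
rng = np.random.default_rng(0)
pref_t1 = preference_index(rng.uniform(0.6, 0.9, 40), rng.uniform(0.6, 0.9, 40))
pref_t2 = pref_t1 * 0.7 + rng.normal(0, 0.05, 40)  # simulated stable trait

r, p = pearsonr(pref_t1, pref_t2)  # stability across sessions
print(f"test-retest r = {r:.2f}, p = {p:.3f}")
```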
Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft (DFG research grants HE 8207/1-1, HE 8207/1-2, RO 3579/11-1 and RO 3579/11-2; DFG's Heisenberg program RO 3579/8-1 and RO 3579/12-1).
Talk 4, 11:30 am, 52.14
Micro-timing of iconic memory readout
Karla Matic1,2,3, Issam Tafech1,3, John-Dylan Haynes1,2,3,4,5; 1Charité—Universitätsmedizin Berlin, 2Max Planck School of Cognition, 3Humboldt-Universität zu Berlin, 4German Center for Neurodegenerative Diseases, 5Technische Universität Dresden
The human visual system is immensely efficient at extracting and briefly retaining visual information. The classic Sperling task has suggested the existence of a high-capacity memory store, iconic memory, that decays rapidly following stimulus offset. But when exactly does perception end and iconic memory begin? We measured the availability of visual information at the temporal transition between perception and iconic memory. In a partial report task, we parafoveally presented radial arrays of 4, 8, 12, or 16 consonant letters, coupled with exogenous cues shown before, simultaneously with, or after the stimulus. Contrary to the expectation of ultra-high iconic capacity, we observed significantly reduced availability of information when the probe was presented immediately at stimulus offset compared to during stimulus presentation. At first sight, this may suggest that iconic memory, even at its highest capacity, is a significantly degraded representation of the stimulus. Instead, we propose that this decrease in available information is partially due to a cue-readout delay: the time it takes to process the spatial cue and initiate read-out from the stimulus representation. To estimate the length of the cue-readout delay, we modeled the availability of visual information in two stages: an exogenous stage, when sensory information is available externally from the stimulus, and an endogenous stage, when information is read out of a rapidly decaying iconic memory, approximated by a three-parameter exponential function. Our model indicated very short cue-readout delays, on the order of 25 ms. Our findings suggest that estimated time constants of iconic memory decay need to be adjusted for these cue-readout delays. We routinely assume that memory begins only after the offset of a stimulus, but our data suggest that experimental probing of memory contents may need to begin during stimulus exposure, at a stage that would usually be considered “perception”.
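The following sketch illustrates the logic of such a two-stage model, assuming an illustrative three-parameter exponential decay and a fixed cue-readout delay; the parameter values and functional form are placeholders, not the authors' fitted model.

```python
import numpy as np

def availability(t_cue, t_offset, a0=1.0, a_inf=0.2, tau=0.15, delay=0.025):
    """Information available at readout as a function of cue onset time t_cue
    (seconds). Readout begins only after the cue-readout delay; if that still
    falls before stimulus offset, the full exogenous signal is available.
    Afterwards, availability follows a three-parameter exponential decay:
    a(t) = a_inf + (a0 - a_inf) * exp(-t / tau), with t = time since offset."""
    t_read = t_cue + delay            # when readout actually begins
    if t_read <= t_offset:            # exogenous stage: stimulus still on
        return a0
    t = t_read - t_offset             # endogenous stage: iconic decay
    return a_inf + (a0 - a_inf) * np.exp(-t / tau)

# A cue presented exactly at stimulus offset already samples a decayed trace,
# because of the ~25 ms cue-readout delay:
print(availability(t_cue=0.2, t_offset=0.2))  # < a0
```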
Acknowledgements: Supported by BMBF and Max Planck Society
Talk 5, 11:45 am, 52.15
The “unfinishedness” of dynamic events is spontaneously extracted in visual processing: A new ‘Visual Zeigarnik Effect’
Joan Danielle K. Ongchoco1, Kimberly W. Wong1, Brian Scholl1; 1Yale University
The events that occupy our thoughts in an especially persistent way are often those that are *unfinished*: half-written papers, unfolded laundry, and items not yet crossed off to-do lists. This factor has also been emphasized in work on higher-level cognition, as in the “Zeigarnik effect”: when people carry out various tasks, but some are never finished due to extrinsic interruptions, memory tends to be better for the tasks that were left unfinished. But just how foundational is this sort of “unfinishedness” in mental life? Might unfinishedness be spontaneously extracted and prioritized even in lower-level visual processing? To explore this, we had observers watch animations in which a dot moved through a maze, starting at one disc (the ‘startpoint’) and moving toward another disc (the ‘endpoint’). We tested the fidelity of visual memory by having probes (colored squares) appear briefly along the dot’s path; after the dot finished moving, observers simply had to indicate where the probes had appeared. On ‘Completed’ trials, the motion ended when the dot reached the endpoint, but on ‘Unfinished’ trials, the motion ended shortly *before* the dot reached the endpoint. Although this manipulation was entirely task-irrelevant, it had a powerful influence on visual memory: observers placed probes much closer to their correct locations on Unfinished trials. This same pattern held across several experiments, even while carefully controlling for various lower-level properties of the displays (such as the speed and duration of the dot’s motion). The effect also generalized across different types of displays (e.g., replicating when the moving dot left a visible trace). This new *Visual Zeigarnik Effect* suggests that the unfinishedness of events is not just a matter of higher-level thought and motivation, but can be extracted as part of visual perception itself.
Acknowledgements: This project was funded by ONR MURI #N00014-16-1-2007 awarded to BJS.
Talk 6, 12:00 pm, 52.16
Lingering distractor representations bias memory reports
Ziyao Zhang1, Jarrod A. Lewis-Peacock1; 1The University of Texas at Austin
Working memory must persist through distraction to guide goal-directed behavior. Prior research has consistently shown that memory reports are systematically biased towards distractors from the same feature space as the memoranda, but the neural mechanisms that drive this effect are not well understood. We hypothesized that distractor interference could arise from memory representations being shifted towards the distractors, or from distractor representations that linger in working memory. Here, we recorded EEG from N = 23 participants (mean age = 19.1 years, 20 female) while they performed a delayed-estimation task with a distraction task inserted into the delay period. Participants encoded two orientations and were given a retro-cue (80% validity) to prioritize one of them. A brief visual ping (3 white circles) was shown to reactivate memory representations before participants reproduced the probed orientation. In half of the trials, a sensory-motor distraction task was inserted into the delay period, in which participants manually rotated a black bar to match the orientation of a white bar on the screen. Behavioral results showed reliable attractive biases towards distractors for both prioritized (3.0°) and unprioritized (1.6°) memories. Multivariate EEG analyses revealed that the prioritized orientation was reliably decoded in trials without distraction, but not in trials that included distraction. In distraction trials, the distractor orientation was decoded instead, and larger attractive biases were found in trials with stronger distractor representations. Importantly, the loss of decodability of prioritized memories did not lead to catastrophic memory loss. These memories could have been preserved in another region or format to survive the distraction, or they could have been stored silently during distraction and then reactivated to guide memory responses. Regardless, these findings suggest that attractive biases in memory reports following distraction are driven by lingering distractor representations, not by biased memory representations.
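As an illustration, a common way to quantify an attractive bias in delayed estimation is to sign each response error by the distractor's direction relative to the target; the sketch below follows that convention with hypothetical numbers and is not the authors' analysis code.

```python
import numpy as np

def circ_diff(a, b, period=180.0):
    """Signed circular difference a - b for orientations (deg, 180 deg period)."""
    return (a - b + period / 2) % period - period / 2

def attractive_bias(responses, targets, distractors):
    """Mean response error signed so that positive = shifted toward the
    distractor; reliably positive values indicate an attractive bias."""
    error = circ_diff(responses, targets)
    toward = np.sign(circ_diff(distractors, targets))
    return np.mean(error * toward)

# Hypothetical trials in which responses are pulled toward the distractor:
targets = np.array([10.0, 80.0, 150.0])
distractors = np.array([40.0, 50.0, 120.0])
responses = np.array([13.0, 77.0, 147.0])
print(attractive_bias(responses, targets, distractors))  # 3.0 degrees
```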
Acknowledgements: This work was completed with support from National Institutes of Health Grant R01EY028746, awarded to J.A.L.-P.
Talk 7, 12:15 pm, 52.17
It’s a match! Visual template matching enhances concurrent task processing
Yi Ni Toh1, Vanessa G. Lee1; 1University of Minnesota Twin-Cities
Multitasking often causes human error, traffic accidents, and lowered productivity. Yet increasing attention to one task does not always impair another: sometimes it enhances concurrent task processing. In the attentional boost effect (ABE), detecting and responding to attentionally demanding targets yields better memory for background images, relative to baseline or distractor trials. Theoretical accounts have attributed the ABE to a temporal orienting response triggered by the detection of a behaviorally relevant event. But what exactly triggers the ABE? Here we tested the roles of visual template matching, target classification, and response. Participants in Experiments 1 and 2 encoded objects to memory while simultaneously monitoring a stream of letters and digits. They pressed a button for letters and made no response to digits. In different blocks, the target was either a specific letter, allowing visual template matching for its detection, or any of a broader category of eight letters. The ABE was robust in both conditions but significantly stronger in the specific-letter condition. Experiments 3 and 4 examined whether the ABE persisted when participants delayed their response until the trial following the target. When the target was a specific letter, memory enhancement occurred for the objects paired with the target, but not for the next, response-paired object. In contrast, when the target was a broader category of letters, the ABE was abolished for both the target-paired and the response-paired objects. Visual template matching to a specific target may have sharpened the temporal orienting response, yielding a larger or more robust ABE. These findings show that visual template matching, a key component of the biased competition theory of attention, not only increases attention to target stimuli but also broadly facilitates concurrent task processing.
Acknowledgements: This study was supported by University of Minnesota's Distinguished McKnight Fund to VGL