The interplay of visual memory and high-level vision

Wednesday, June 1, 2022, 12:00 – 2:00 pm EDT, Zoom Session

Organizer: Sharon Gilaie-Dotan1,2; 1Bar Ilan University, 2UCL
Presenters: Timothy F Brady, Noa Ofen, Sharon Gilaie-Dotan, Yoni Pertzov, Galit Yovel, Meike Ramon


While different studies show phenomenal human long-term visual memory capacity, there are also indications that visual long-term memory is influenced by factors such as familiarity, depth of processing, and visual category. In addition, individual differences play a role, as is especially evident in individuals with exceptional visual long-term memory for certain visual categories (e.g., super-recognizers for faces), while others may have very weak visual memory for these categories (e.g., prosopagnosia for faces). Furthermore, visual perception has long been regarded as a rather lower-level process preceding the more cognitive, high-level visual long-term memory processes, and little attention has been given to the possible influences of memory on perception and to the interplay between perception, visual representations, memory, and behavior. In this symposium we will examine, through a series of talks spanning different methods, populations, and perspectives, the different influences on visual memory for different visual categories, culminating with the unique category of faces. Importantly, support for a bi-directional interplay between perception and memory will be presented, relating to mechanisms, behavior, and development. Tim Brady will open, describing perceptual constraints on visual long-term memory and the role of interference in limiting long-term memory for visual objects. He will also propose an interplay between perception and concepts in long-term memory. Noa Ofen will follow, describing age-dependent changes in the neural correlates of visual memory for scenes developing from childhood to adolescence, which involve not only MTL and PFC but also visual cortex. Sharon Gilaie-Dotan will describe how, during naturalistic encoding without task-related modulations, physical image properties and visual categories influence image memory. Yoni Pertzov will describe, based on eye-movement investigations, how face and object image familiarity (i.e., image memory during the perceptual process) influences visual exploration, and how this may allow detection of concealed memories. Galit Yovel will describe how conceptual and social information contributes to face memory such that faces are learnt from concepts to percepts, and how this relates to face representations. Lastly, Meike Ramon will describe how exceptional individual abilities influence face memory (as in so-called super-recognizers), based on her deep-data approach to these individuals, and will propose that their superior face memory is associated with consistency of identity-based representations rather than viewpoint-based (image/perception-based) representations. These talks will be followed by a panel discussion, in which we will consider present and future challenges and directions.

Presentations

What limits visual long-term memory?

Timothy F Brady1; 1University of California San Diego

In visual working memory and visual attention, processing and/or remembering one item largely comes at the expense of all the other items (e.g., there is a 'resource' limit). By contrast, when we encode a new object in long-term memory, it does not seem to come directly at the expense of the other items in memory: remembering the clothes your child is wearing today does not automatically squeeze out your memory of the food on your child's breakfast plate. Yet at the same time, we do not perfectly, or even partially, remember the visual details of everything we encounter, or even everything we actively attend to and encode into long-term memory. So what determines which items survive in visual long-term memory, and how precisely their visual features are remembered? In this talk, I'll discuss several experiments detailing both perceptual constraints on visual long-term memory storage for visual features and the role of interference in limiting long-term memory for visual objects. I'll suggest there is a rich interplay between perception and concepts in visual long-term memory, and that long-term memory for visual objects can therefore be a helpful case study for vision scientists in understanding the structure of visual representations in general.

The neural correlates of the development of visual memory for scenes

Noa Ofen1; 1Wayne State University

Episodic memory – the ability to encode, maintain, and retrieve information – is critical for everyday functioning at all ages, yet little is known about the development of episodic memory systems and their brain substrates. In this talk, I will present data from a series of studies in which we investigate how functional brain development underlies the development of memory for visual scenes throughout childhood and adolescence. Using functional neuroimaging methods, including functional MRI and intracranial EEG, we identified age differences in information flow between the medial temporal lobes and the prefrontal cortex that support the formation of memory for scenes. Critically, we also identified activity in visual regions, including the occipital cortex, that plays a critical role in memory formation and shows complex patterns of age differences. The investigation of the neural basis of memory development has been fueled by recent advances in neuroimaging methodologies. Progress towards a mechanistic understanding of memory development hinges on the specification of the representations and functional operations that underlie the behavioral phenomena we wish to explain. Leveraging the rich understanding of visual representations offers a unique opportunity to make significant progress to that end.

Influences of physical image properties on image memory during naturalistic encoding

Sharon Gilaie-Dotan1,2; 1Bar Ilan University, 2UCL

We are constantly exposed to multiple visual scenes, and while we freely view them without an intentional effort to memorize or encode them, only some are remembered. Visual memory is assumed to rely on high-level visual perception that shows a level of cue-invariance, and is therefore not assumed to be highly dependent on physical image cues such as size or contrast. However, this is typically investigated when people are instructed to perform a task (e.g., remember or make some judgement about the images), which may modulate processing at multiple levels and thus may not generalize to naturalistic visual behavior. Here I will describe a set of studies in which participants (n>200) freely viewed images of different sizes or of different levels of contrast while unaware of any memory-related task that would follow. We reasoned that during naturalistic vision, free of task-related modulations, stronger physical image cues (e.g., bigger or higher-contrast images) lead to a higher signal-to-noise ratio from retina to cortex and would therefore be better remembered. Indeed, we found that physical image cues such as size and contrast influence memory, such that bigger and higher-contrast images are better remembered. While multiple factors affect image memory, our results suggest that low- to high-level processes may all contribute to image memory.

How visual memory influences visual exploration

Yoni Pertzov1; 1The Hebrew University of Jerusalem

Due to the inhomogeneity of the photoreceptor distribution on the retina, we move our gaze approximately three times a second to gather fine-detailed information from the surroundings. I will present a series of studies that examined how this dynamic visual exploration process is affected by visual memories. Participants initially look more at familiar items and avoid them later on. These effects are robust across stimulus type (e.g., faces and other objects) and familiarity type (personally familiar and recently learned). The effects on visual exploration are evident even when participants are explicitly instructed to suppress them. Thus, eye tracking could be used for the detection of concealed memories in forensic scenarios.

Percepts and concepts in face recognition

Galit Yovel1; 1Tel Aviv University

Current models of face recognition are primarily concerned with the role of perceptual experience and the nature of the perceptual representation that enables face identification. These models overlook the main goal of the face recognition system, which is to recognize socially relevant faces. We therefore propose a new account of face recognition according to which faces are learned from concepts to percepts. This account highlights the critical contribution of the conceptual and social information that is associated with faces to face recognition. Our recent studies show that conceptual/social information contributes to face recognition in two ways. First, faces that are learned in a social context are better recognized than faces that are learned based on their perceptual appearance. These findings indicate the importance of converting faces from a perceptual to a social representation for face recognition. Second, we found that conceptual information significantly accounts for the visual representation of faces in memory, but not in perception. This was the case both for human perceptual and conceptual similarity ratings and for the representations generated by unimodal deep neural networks, which represent faces based on visual information alone, and by multimodal networks, which represent both visual and conceptual information about faces. Taken together, we propose that the representation that is generated for faces by the perceptual and memory systems is determined by social/conceptual factors, rather than by our passive perceptual experience with faces per se.

Consistency – a novel account for individual differences in visual cognition

Meike Ramon1; 1University of Fribourg

Visual cognition refers to the processing of retinally available information and its integration with prior knowledge to generate representations. Traditionally, perception and memory have been considered as isolated, albeit related, cognitive processes. Much of vision research has investigated how input characteristics relate to overt behavior, and hence determine observed cognitive proficiency in either perception or memory. Comparatively little focus, however, has been devoted to understanding the contribution of observer-related aspects. Studies of acquired expertise have documented systematic changes in brain connectivity and exceptional memory feats through extensive training (Dresler et al., 2017). The mechanisms underlying naturally occurring cognitive superiority, however, are much less understood. In this talk I will synthesize findings from studies of neurotypical observers with a specific type of cognitive superiority — that observed for face identity processing. Focusing on a growing group of these so-called "Super-Recognizers" (Russell et al., 2009), identified with the same diagnostic framework (Ramon, 2021), my lab has taken a deep-data approach to provide detailed case descriptions of these unique individuals using a range of paradigms and methods. Intriguingly, their superior abilities cannot be accounted for in terms of enhanced processing of certain stimulus dimensions, such as information content or stimulus memorability (Nador et al., 2021a,b). Rather, our work convergingly points to behavioral consistency as their common attribute.
