Visual remapping: From behavior to neurons through computation

Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT & Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany
Presenters: Julie Golomb, Patrick Cavanagh, James Bisley, James Mazer, Fred Hamker


Symposium Description

Active vision in both humans and non-human primates depends on saccadic eye movements to accurately direct the foveal portion of the retina towards salient visual scene features. Saccades, in concert with visual attention, can facilitate efficient allocation of limited neural and computational resources in the brain during visually guided behaviors. Saccades, however, are not without consequences: they dramatically alter the spatial distribution of activity in the retina several times per second, which can lead to large changes in the cortical scene representation even when the scene itself is static. Behaviors that depend on accurate visuomotor coordination and stable sensory (and attentional) representations in the brain, like reaching and grasping, must somehow compensate for the apparent scene changes caused by eye movements. Recent psychophysical, neurophysiological and modeling results have shed new light on the neural substrates of this compensatory process. Visual “remapping” has been identified as a putative mechanism for stabilizing visual and attentional representations across saccades. At the neuronal level, remapping occurs when neuronal receptive fields shift in anticipation of a saccade, as originally described in the lateral intraparietal area of the monkey (Duhamel et al., 1992). It has been suggested that remapping facilitates perceptual stability by bridging pre- and post-saccadic visual and attentional representations in the brain.

In this symposium we will address the functional role of remapping and the specific relationship between neurophysiological remapping (a single-neuron phenomenon) and psychophysically characterized perisaccadic changes in visual perception and attentional facilitation. We propose to consider computational modeling as a potential bridge between these complementary lines of research. The goal of this symposium is to clarify our current understanding of physiological remapping as it occurs in different interconnected brain regions in the monkey (V4, LIP and FEF) and to address how remapping at the neuronal level can account for observed perisaccadic changes in visual perception and attentional state.

Symposium participants have been drawn from three different, yet complementary, disciplines: psychophysics, neurophysiology and computational modeling. Their approaches have provided novel insights into remapping at the phenomenological, functional and mechanistic levels. Remapping is currently a major area of research in all three disciplines and, while several common themes are developing, there remains substantial debate about the degree to which remapping can account for various psychophysical phenomena. We propose that bringing together key researchers using different approaches to discuss the implications of currently available data and models will both advance our understanding of remapping and be of broad interest to VSS members (both students and faculty) across disciplines.

Presentations

Remapping of object features: Implications of the two-stage theory of spatial remapping

Speaker: Julie Golomb, The Ohio State University, Columbus, OH

When we need to maintain spatial information across an eye movement, it is an object’s location in the world, not its location on our retinas, that is generally relevant for behavior. A number of studies have demonstrated that neurons can rapidly remap visual information, sometimes even in anticipation of an eye movement, to preserve spatial stability. However, it has also been demonstrated that for a period of time after each eye movement, a “retinotopic attentional trace” still lingers at the previous retinotopic location, suggesting that remapping actually manifests in two overlapping stages and may not be as fast or efficient as previously thought. If spatial attention is remapped imperfectly, what does this mean for feature and object perception? We have recently demonstrated that around the time of an eye movement, feature perception is distorted in striking ways, such that features from two different locations may be simultaneously bound to the same object, resulting in feature-mixing errors. We have also revealed that another behavioral signature of object-location binding, the “spatial congruency bias”, is tied to retinotopic coordinates after a saccade. These results suggest that object-location binding may need to be re-established following each eye movement rather than being automatically remapped. Recent efforts from the lab are focused on linking these perceptual signatures of remapping with model-based neuroimaging, using fMRI multivoxel pattern analyses, inverted encoding models, and EEG steady-state visual evoked potentials to dynamically track both spatial and feature remapping across saccades.
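
For readers unfamiliar with the inverted-encoding-model (IEM) approach mentioned above, the sketch below illustrates the generic two-step recipe (fit channel-to-voxel weights, then invert them to reconstruct channel responses). The basis function, array shapes, and simulated data are illustrative assumptions only, not the lab’s actual analysis pipeline:

```python
import numpy as np

def channel_basis(features, n_channels=8):
    """Half-rectified, raised sinusoidal tuning functions tiling a
    circular feature space (e.g., position or color) -- a common IEM choice."""
    centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
    return np.maximum(np.cos(features[:, None] - centers[None, :]), 0) ** 6

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
features = rng.uniform(0, 2 * np.pi, n_trials)      # per-trial stimulus feature
C = channel_basis(features)                          # trials x channels
W_true = rng.normal(size=(n_voxels, C.shape[1]))     # simulated voxel weights
B = C @ W_true.T + 0.5 * rng.standard_normal((n_trials, n_voxels))  # trials x voxels

# Step 1 (encoding): estimate channel->voxel weights by least squares.
W_hat = np.linalg.lstsq(C, B, rcond=None)[0].T       # voxels x channels

# Step 2 (inversion): reconstruct channel responses from activity patterns.
# (For brevity we invert the training data; real analyses use held-out data.
# Pre- vs post-saccade patterns would be inverted separately, so shifts in
# the reconstructed channel profile can track remapping over time.)
C_hat = np.linalg.lstsq(W_hat, B.T, rcond=None)[0].T  # trials x channels
print(np.corrcoef(C_hat.ravel(), C.ravel())[0, 1])    # recovery check
```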

Predicting the present: saccade-based vs. motion-based remapping

Speaker: Patrick Cavanagh, Glendon College, Toronto, ON and Dartmouth College, Hanover, NH

Predictive remapping alerts a neuron when a target will fall into its receptive field after an upcoming saccade. This has consequences for attention, which starts selecting information from the target’s remapped location before the eye movement begins, even though that location is not relevant to pre-saccadic processing. Thresholds are lower, and information from the target’s remapped and current locations may be integrated. These predictive effects for eye movements are mirrored by predictive effects for object motion in the absence of saccades: motion-based remapping. An object’s motion is used to predict its current location and, as a result, we sometimes see a target far from its actual location: we see it where it should be now. However, these predictions operate differently for eye movements and for perception, establishing two distinct representations of spatial coordinates. We have begun identifying the cortical areas that carry these predictive position representations and how they may interface with memory and navigation.

How predictive remapping in LIP (but not FEF) might explain the illusion of perceptual stability

Speaker: James Bisley, Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, California

Neurophysiological studies of remapping have tended to examine the latency of responses to stimuli presented around a single saccade. Using a visual foraging task, in which animals make multiple eye movements within a trial, we have examined predictive remapping in the lateral intraparietal area (LIP) and the frontal eye field (FEF), focusing on when activity differentiates between stimuli that are brought onto the response field. We have found that activity in LIP, but not FEF, rapidly shifts from a pre-saccadic representation to a post-saccadic representation during the period of saccadic suppression. We hypothesize that this sudden switch keeps the attentional priority of high-priority locations stable across saccades and, thus, could create the illusion of perceptual stability.

Predictive attentional remapping in area V4 neurons

Speaker: James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT

Although saccades change the distribution of neural activity throughout the visual system, visual perception and spatial attention are relatively unaffected by saccades. Studies of human observers have suggested that attentional topography in the brain is stabilized across saccades by an active process that redirects attentional facilitation to the appropriate neurons in retinotopic visual cortex. To characterize the specific neuronal mechanisms underlying this retargeting process, we trained two monkeys to perform a novel behavioral task that required them to sustain attention while making guided saccades. Behavioral performance data indicate that monkeys, like humans, can sustain spatiotopic attention across saccades. Data recorded from neurons in extrastriate area V4 during task performance were used to assess perisaccadic attentional dynamics. Specifically, we asked when attentional facilitation turns on or off relative to saccades and how attentional modulation changes depending on whether a saccade brings a neuron’s receptive field (RF) into or out of the attended region. Our results indicate that for a substantial fraction of V4 neurons, attentional state changes begin ~100 ms before saccade onset, consistent with the timing of predictive attentional shifts measured psychophysically in human observers. In addition, although we found little evidence of classical, LIP-style spatial remapping in V4, there was a small anticipatory shift or skew of the RF, detectable at the population level, in the 100 ms immediately preceding saccades; it remains unclear whether this effect corresponds to a shift towards the saccade endpoint or a shift parallel to the saccade vector.
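
The timing question posed above — when attentional facilitation changes relative to saccade onset — amounts to comparing saccade-aligned firing rates across attention conditions. The following is a minimal sketch of that kind of analysis; the simulated spike trains, window, bin size, and threshold are hypothetical choices made only to keep the example self-contained, not the study’s actual methods:

```python
import numpy as np

def saccade_aligned_rate(spike_times, saccade_times, window=(-0.3, 0.3), bin_s=0.01):
    """Average firing rate around saccade onset: histogram spike times
    relative to each saccade, average across saccades, convert to spikes/s."""
    edges = np.arange(window[0], window[1] + 1e-9, bin_s)
    counts = np.zeros(len(edges) - 1)
    for t0 in saccade_times:
        counts += np.histogram(spike_times - t0, bins=edges)[0]
    return edges[:-1] + bin_s / 2, counts / len(saccade_times) / bin_s

rng = np.random.default_rng(1)
saccades = np.arange(1.0, 100.0)  # simulated saccade onsets, one per second

def simulate_spikes(rate_fn, t_max=100.5, dt=0.001):
    """Poisson spike train via Bernoulli thinning on a fine time grid."""
    t = np.arange(0.0, t_max, dt)
    return t[rng.random(t.size) < rate_fn(t) * dt]

# "Attention-in" cell: 20 sp/s baseline, doubling from 100 ms before each saccade.
attn_rate = lambda t: 20.0 + 20.0 * (((t + 0.1) % 1.0) < 0.1)
spk_attn = simulate_spikes(attn_rate)
spk_ctrl = simulate_spikes(lambda t: np.full_like(t, 20.0))

t, r_attn = saccade_aligned_rate(spk_attn, saccades)
_, r_ctrl = saccade_aligned_rate(spk_ctrl, saccades)

# Time-resolved attentional modulation index; the time at which it diverges
# from ~0 estimates when the attentional state change begins.
ami = (r_attn - r_ctrl) / (r_attn + r_ctrl)
print(ami[(t >= -0.1) & (t < 0)].mean(),  # ~0.33 in the 100 ms pre-saccade
      ami[t < -0.2].mean())               # ~0 at baseline
```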

Neuro-computational models of spatial updating

Speaker: Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany

I review neuro-computational models of peri-saccadic spatial perception that provide insight into the neural mechanisms of spatial updating around eye movements. Most of the experimental observations can be explained by only two different models: one involves spatial attention directed towards the saccade target, and the other relies on predictive remapping and gain fields for coordinate transformation. The latter model uses two eye-related signals: a predictive corollary discharge and an eye-position signal that updates after the saccade. While spatial attention is mainly responsible for peri-saccadic compression, predictive remapping (in LIP) and gain fields for coordinate transformation can account for the shift of briefly flashed bars in total darkness and for the increased threshold in peri-saccadic displacement detection. With respect to the updating of sustained spatial attention, two different types of updating were recently discovered: one study shows that attention lingers after the saccade at the (irrelevant) retinotopic position, while another shows that, shortly before saccade onset, spatial attention is remapped to a position opposite to the saccade direction. I show new results demonstrating that these two observations are not contradictory but emerge together from the model dynamics. The lingering of attention is explained by the (late-updating) eye-position signal, which establishes an attention pointer in an eye-centered reference frame; this reference frame shifts with the saccade and updates attention to the initial position only after the saccade. The remapping of attention opposite to the saccade direction is explained by the corollary discharge signal, which establishes a transient eye-centered reference frame that anticipates the saccade and thus updates attention prior to saccade onset.
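
As a toy illustration of the gain-field mechanism discussed above, the sketch below implements a 1-D population with Gaussian retinal tuning multiplicatively modulated by an eye-position gain, and shows how updating the eye-related signal ahead of the retinal image (as a predictive corollary discharge would) shifts the decoded head-centered location. All parameters and the decoding scheme are simplified assumptions, not the published model:

```python
import numpy as np

# 1-D toy model: retinal and eye positions in degrees.
retinal_prefs = np.linspace(-40, 40, 81)   # preferred retinal positions
eye_prefs     = np.linspace(-20, 20, 41)   # preferred eye positions (gain fields)

def population_response(stim_retinal, eye_signal, sigma=5.0):
    """Gain-field population: Gaussian retinal tuning multiplicatively
    modulated by a Gaussian eye-position gain."""
    vis  = np.exp(-(retinal_prefs[:, None] - stim_retinal) ** 2 / (2 * sigma ** 2))
    gain = np.exp(-(eye_prefs[None, :] - eye_signal) ** 2 / (2 * sigma ** 2))
    return vis * gain   # shape: (retinal units, eye-position units)

def decode_head_centered(resp):
    """Head-centered estimate = retinal preference + eye-position preference,
    averaged over the population weighted by activity."""
    head = retinal_prefs[:, None] + eye_prefs[None, :]
    return (resp * head).sum() / resp.sum()

stim_head = 10.0                 # stimulus at +10 deg, head-centered
eye_now, eye_after = -5.0, 5.0   # pre- and post-saccadic eye position

# Before the saccade: retinal position = head-centered minus eye position.
r_pre = population_response(stim_head - eye_now, eye_now)
print(decode_head_centered(r_pre))    # ~10: correct head-centered location

# If the eye-related signal updates to the predicted post-saccadic position
# before the retinal image shifts (a corollary discharge), the head-centered
# readout transiently mislocalizes a flash -- the kind of peri-saccadic
# shift such models are used to explain.
r_peri = population_response(stim_head - eye_now, eye_after)
print(decode_head_centered(r_peri))   # shifted estimate (~20)
```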
