Action for perception: functional significance of eye movements for vision

Friday, May 9, 2008, 3:30 – 5:30 pm, Orchid 1

Organizers: Anna Montagnini (Institut de Neurosciences Cognitives de la Méditerranée) and Miriam Spering (Justus-Liebig University Giessen, Germany)

Presenters: Maria Concetta Morrone (Facoltà di Psicologia, Università Vita-Salute S Raffaele, Milano, Italy), Tirin Moore (Stanford University School of Medicine, USA), Michele Rucci (Boston University), Miriam Spering (Justus-Liebig University Giessen, Germany; New York University), Ziad Hafed (Systems Neurobiology Laboratory, Salk Institute), Wilson S. Geisler (University of Texas, Austin)

Symposium Description

When we view the world around us, our eyes are constantly in motion. Different types of eye movements are used to bring the image of an object of interest onto the fovea, to keep it stable on this high-resolution area of the retina, or to avoid visual fading. Moment by moment, eye movements change the retinal input to the visual system of primates, thereby determining what we see. This critical role of eye movements is now widely acknowledged and is closely related to a research program termed “Active Vision” (Findlay & Gilchrist, 2003).

While eye movements improve vision, they might also come at a cost. Voluntary eye movements can impair the perception of objects, space, and time, and can affect attentional processing. When eye movements are used as a sensitive tool to infer visual and cognitive processing, these constraints have to be taken into account.

The proposed symposium responds to an increasing interest among vision scientists in using eye movements. The aims of the symposium are (i) to review and discuss findings on the perceptual consequences of eye movements, (ii) to introduce new methodological approaches that take these consequences into account, and (iii) to encourage vision scientists to focus on the dynamic interplay between vision and oculomotor behavior.

The symposium spans a wide area of research on visuomotor interaction, and brings to the table junior and senior researchers from different disciplines, studying different types of eye movements and perceptual behaviors. All speakers are at the forefront of research in vision and brain sciences and have made significant contributions to the understanding of the questions at hand, using a variety of methodological approaches.

Concetta Morrone (Università Vita-Salute, Italy) reviews findings on the perisaccadic compression of space and time, and provides a Bayesian model for these perceptual phenomena. Tirin Moore (Stanford University, USA) discusses the neural mechanisms of perisaccadic changes in visual and attentional processing. Michele Rucci (Boston University, USA) argues for an increase in spatial sensitivity due to involuntary miniature eye movements during fixation, which are optimized for the statistics of natural scenes.

Miriam Spering (University of Giessen, Germany) focuses on the relationship between smooth pursuit eye movements and the ability to perceive and predict visual motion. Ziad Hafed (Salk Institute, USA) discusses the effect of eye movements on object perception, pointing out an intriguing role of oculomotor control for visual optimization. Wilson Geisler (University of Texas, USA) uses ideal-observer analysis to model the selection of fixation locations across a visual scene, demonstrating the high degree of efficiency in human visuomotor strategy.

The topic of this symposium is both of general interest and of specific importance. It should attract at least three groups of VSS attendees: those interested in low-level visual perception, those interested in motor behavior, and those using eye movements as a tool. We expect to attract both students seeking an introduction to the topic and faculty looking for up-to-date insights. It will be beneficial for VSS to include a symposium devoted to the dynamic and interactive link between visual perception and oculomotor behavior.

Abstracts

Perception of space and time during saccades: a Bayesian explanation for perisaccadic distortions

Maria Concetta Morrone, Paola Binda and David Burr

During a critical period around the time of saccades, briefly presented stimuli are grossly mislocalized in space and time, and both relative distances and durations appear strongly compressed. We investigated whether the Bayesian hypothesis of optimal sensory fusion could account for some of these mislocalizations, taking advantage of the fact that auditory stimuli are unaffected by saccades. For spatial localization, vision usually dominates over audition during fixation (the “ventriloquist effect”); but during perisaccadic presentations, auditory localization becomes relatively more important, so the mislocalized visual stimulus is seen closer to its veridical position. Both the perceived position of the bimodal stimuli and the time course of spatial localization were well predicted by assuming optimal Bayesian-like combination of visual and auditory signals. For temporal localization, acoustic signals always dominate. However, this dominance does not affect the dynamics of saccadic mislocalization, suggesting that audio-visual capture occurs after saccadic remapping. Our model simulates the time-course data by assuming that position in external space is given by the sum of retinal position and a noisy eye-position signal, obtained by integrating the output of two neural populations, one centered at the current point of gaze and the other at the future point of gaze. Only later is this output signal fused with the auditory signal, demonstrating that some saccadic distortions take place very early in visual analysis. This model not only accounts for the bizarre perceptual phenomena caused by saccades, but also provides a novel vision-based account of perisaccadic remapping of space.
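As an illustration of the fusion rule assumed here, the standard reliability-weighted (maximum-likelihood) combination of a visual position estimate $\hat{x}_V$ and an auditory estimate $\hat{x}_A$, with variances $\sigma_V^2$ and $\sigma_A^2$, can be written as follows (generic notation, not necessarily the authors' own model):

$$\hat{x}_{VA} = w_V\,\hat{x}_V + w_A\,\hat{x}_A, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_A^2}, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_V^2 + 1/\sigma_A^2},$$

with combined variance $\sigma_{VA}^2 = \sigma_V^2 \sigma_A^2 / (\sigma_V^2 + \sigma_A^2)$. On this account, the perisaccadic rise in visual positional uncertainty $\sigma_V$ shifts the weights toward audition, which is why the bimodal percept moves back toward the veridical position.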

Neural mechanisms and correlates of perisaccadic changes in visual perception

Tirin Moore

The changes in visual perception that accompany saccadic eye movements, including shifts of attention and saccadic suppression, are well documented in psychophysical studies. However, the neural basis of these changes is poorly understood. Recent evidence suggests that interactions of oculomotor mechanisms with visual cortical representations may provide a basis for the modulations of visual signals, and of visual perception, that have been described during saccades. I will discuss some recent neurophysiological experiments that address the impact of oculomotor mechanisms, and of saccade preparation, on the filtering of visual signals within cortex. Results from these experiments relate directly to the observed enhancement and suppression of visual perception during saccades.

Fixational eye movements, natural image statistics, and fine spatial vision

Michele Rucci

During visual fixation, small eye movements continually displace the stimulus on the retina. It is known that visual percepts tend to fade when retinal image motion is eliminated in the laboratory. However, it has long been debated whether, during natural viewing, fixational eye movements have other functions besides preventing the visual scene from fading. In this talk, I will summarize a theory of the existence of fixational eye movements that links the physiological instability of visual fixation to the statistics of natural scenes. According to this theory, fixational eye movements contribute to the neural encoding of natural scenes by attenuating input redundancy and emphasizing the elements of the stimulus that cannot be predicted from the statistical properties of natural images. To test some of the predictions of this theory, we developed a new method of retinal image stabilization, which enables selective elimination of the motion of the retinal image during natural intersaccadic fixation. We show that fixational eye movements facilitate the discrimination of high-spatial-frequency patterns masked by low-spatial-frequency noise, as predicted by our theory. These results suggest a contribution of fixational eye movements to the processing of spatial detail, a proposal originally put forward by Hering in 1899.
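A back-of-envelope version of the redundancy-reduction argument (a sketch of the general idea, not the specific model presented in the talk): a small retinal displacement $\delta$ of an image component with spatial frequency $f$ and amplitude $A(f)$ produces a temporal modulation of roughly $2\pi f A(f)\,\delta$, so the temporal power injected by fixational motion scales as $f^2 A^2(f)$. Because natural images have power spectra falling roughly as $A^2(f) \propto 1/f^2$,

$$P_{\mathrm{temporal}}(f) \;\propto\; (2\pi f\,\delta)^2\, A^2(f) \;\propto\; f^2 \cdot \frac{1}{f^2} \;=\; \mathrm{constant},$$

i.e., approximately flat across spatial frequencies. The spatial redundancy of natural scenes is thus attenuated in the temporal domain, relatively emphasizing fine detail, consistent with the reported advantage for high-spatial-frequency patterns masked by low-frequency noise.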

Motion perception and prediction during smooth pursuit eye movements

Miriam Spering, Alexander C. Schütz and Karl R. Gegenfurtner

Smooth pursuit eye movements are slow, voluntary movements of the eyes that serve to hold the retinal image of a moving object close to the fovea. Most research on the interaction of visual perception and oculomotor action has focused on the questions of what visual input drives the eyes best, and what this tells us about visual processing for eye movement control. Here we take a different route and discuss findings on the perceptual consequences of pursuit eye movements. Our recent research has particularly focused on the interaction between pursuit eye movements and motion sensitivity in different tasks and visual contexts. (i) We report findings from a situation that particularly requires the dissociation of retinal image motion due to eye movements from retinal object motion: a moving object has to be tracked across a dynamically changing moving visual context, and object motion has to be estimated. (ii) The ability to predict the trajectory of a briefly presented moving object is compared during pursuit and fixation for different target presentation durations. (iii) We compare the sensitivity to motion perturbations in the peripheral visual context during pursuit and fixation. Results imply that the perceptual consequences of pursuit are optimally adapted to contextual requirements.

Looking at visual objects

Ziad Hafed

Much of our understanding about the brain mechanisms for controlling how and where we look derives from minimalist behavioral tasks relying on simple spots of light as the potential targets. However, visual targets in natural settings are rarely individual, point-like sources of light. Instead, they are typically larger visual objects that may or may not contain explicit features to look at. In this presentation, I will argue that using more complex, and arguably more “natural”, visual stimuli than are common in oculomotor research is important for learning the extent to which eye movements can serve visual perception. I will provide an example of this by describing a behavioral phenomenon in which the visual system consistently fails to interpret a retinal stimulus as containing coherent objects when this stimulus is not accompanied by an ongoing eye movement. I will then shed light on an important node in the brain circuitry involved in the process of looking at visual objects. Specifically, I will show that the superior colliculus (SC), best known for its motor control of saccades, provides a neural “pointer” to the location of a visual object, independent of the object's individual features and distinct from the motor commands associated with this brain structure. Such a pointer allows the oculomotor system to precisely direct gaze even in the face of large, extended objects. More importantly, because the SC also provides ascending signals to sensory areas, such a pointer may also be involved in modulating object-based attention and perception.

Mechanisms of fixation selection evaluated using ideal observer analysis

Wilson S. Geisler

The primate visual system combines a wide field of view with a high resolution fovea and uses saccadic eye movements to direct the fovea at potentially relevant locations in visual scenes. This is a sensible design for a visual system with limited neural resources. However, to be effective this design requires sophisticated task-dependent mechanisms for selecting fixation locations. I will argue that in studying the brain mechanisms that control saccadic eye movements in specific tasks, it can be very useful to consider how fixations would be selected by an ideal observer. Such an ideal-observer analysis provides: (i) insight into the information processing demands of the task, (ii) a benchmark against which to evaluate the actual eye movements of the organism, (iii) a starting point for formulating hypotheses about the underlying brain mechanisms, and (iv) a benchmark against which to evaluate the efficiency of hypothesized brain mechanisms. In making the case, I will describe recent examples from our lab concerning naturalistic visual-search tasks and scene-encoding tasks.
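As a schematic illustration (generic notation, and a sketch of the general approach rather than the exact model discussed): an ideal searcher that, after $T$ fixations, holds a posterior $p_i(T)$ over candidate target locations $i$, and whose target detectability falls off with retinal eccentricity, would choose its next fixation $k$ to maximize the expected probability of a correct localization:

$$k_{T+1} \;=\; \arg\max_{k} \; \sum_{i} p_i(T)\, P\big(\mathrm{correct} \mid \mathrm{target\ at\ } i,\ \mathrm{fixation\ at\ } k\big).$$

Human fixation sequences can then be compared against this benchmark to obtain the efficiency estimates referred to above.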