Active Perception: The synergy between perception and action

Time/Room: Friday, May 10, 1:00 – 3:00 pm, Royal 6-8
Organizers: Michele Rucci & Eli Brenner, Boston University & VU University
Presenters: Eli Brenner, John Wann, Heiner Deubel, Michele Rucci, Ronen Segev, Yves Frégnac


Symposium Description

Visual perception is often studied in a passive manner. The stimulus on the display is typically regarded as the input to the visual system, and the results of experiments are frequently interpreted without consideration of the observer's motor activity. In fact, movements of the eyes, head, or body are often treated as a nuisance in vision research, and care is often taken to minimize them by properly constraining the observer. Like many other species, however, humans are not passively exposed to the incoming flow of sensory data. Instead, they actively seek useful information by coordinating sensory processing with motor activity. Motor behavior is a key component of sensory perception, as it enables control of sensory signals in ways that simplify perceptual tasks.

The goal of this symposium is to make VSS attendees aware of recent advances in the field of active vision. Non-specialists often associate active vision with the study of how vision controls behavior. To counterbalance this view, the present symposium will instead focus on closing the loop between perception and action: we will examine both the information that emerges in an active observer and how this information is used to guide behavior. To emphasize that behavior is a fundamental component of visual perception, the symposium will address the functional consequences of a moving agent from multiple perspectives. We will cover the perceptual impact of very different types of behavior, from locomotion to microscopic eye movements. We will discuss the multimodal sources of information that emerge, and need to be combined, during motor activity. Furthermore, we will look at the implications of active vision at multiple levels, from general computational strategies to the specific impact of eye-movement modulations on neurons in the visual cortex. Speakers were selected for their expertise in complementary areas, their use of a variety of techniques, and their focus on different levels of analysis, so as to provide a well-rounded overview of the field. We believe that this symposium will be of interest to all VSS participants, both students and faculty. It will make clear (to students in particular) that motor activity should not be regarded as an experimental nuisance, but as a critical source of information in everyday life.

The symposium will start with a general introduction to the topic and the discussion of a specific example of a closed sensory-motor loop, the interception of moving objects (Eli Brenner). It will continue with the visual information that emerges during locomotion and its use in avoiding collisions (John Wann). We will then examine the dynamic strategy by which attention is redirected during grasping (Heiner Deubel), and how even microscopic "involuntary" eye movements are actually part of a closed sensory-motor loop (Michele Rucci). The last two speakers will address how the different types of visual information emerging in an active observer are encoded in the retina (Ronen Segev) and in the cortex (Yves Frégnac).

Presentations

Introduction to active vision: the complexities of continuous visual control

Speaker: Eli Brenner, Human Movement Sciences, VU University
Author: Jeroen Smeets, Human Movement Sciences, VU University

Perception is often studied in terms of image processing: an image falls on the retina and is processed by the eye and brain in order to retrieve whatever information one is interested in. Of course the eye and brain analyse the images that fall on the retina, but it is becoming ever more evident that vision is an active process. Images do not just appear on the retina: we actively move our eyes and the rest of our body, presumably to ensure that we constantly have the best possible information at our disposal for the task at hand. We do this despite the complications that moving sometimes creates for extracting the relevant information from the images. I will introduce some of the complications and benefits that arise from such active vision on the basis of research on the role of pursuing an object with one's eyes when trying to intercept it. People are quite flexible in terms of where they look when performing an interception task, but where they look affects their precision. This is not only due to the inhomogeneity of the retina, but also to the fact that neuromuscular delays affect the combination of information from different sensory modalities. The latter can be overcome by relying as much as possible on retinal information (such as optic flow), yet there are conditions in which people instead rely on combinations of retinal and extra-retinal information (efferent and afferent information about their own actions).
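
To make the delay problem concrete, the following minimal Python sketch (with assumed numbers, not values from the talk) shows how a fixed sensorimotor delay makes an interceptive response aim behind a moving target, and how extrapolating the delayed position with retinal velocity can compensate:

    # Why a sensorimotor delay matters when intercepting a moving target,
    # and how extrapolating from (retinal) velocity compensates.
    # All numbers are illustrative assumptions.

    DELAY = 0.1          # sensorimotor delay in seconds (assumed)
    TARGET_SPEED = 0.5   # target speed in metres per second (assumed)

    def target_position(t):
        """True target position at time t (constant-velocity motion)."""
        return TARGET_SPEED * t

    def naive_estimate(t):
        """Aim at where the target was seen: lags by DELAY * speed."""
        return target_position(t - DELAY)

    def extrapolated_estimate(t):
        """Extrapolate the delayed sample using retinal velocity."""
        seen_pos = target_position(t - DELAY)
        seen_vel = TARGET_SPEED  # velocity is also available retinally
        return seen_pos + seen_vel * DELAY

    t_contact = 1.0
    print(f"true position:   {target_position(t_contact):.3f} m")
    print(f"naive (delayed): {naive_estimate(t_contact):.3f} m")  # 5 cm behind
    print(f"extrapolated:    {extrapolated_estimate(t_contact):.3f} m")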

Why it’s good to look where you are going

Speaker: John Wann, Dept of Psychology, Royal Holloway University of London

The control of direction and the avoidance of collision are fundamental to effective locomotion. A strong body of research has explored the use of optic flow and/or eye-movement signals in judging heading. This presentation will outline research on active steering that also explores the use of optic flow and eye-movement signals, but in which a key aspect of effective control is where, and when, you look. The talk will also briefly outline fMRI studies that highlight the neural systems supporting the control model proposed from the behavioural research. Although this model is based on principles derived from optical geometry, it conveniently converges on the heuristics used in advanced driver/motorcyclist training, and in elite cycling, for negotiating bends at speed. Research supported by the UK EPSRC, the UK ESRC, and the EU FP7 Marie Curie programme.
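
As a purely illustrative companion to this abstract, here is a minimal Python sketch of one simple member of the family of gaze-based steering heuristics discussed in this literature (steer so that a fixated point on the desired path drifts toward straight ahead). The gains and geometry are assumptions, and this is not the specific control model of the talk:

    import math

    # Illustrative gains and geometry (assumptions, not the talk's model).
    K_ANGLE, K_RATE = 4.0, 0.5   # proportional and damping gains
    DT = 0.05                    # time step (s)

    x, y, heading = 0.0, 0.0, 0.0   # observer position (m) and heading (rad)
    goal_x, goal_y = 5.0, 10.0      # fixated point on the desired path (m)
    speed = 2.0                     # forward speed (m/s)
    prev_angle = None

    for step in range(400):
        # Visual angle of the fixated point relative to the current heading.
        angle = math.atan2(goal_x - x, goal_y - y) - heading
        rate = 0.0 if prev_angle is None else (angle - prev_angle) / DT
        prev_angle = angle

        # Steering: rotate so the fixated point drifts toward straight ahead.
        heading += (K_ANGLE * angle + K_RATE * rate) * DT

        # Advance along the current heading.
        x += speed * math.sin(heading) * DT
        y += speed * math.cos(heading) * DT
        if math.hypot(goal_x - x, goal_y - y) < 0.2:
            break

    print(f"reached ({x:.2f}, {y:.2f}) after {step + 1} steps")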

Motor selection and visual attention in manual pointing and grasping

Speaker: Heiner Deubel, Department Psychologie, Ludwig-Maximilians-Universität München, Germany
Authors: Rene Gilster, Department Psychologie, Ludwig-Maximilians-Universität München, Germany; Constanze Hesse, School of Psychology, University of Aberdeen, United Kingdom

It is now well established that goal-directed movements are preceded by covert shifts of visual attention to the movement target. I will first review recent evidence in favour of this claim for manual reaching movements, demonstrating that the planning of some of these actions establishes multiple foci of attention which reflect the spatio-temporal requirements of the intended motor task. Recently our studies have focused on how finger contact points are chosen in grasp planning and how this selection is related to the spatial deployment of attention. Subjects grasped cylindrical objects with thumb and index finger. A perceptual discrimination task was used to assess the distribution of visual attention prior to the execution of the grasp. Results showed enhanced discrimination at those locations where index finger and thumb would touch the object, as compared to action-irrelevant locations. A same-different task was used to establish that attention was deployed in parallel to the grasp-relevant locations. Interestingly, while attention seemed to split between the action-relevant locations, the eyes tended to fixate the centre of the to-be-grasped object, reflecting a dissociation between overt and covert attention. A separate study demonstrated that a secondary, attention-demanding task affected the kinematics of the grasp, slowing the adjustment of hand aperture to object size. Our results highlight the important role of attention in grasp planning as well. The findings are consistent with the conjecture that the planning of complex movements involves the formation of a flexible "attentional landscape" which tags all those locations in the visual layout that are relevant for the impending action.
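
As a toy illustration only (not the authors' model), the "attentional landscape" idea can be sketched in Python as a map with peaks at the action-relevant locations; discrimination performance at a probe location would track the height of the map there. All coordinates and the bump width are hypothetical:

    import numpy as np

    # Toy "attentional landscape": Gaussian bumps at the two grasp contact
    # points; probe discrimination performance would track the map height.
    def landscape(probe, contact_points, width=0.5):
        probe = np.asarray(probe, dtype=float)
        return sum(np.exp(-np.sum((probe - np.asarray(c)) ** 2)
                          / (2 * width ** 2))
                   for c in contact_points)

    # Assumed thumb/index contact points on a cylinder seen from above (cm).
    contacts = [(-2.0, 0.0), (2.0, 0.0)]
    print(f"attention at a contact point:   {landscape((2.0, 0.0), contacts):.2f}")
    print(f"attention at an irrelevant spot: {landscape((0.0, 2.0), contacts):.2f}")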

The function of microsaccades in fine spatial vision

Speaker: Michele Rucci, Boston University

The visual functions of microsaccades, the microscopic saccades that humans perform while attempting to maintain fixation, have long been debated. The traditional proposal that microsaccades prevent perceptual fading has been criticized on multiple grounds. We have recently shown that, during execution of a high-acuity task, microsaccades move the gaze to nearby regions of interest according to the ongoing demands of the task (Ko et al., Nature Neurosci. 2010). That is, microsaccades are used to examine a narrow region of space in the same way larger saccades normally enable exploration of a visual scene. Given that microsaccades keep the stimulus within the fovea, what is the function of these small gaze relocations? By using new gaze-contingent display procedures, we were able to selectively stimulate retinal regions at specific eccentricities within the fovea. We show that, contrary to common assumptions, vision is not uniform within the fovea: a stimulus displacement from the center of gaze of only 10 arcmin already causes a significant reduction in performance in a high-acuity task. We also show that precisely directed microsaccades compensate for this lack of homogeneity, giving the false impression of uniform foveal vision in experiments that lack control of retinal stimulation. Finally, we show that the perceptual improvement given by microsaccades in high-acuity tasks results from accurately positioning the preferred retinal locus in space rather than from the temporal transients microsaccades generate. These results demonstrate that vision and motor behavior operate in a closed loop even during visual fixation.
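
The logic of such a gaze-contingent procedure can be sketched in a few lines of Python. Here `read_gaze` and `draw_stimulus` are hypothetical stand-ins for an eye-tracker and display API, and the calibration value is assumed; the point is only that re-drawing the stimulus at a fixed offset from gaze pins it to a chosen retinal eccentricity:

    # On every frame, the stimulus is re-drawn at a fixed offset from the
    # current gaze position, so it lands at a chosen retinal eccentricity
    # regardless of (micro)saccades, as long as the update latency is low.

    ECCENTRICITY_ARCMIN = 10.0   # eccentricity mentioned in the abstract
    PIXELS_PER_ARCMIN = 2.0      # assumed display calibration

    def gaze_contingent_frame(read_gaze, draw_stimulus):
        gaze_x, gaze_y = read_gaze()                      # gaze in pixels
        offset = ECCENTRICITY_ARCMIN * PIXELS_PER_ARCMIN  # offset in pixels
        draw_stimulus(gaze_x + offset, gaze_y)            # right of gaze

    # Example with dummy tracker/display functions:
    gaze_contingent_frame(
        lambda: (960.0, 540.0),
        lambda x, y: print(f"stimulus at ({x:.0f}, {y:.0f}) px"))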

Decorrelation of retinal response to natural scenes by fixational eye movements

Speaker: Ronen Segev, Ben Gurion University of the Negev, Department of Life Sciences and Zlotowski Center for Neuroscience

Fixational eye movements are critical for vision: without them the retina adapts quickly to a stationary image and visual perception fades away in a matter of seconds. Still, the connection between fixational eye movements and retinal encoding is not fully understood. To address this issue, it has been suggested theoretically that fixational eye movements are required to reduce the spatial correlations that are typical of natural scenes. The goal of our study was to put this theoretical prediction to experimental test. Using a multi-electrode array, we measured the response of the tiger salamander retina to movies simulating two types of stimuli: fixational eye movements over a natural scene, and a flash followed by a static view of a natural scene. We then calculated the cross-correlation of the responses of the ganglion cells as a function of receptive-field distance. We found that when static natural images are projected, strong spatial correlations are present in the neural response, owing to the correlations in the natural scene. In the presence of fixational eye movements, however, the level of correlation in the neural response drops much faster as a function of distance, which results in effective decorrelation of the channels streaming information to the brain. This observation confirms the prediction that fixational eye movements act to reduce the correlations in the retinal response, and it provides a better understanding of the contribution of fixational eye movements to information processing in the retina.
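
The analysis described here can be sketched in a few lines of Python. The data below are synthetic stand-ins (a Gaussian process whose correlation decays with receptive-field distance), not the salamander recordings; in the actual experiment the resulting correlation-versus-distance curve is compared between the static and fixational-eye-movement conditions:

    import numpy as np

    # Pairwise response correlation of ganglion cells as a function of
    # receptive-field distance, on synthetic responses with scene-like
    # correlations (covariance decaying with RF distance).

    rng = np.random.default_rng(0)
    n_cells, n_bins = 40, 2000
    rf_pos = rng.uniform(0, 1000, size=(n_cells, 2))   # RF centres (um)
    dist = np.linalg.norm(rf_pos[:, None, :] - rf_pos[None, :, :], axis=-1)

    L = 300.0                                          # correlation length (um)
    cov = np.exp(-dist**2 / (2 * L**2)) + 1e-6 * np.eye(n_cells)
    responses = np.linalg.cholesky(cov) @ rng.normal(size=(n_cells, n_bins))

    corr = np.corrcoef(responses)                      # n_cells x n_cells

    # Average correlation in distance bins (upper triangle, no self-pairs).
    iu = np.triu_indices(n_cells, k=1)
    edges = np.linspace(0, dist[iu].max(), 8)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dist[iu] >= lo) & (dist[iu] < hi)
        if mask.any():
            print(f"{lo:6.0f}-{hi:6.0f} um: "
                  f"mean corr = {corr[iu][mask].mean():+.3f}")

Under fixational eye movements the measured curve drops much faster with distance than under static viewing; here the decay rate is simply fixed by the assumed correlation length L.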

Searching for a fit between the "silent" surround of V1 receptive fields and eye movements

Speaker: Yves Frégnac, UNIC-CNRS, Department of Neurosciences, Information and Complexity, Gif-sur-Yvette, France

To what extent can emerging macroscopic perceptual features (i.e., Gestalt rules) be predicted in V1 from the characteristics of neuronal integration? We use in vivo intracellular electrophysiology in the anesthetized brain, where the impact of visuomotor exploration on the retinal flow is controlled by simulating realistic but virtual classes of eye movements (fixation, tremor, shift, saccade). By comparing synaptic echoes to different types of full-field visual statistics (sparse noise, grating, natural scene, dense noise, apparent-motion noise) in which the retinal effects of virtual eye movements are, or are not, included, we have reconstructed the perceptual association field of visual cortical neurons, extending 10 to 20° away from the classical discharge field. Our results show that for any V1 cortical cell there exists a fit between the spatio-temporal organization of its subthreshold "silent" (nCRF) and spiking (CRF) receptive fields and the dynamic features of the retinal flow produced by specific classes of eye movements (saccades and fixation). The functional features of the resulting association field are interpreted as facilitating the integration of feed-forward inputs yet to come, by propagating a kind of network belief about the possible presence of Gestalt-like percepts (co-alignment, common fate, filling-in). Our data support the existence of global association fields binding Form and Motion, which operate during low-level (non-attentive) perception as early as V1 and become dynamically regulated by the retinal flow produced by natural eye movements. Current work is supported by the CNRS and by grants from the ANR (NatStats and V1-complex) and the European Community FET-Bio-I3 programs (IP FP6: FACETS (015879); IP FP7: BRAINSCALES (269921) and Brain-i-nets (243914)).
