Retinotopic and Non-retinotopic Information Representation and Processing in Human Vision

Friday, May 8, 3:30 – 5:30 pm
Royal Ballroom 1-3

Organizers: Haluk Ogmen (University of Houston) and Michael H. Herzog (Laboratory of Psychophysics, BMI, EPFL, Switzerland)

Presenters: Doug Crawford (Centre for Vision Research, York University, Toronto, Ontario, Canada), David Melcher (Center for Mind/Brain Sciences and Department of Cognitive Sciences, University of Trento, Italy), Patrick Cavanagh (LPP, Université Paris Descartes, Paris, France), Shin’ya Nishida (NTT Communication Science Labs, Atsugi, Japan), Michael H. Herzog (Laboratory of Psychophysics, BMI, EPFL, Switzerland)

Symposium Description

Due to the movements of the eyes and of objects in the environment, natural vision is highly dynamic. Understanding how the visual system copes with such complex inputs requires an understanding of the reference frames used in computing various stimulus attributes. The early visual system is well known to have a retinotopic organization, yet this organization is generally thought to be insufficient to support the fusion of visual images viewed at different eye positions. Moreover, metacontrast masking and anorthoscopic perception show that a retinotopic image is neither sufficient nor necessary for the perception of spatially extended form. How retinotopic representations are transformed into more complex non-retinotopic representations has been a long-standing and often controversial question.

The classical paradigm for studying this question has been memory across eye movements. As we shift our gaze from one fixation to the next, the retinotopic representation of the environment undergoes drastic shifts, yet phenomenally our environment appears stable. How is this phenomenal stability achieved? Does the visual system integrate information across eye movements, and if so, how? A variety of theories have been proposed, ranging from purely retinotopic representations without information integration to detailed spatiotopic representations with point-by-point information integration. Talks in this symposium (Crawford, Melcher, Cavanagh) will address the nature of trans-saccadic memory, the role of extra-retinal signals, and retinotopic, spatiotopic, and objectopic representations for information processing and integration during and across eye movements.

In addition to the challenge that eye movements pose for purely retinotopic representations, recent studies suggest that, even under steady fixation, the computation of moving form requires non-retinotopic representations. Objects in the environment often move along complex trajectories and fail to stimulate retinotopically anchored receptive fields for long enough; moreover, occlusions can “blank out” retinotopic information for significant periods. These failures to sufficiently activate retinotopically anchored neurons suggest, in turn, that some form of non-retinotopic information analysis and integration must take place. Talks in this symposium (Nishida, Herzog) will present recent findings showing how shape and color information for moving objects can be integrated according to non-retinotopic reference frames.

Taken together, the talks aim to provide a current perspective on the fundamental problem of the reference frames used by the visual system and to present techniques for studying these representations during both eye movements and fixation. The recent convergence of a variety of techniques and stimulus paradigms in elucidating the roles of non-retinotopic representations makes this symposium timely. Since non-retinotopic representations have implications for a broad range of visual functions, we expect the symposium to be of interest to the general VSS audience, including students and faculty.

Abstracts

Cortical Mechanisms for Trans-Saccadic Memory of Multiple Objects

Doug Crawford, Steven Prime

Humans can retain the location and appearance of 3-4 objects in visual working memory, independent of whether a saccade occurs during the memory interval. Psychophysical experiments show that, in the absence of retinal cues, extra-retinal signals are sufficient to update trans-saccadic memory, but where and how do these signals enter the visual system? It is known that ‘dorsal stream’ areas like the parietal eye fields update motor plans by remapping them in gaze-centered coordinates, but the equivalent neural mechanisms for updating object features across saccades are less well understood. We investigated the possible role of extra-retinal signals from the cortical gaze control system by applying transcranial magnetic stimulation (TMS) to either the human parietal eye fields or the frontal eye fields during the interval between viewing several objects and testing their remembered orientation and location. Parietal TMS had a baseline effect on memory for a single feature and reduced memory capacity from approximately three features down to one, but only when applied to the right hemisphere near the time of a saccade. The effects of frontal cortex TMS on trans-saccadic memory capacity were similar, but were more symmetric across hemispheres and did not affect baseline feature memory. In our task, the latter pattern would occur if spatial memory were disrupted without affecting feature memory. These experiments show that cortical gaze control centers usually associated with the ‘dorsal’ stream of vision are also involved in visual processing and memory of object features during saccades, possibly influencing ‘ventral stream’ processing through re-entrant pathways.

Trans-Saccadic Perception: “Object-otopy” across Space and Time

David Melcher

Real-world perception is typically trans-saccadic: we see the same object across multiple fixations. Yet saccadic eye movements can dramatically change the location at which an object is projected onto the retina. In a series of experiments using eye tracking, psychophysics, neuroimaging, and TMS, we have investigated how information from a previous fixation can influence perception in the subsequent fixation. Specifically, we have tested the idea that the “remapping” of receptive fields around the time of saccadic eye movements might play a role in trans-saccadic perception. Our results suggest that two mechanisms interact to produce “object-otopic” perception across saccades. First, a limited number of objects that are individuated in a scene (treated as unique objects potentially subject to action, as opposed to being part of the background gist) are represented and updated across saccades in a sensorimotor “saliency map” (possibly in posterior parietal cortex). Second, the updating of these “pointers” in the map leads to the remapping of receptive fields in intermediate visual areas. We have found that perception can be retinotopic or spatiotopic, or, in the case of moving objects, can involve the combination of information for the same object that is neither retinally nor spatially matched. At the same time, however, the visual system must give priority to retinal information, which tends to be most reliable during fixation of stable objects.
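To make the “pointer” updating concrete, the sketch below shows one way gaze-centered remapping could work in code. It is a minimal illustration, not the authors’ model: the Pointer structure, the remap function, and all coordinates are assumptions introduced here.

```python
# Illustrative sketch of gaze-centered "pointer" remapping across a
# saccade. Data structures and values are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Pointer:
    obj_id: int
    retinal_pos: tuple  # (x, y) in degrees, gaze-centered

def remap(pointers, saccade_vec):
    """Shift each pointer opposite to the saccade, predicting where the
    object will fall on the retina after the eye movement."""
    dx, dy = saccade_vec
    return [Pointer(p.obj_id, (p.retinal_pos[0] - dx, p.retinal_pos[1] - dy))
            for p in pointers]

# Three individuated objects in the map, then a 10 deg rightward saccade.
saliency_map = [Pointer(1, (5.0, 0.0)), Pointer(2, (-2.0, 3.0)),
                Pointer(3, (0.0, -4.0))]
saliency_map = remap(saliency_map, (10.0, 0.0))
print([p.retinal_pos for p in saliency_map])
# [(-5.0, 0.0), (-12.0, 3.0), (-10.0, -4.0)]
```

The key design point is that only the small set of individuated pointers is shifted, not a point-by-point image, which is consistent with the capacity-limited saliency-map account described above.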

Spatiotopic Apparent Motion

Patrick Cavanagh, Martin Szinte

When our eyes move, stationary objects move over our retina. Our visual system cleverly discounts this retinal motion so that we do not see objects moving when they are not. What happens if an object does move at the time of the eye movement? There is a question of whether we will see the displacement at all; but if we do see it, is the perceived motion determined by the displacement on the retina or by the displacement in space? To address this, we asked subjects to make horizontal saccades of 10°. Two dots were presented, one before and one after the saccade, the second displaced vertically on the screen by 3° from the first. Each dot was presented for 400 msec; the first turned off about 100 msec before the saccade and the second turned on about 100 msec after it. In this basic condition, the retinal locations of the two dots were in opposite hemifields, separated horizontally by 10°. Nevertheless, subjects reported that the dots appeared to move vertically, the spatiotopic direction, although with a noticeable deviation from true vertical. This spatiotopic apparent motion was originally reported by Rock and Ebenholtz (1962), but for displacements along the direction of the saccade. In our experiments, we use the deviation from spatiotopic motion to estimate errors in the remapping of pre-saccadic locations that underlies this spatiotopic motion phenomenon.
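The geometry of this display can be worked through explicitly. The sketch below contrasts the spatiotopic motion vector with the retinal one; only the 10° saccade and the 3° vertical displacement come from the abstract, while the screen and fixation coordinates are assumptions.

```python
# Geometry of the two-dot display: spatiotopic vs. retinal displacement.
# Only the 10 deg saccade and 3 deg vertical offset are from the abstract.

import math

saccade = (10.0, 0.0)          # rightward saccade, degrees
fix_pre = (0.0, 0.0)           # fixation before the saccade (assumed)
fix_post = (fix_pre[0] + saccade[0], fix_pre[1] + saccade[1])

dot1_screen = (5.0, 0.0)       # first dot, screen coordinates (assumed)
dot2_screen = (dot1_screen[0], dot1_screen[1] + 3.0)  # 3 deg above it

def retinal(pos, fixation):
    """Position relative to the current fixation (gaze-centered)."""
    return (pos[0] - fixation[0], pos[1] - fixation[1])

dot1_ret = retinal(dot1_screen, fix_pre)    # seen before the saccade
dot2_ret = retinal(dot2_screen, fix_post)   # seen after the saccade

spatiotopic = (dot2_screen[0] - dot1_screen[0], dot2_screen[1] - dot1_screen[1])
retinotopic = (dot2_ret[0] - dot1_ret[0], dot2_ret[1] - dot1_ret[1])

print("spatiotopic displacement:", spatiotopic)   # (0.0, 3.0): vertical
print("retinotopic displacement:", retinotopic)   # (-10.0, 3.0): oblique
print("retinal direction, deg from vertical:",
      round(math.degrees(math.atan2(abs(retinotopic[0]), retinotopic[1])), 1))
```

The retinal vector is strongly oblique (about 73° off vertical), so a near-vertical percept indicates that the perceived motion is computed spatiotopically.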

Trajectory Integration of Shape and Color of Moving Object

Shin’ya Nishida, Masahiko Terao, Junji Watanabe

Integration of visual input signals along the motion trajectory is widely recognized as a basic mechanism of motion detection. It is not widely recognized, however, that the same computation is potentially useful for the shape and color perception of moving objects: trajectory integration can improve the signal-to-noise ratio of moving-feature extraction without introducing motion blur. Indeed, trajectory integration of shape information is indicated by several phenomena, including multiple-slit viewing (e.g., Nishida, 2004). Trajectory integration of color information is indicated by two phenomena: motion-induced color mixing (Nishida et al., 2007) and motion-induced color segregation (Watanabe & Nishida, 2007). In motion-induced color segregation, for instance, temporal alternations of two colors on the retina are perceptually segregated more veridically when they are presented as moving patterns than as stationary alternations at the same rate. This improvement in temporal resolution can be explained by a difference in the motion trajectory along which color signals are integrated. Furthermore, we recently found that the improvement in temporal resolution is enhanced when an observer views a stationary object while making a pursuit eye movement, compared with when an observer views a moving object without moving the eyes (Terao et al., 2008, VSS). This finding further strengthens the connection between motion-induced color segregation and subjective motion deblurring.
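To see why integrating along the trajectory predicts segregation while integrating at a fixed retinal point predicts mixing, consider the following toy simulation; the stripe pattern, speed, and integration window are all assumptions for illustration.

```python
# Toy simulation of trajectory integration for a moving color pattern.
# Stripe layout, speed, and window are illustrative assumptions.

SPEED = 1                      # positions per time step, rightward
WINDOW = range(8)              # integration window, in time steps

def stripe_color(x):
    """Alternating red/green stripes, one stripe per position."""
    return "red" if x % 2 == 0 else "green"

def color_at(x, t):
    """Color at screen position x at time t for the moving pattern."""
    return stripe_color(x - SPEED * t)

# Integration at a fixed retinotopic location: the stripes drift past,
# so red and green signals are pooled together (predicting mixing).
retinotopic_samples = [color_at(0, t) for t in WINDOW]

# Integration along the motion trajectory: the sampling point moves
# with the pattern, so it always sees the same stripe (segregation).
trajectory_samples = [color_at(SPEED * t, t) for t in WINDOW]

print("fixed-location samples:", retinotopic_samples)  # red/green mix
print("trajectory samples:    ", trajectory_samples)   # constant red
```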

A Litmus Test for Retino- vs. Non-retinotopic Processing

Michael Herzog, Marc Boi, Thomas Otto, Haluk Ogmen

Most visual cortical areas are retinotopically organized, and accordingly most visual information is assumed to be processed within a retinotopic coordinate frame. However, in a series of psychophysical experiments, we have shown that features of elements are often integrated non-retinotopically when the corresponding elements are grouped by motion. When this grouping is blocked, however, feature integration occurs within retinotopic coordinates (even though the basic stimulus paradigm is identical in both conditions and grouping is modulated only by spatial or temporal contextual cues). Hence, there is strong evidence for both retinotopic and non-retinotopic processing. However, it is not always easy to determine which of these two coordinate systems prevails in a given stimulus paradigm. Here, we present a simple psychophysical test to answer this question. We presented three squares in a first frame, followed by an ISI, the same squares shifted one position to the right, the same ISI, and then the squares shifted back to their original position. When this cycle is repeated with ISIs longer than 100 ms, three squares are perceived in apparent motion. With this specific set-up, features integrate between the central squares if and only if integration takes place non-retinotopically. With this litmus test we showed, for example, that motion processing is non-retinotopic whereas motion adaptation is retinotopic. In general, by adding the feature of interest to the central square, one can easily test whether a given stimulus paradigm is processed retinotopically or non-retinotopically.
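The logic of the test can be laid out explicitly. Below is a minimal sketch of the display, assuming unit spacing and an illustrative ISI; only the three-square layout, the one-position shift, and the >100 ms ISI come from the abstract.

```python
# Sketch of the litmus-test display (a Ternus-Pikler-like sequence).
# Spacing and exact ISI are assumptions; layout and shift are from the text.

SPACING = 1            # horizontal units between squares (assumed)
ISI_MS = 120           # blank interval; >100 ms yields group motion

frame_a = [0, 1, 2]                        # three squares
frame_b = [p + SPACING for p in frame_a]   # shifted right: 1, 2, 3

# One cycle of the display: A, blank, B, blank, then back to A, ...
cycle = [("frame", frame_a), ("blank", ISI_MS),
         ("frame", frame_b), ("blank", ISI_MS)]
print("one display cycle:", cycle)

# Retinotopic correspondence: squares at the SAME screen position
# (positions 1 and 2 are shared between the two frames).
retinotopic_pairs = [(p, p) for p in frame_a if p in frame_b]

# Non-retinotopic correspondence: apparent motion groups the whole row,
# so the i-th square of frame A maps to the i-th square of frame B.
grouped_pairs = list(zip(frame_a, frame_b))

print("retinotopic matches:   ", retinotopic_pairs)  # [(1, 1), (2, 2)]
print("motion-grouped matches:", grouped_pairs)      # [(0, 1), (1, 2), (2, 3)]
# The central square of frame A (position 1) corresponds to position 2
# in frame B under motion grouping, but to position 1 retinotopically.
# A feature that migrates between the central squares therefore reveals
# non-retinotopic processing.
```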