VSS, May 13-18

Eye movements: Perception, cognition

Talk Session: Tuesday, May 17, 2022, 8:15 – 9:45 am EDT, Talk Room 2
Moderator: Julie Golomb, Ohio State University

Talk 1, 8:15 am, 51.21

Stimulus blanking improves orientation discrimination of foveal and peripheral stimuli

Lukasz Grzeczkowski1, Martin Rolfs1; 1Humboldt-Universität zu Berlin, Germany

Across a saccadic eye movement, two successive images of the world fall on the retina, one before (presaccadic image) and one after (postsaccadic image) the saccade. Converging evidence suggests that task-relevant visual features of the presaccadic image remain available after saccades to be integrated with the postsaccadic image. One way to uncover the availability of presaccadic visual information after saccades is the blanking procedure: introducing a brief (200 ms) interruption in the stimulus presentation at saccade onset drastically improves the discrimination of transsaccadic changes of stimulus features. This feature-blanking effect was previously studied when transsaccadic changes affected the saccade target itself (Grzeczkowski, Deubel, & Szinte, 2020), a natural yet special condition in which the stimulus is first presented in the periphery and then, after the saccade, in foveal vision. Here, we asked whether a comparable transsaccadic feature-blanking effect is apparent for stimuli presented in the same spatiotopic location in the periphery. Observers made a saccade to a target (a small dot) presented either to the left or right of fixation. On each trial, we presented a Gabor grating either at the same location as the saccade target (foveal condition) or at a location above or below the horizontal saccade vector (peripheral condition), thus changing visual hemispheres across the saccade. Observers discriminated the change in orientation of the Gabor (ranging from 1 to 21 degrees) occurring during the saccade. Moreover, the postsaccadic Gabor was presented either with or without a blank. Performance was slightly lower in the peripheral than in the foveal condition. More importantly, however, feature blanking was equally effective in both the foveal and peripheral conditions, greatly improving observers’ ability to detect the transsaccadic orientation changes.
These results demonstrate that the transsaccadic availability of task-relevant features is not limited to the saccade target and can operate across brain hemispheres.

Acknowledgements: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 865715) and the Heisenberg Programme of the Deutsche Forschungsgemeinschaft (grants RO 3579/8-1 and RO 3579/12-1) granted to MR.

Talk 2, 8:30 am, 51.22

Visual stability in naturalistic scenes

Jessica Parker1, A. Caglar Tas1; 1University of Tennessee-Knoxville

The current study examines how visual stability is established for naturalistic scenes. Previous studies have shown that detection of position shifts is significantly better when the shift occurs for the saccade target object than when the background or the whole image shifts (Currie et al., 2000). Other studies have shown that removing the target object from the screen for a short period of time (i.e., blanking) significantly improves shift detection (Deubel et al., 1996). Here we tested whether the blanking effect would similarly improve shift detection for background and whole-image shifts in naturalistic scenes. Participants were presented with scene images and instructed to execute a saccade to a highlighted target object. During the saccade, one of three shifts could occur: only the saccade target shifted, the whole image shifted, or the background shifted while the target remained stationary. We also included control trials in which no shift occurred. Half of the trials in each condition had a 250 ms blank (target blank, context blank, or all blank) that occurred as soon as the saccade was detected. Participants were asked to indicate whether they detected a move. We found a significant effect of condition: saccade-target shifts resulted in the highest detection rate, and background shifts in the lowest. More importantly, blanking significantly interacted with condition: blanking improved shift detection only in the target-shift condition, not in the background- or whole-image-shift conditions. These results suggest that the visual system uses a localized solution for establishing object correspondence across saccades, relying mainly on the saccade target to determine stability.

Talk 3, 8:45 am, 51.23

Dynamic saccade context triggers spatiotopic object-location binding

Zitong Lu1, Julie D Golomb1; 1Department of Psychology, The Ohio State University

Despite receiving visual inputs based on eye-centered (retinotopic) coordinates, we are able to perceive the world-centered (spatiotopic) locations of objects. A long-standing debate has been how object representations are transferred from retinotopic to spatiotopic coordinates to achieve stable visual perception across eye movements. Many studies have found retinotopic effects even for higher-level visual processes, like object-location binding. However, these studies often rely on fairly static contexts (prolonged fixation on one location, followed by a single saccade). What if spatiotopic object-location binding is triggered selectively in dynamic saccade contexts? To test this hypothesis, we modified the ‘spatial congruency bias’ (SCB) paradigm. Participants had to judge whether two objects presented sequentially were the same or different. We conducted two experiments to investigate retinotopic versus spatiotopic object-location binding in two different contexts: a dynamic saccade context (Experiment 1) versus a static context (Experiment 2). In Experiment 1, participants performed repeated saccades throughout the task, whereas in Experiment 2, they performed only a single saccade per trial, during the delay between the two stimuli. We found that, in the static context, the SCB was purely retinotopic, consistent with previous studies. However, in the dynamic saccade context, we observed a strong spatiotopic SCB in addition to the retinotopic SCB. Thus, participants were biased to judge two objects as the same identity when they were presented in the same spatiotopic location (an indication of spatiotopic object-location binding) only in the dynamic context. Critically, the only difference between these experiments was the dynamic versus static context.
Thus, these results provide strong evidence that repeated saccades can trigger spatiotopic (world-centered) object-location binding, such that object location representations appear to flip from retinotopic to spatiotopic coordinates specifically due to dynamic saccade context, a finding that is crucial to improved understanding of how the brain achieves visual stability across eye movements.

Acknowledgements: NIH R01-EY025648 (JG), NSF 1848939 (JG)

Talk 4, 9:00 am, 51.24

Different goals for oculomotor control and perception

Alexander Goettker1, Emma E.M. Stewart1; 1Justus Liebig University Giessen

Due to inherent processing delays, tracking a moving target with the eyes is a difficult task. To account for this, the oculomotor system uses not only the current sensory input but also relies on recent experience. Such effects can be found even at the trial-by-trial level, where the prior trial is integrated in a reliability-weighted fashion. Here we show that even such a seemingly sophisticated integration of information for oculomotor control does not follow basic perceptual concepts such as object consistency or size constancy. Participants saw two successive movements that they needed to track with their eyes. On the first screen, a car moved across a background that used perspective drawing to evoke a perception of depth. Importantly, the car could move at two different depth levels, which allowed us to dissociate retinal and perceptual size. We used a baseline psychophysical staircase procedure to create conditions in which cars were perceptually identical in size and velocity (perceptually matched) but differed in retinal size and velocity. On the second screen, either another car or a Gaussian blob moved across a gray background (always at 10 deg/s), allowing us to directly measure the influence of the previous movement. Interestingly, the two perceptually matched cars led to significant differences in subsequent oculomotor behavior: the retinally smaller and slower car in the back also led to slower pursuit in the subsequent trial. Notably, trial-by-trial effects were still present when the second stimulus was a Gaussian blob. This emphasizes the difference in the use of retinal information for oculomotor control and perception: while perception must integrate information to create the best possible percept, the trial-by-trial influence on oculomotor behavior seems to correct for the amount of retinal motion, reducing motion blur in subsequent trials and allowing better object recognition.
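The reliability-weighted integration of the prior trial mentioned above can be sketched as inverse-variance (precision) weighting, a standard formulation of cue combination. This is a minimal illustration, not the authors' model; the variance parameters are hypothetical.

```python
def reliability_weighted_estimate(prior, prior_var, sensory, sensory_var):
    """Combine the previous trial's velocity estimate with the current
    sensory input, weighting each by its reliability (inverse variance).

    A less reliable (higher-variance) cue contributes less to the result.
    All parameter values are illustrative assumptions.
    """
    w_prior = 1.0 / prior_var      # reliability of the prior trial
    w_sensory = 1.0 / sensory_var  # reliability of the current input
    return (w_prior * prior + w_sensory * sensory) / (w_prior + w_sensory)


# Equally reliable cues: the estimate is the midpoint.
print(reliability_weighted_estimate(8.0, 1.0, 12.0, 1.0))  # → 10.0

# A noisier prior (variance 4 vs. 1) pulls the estimate toward the
# current sensory input.
print(reliability_weighted_estimate(8.0, 4.0, 12.0, 1.0))  # → 11.2
```

Under this scheme, a slower perceived or retinal velocity on the previous trial drags the pursuit estimate on the current trial toward it, consistent with the carry-over effect the abstract reports.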

Acknowledgements: DFG Collaborative Research Centre SFB 135 “Cardinal mechanisms of perception” and DFG grant number 460533638 to E.E.M.S.

Talk 5, 9:15 am, 51.25

Sensory tuning in neuronal movement commands: neurophysiological evidence

Matthias P. Baumann1, Amarender R. Bogadhi1, Anna Denninger1, Ziad M. Hafed1; 1University of Tübingen

Movement control is critical for successful interaction with our environment. However, movement does not occur in complete isolation from sensation, and this is particularly true of eye movements. Here, the superior colliculus (SC) plays a fundamental role, issuing saccade motor commands in the form of strong peri-movement bursts that are widely believed to specify both saccade metrics (direction and amplitude) and kinematics (speed). However, most models of saccade control by the SC rely on observations with small light spots as saccade targets. Instead, we asked two monkeys to “look” at images, akin to natural behavior. We tested gratings of different contrasts, spatial frequencies, and orientations, as well as images of animate and inanimate objects. Despite matched saccade properties across trials within a given image manipulation, SC neurons’ motor bursts differed strongly across images. Such sensory tuning in the SC neuronal movement commands could even be sharper than that in passive visual responses: the difference in movement-burst strength between the most and least preferred image features (for the same saccade vector) was larger than that in visual bursts, consistent with known pre-saccadic perceptual enhancement. Most intriguingly, even purely motor neurons exhibited strong sensory tuning in their saccade-related bursts. Since SC motor bursts are relayed virtually unchanged to the cortex (Sommer & Wurtz, 2004), one implication of our results is that the visual system is primed not only about the sizes and directions of upcoming saccades, as traditionally believed, but also about the movement targets’ visual sensory properties. Consistent with this, in a companion study (VSS 2022), we additionally found that saccade-target visual features significantly modulate peri-saccadic perception.
Our results provide novel insights about the functional role of SC motor commands, and they motivate extending theoretical accounts of corollary discharge beyond just spatial movement-related reference frames.

Acknowledgements: Supported by the German Research Foundation (DFG): (1) SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms, TP 11, project number: 276693517; (2) BO5681/1-1

Talk 6, 9:30 am, 51.26

Measuring the cost function of saccadic decisions reveals stable individual gaze preferences

Tobias Thomas1, David Hoppe1, Constantin A. Rothkopf1; 1Centre for Cognitive Science, Technische Universität Darmstadt

Humans move their eyes multiple times every second, and behind every movement is a decision about where it should go. Past research has predominantly focused on quantifying how properties of the task, the scene context, or the stimulus influence this decision. By contrast, the influence of subjective preferences on this decision has rarely been studied. One reason may be that, in commonly employed gaze-shift tasks, the empirical gaze statistics are a product of all these diverse influences, making subjective preferences difficult to isolate. Here, we introduce an experiment in the spirit of preference-elicitation paradigms in economics, in which subjects reveal their subjective preferences by repeatedly deciding between alternatives. Subjects were instructed to choose between two alternative saccadic targets on each trial. We quantified individuals’ choices in terms of saccadic amplitude, absolute direction in visual space, and change in direction relative to the previous gaze shift. First, all subjects showed an approximately linear preference for shorter saccades, in line with known oculomotor biases also predicted by optimal motor control. Second, all participants showed preferences for return saccades, but to varying degrees. By contrast, idiosyncratic preferences for absolute directions in visual space varied heavily between participants. All individual preferences were highly consistent throughout the experiment. To quantify and understand our subjects’ behavior, we inferred the parameters describing gaze preferences using a random utility model. Individual subjects’ choices can be described quantitatively by the relative contributions of the three features describing the gaze-target alternatives. This model correctly predicted, on average, 80% of participants’ decisions, and the predicted utility values match the empirically observed preferences.
Taken together, the experiment reveals individual differences and commonalities in oculomotor preferences, and the computational model allows these preferences to be incorporated into gaze-target prediction models.
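A random utility model of this kind can be sketched as a binary logit: each candidate saccade target receives a utility that is a weighted sum of the three features (amplitude, absolute direction, change in direction relative to the previous gaze shift), and the probability of choosing one target over the other is a logistic function of the utility difference. The feature encoding and weight values below are illustrative assumptions, not the authors' fitted model.

```python
import math

def choice_probability(features_a, features_b, weights):
    """Binary logit (random utility) model of saccadic target choice.

    features_a, features_b: per-target feature vectors, e.g.
        (amplitude in deg, absolute-direction term, direction-change term).
    weights: one preference weight per feature (hypothetical values).
    Returns P(choose target A).
    """
    u_a = sum(w * f for w, f in zip(weights, features_a))
    u_b = sum(w * f for w, f in zip(weights, features_b))
    return 1.0 / (1.0 + math.exp(-(u_a - u_b)))


# Identical targets: the model is indifferent.
print(choice_probability([5.0, 0.0, 0.0], [5.0, 0.0, 0.0],
                         [-0.3, 0.1, 0.2]))  # → 0.5

# A negative amplitude weight encodes the preference for shorter
# saccades: the 5-deg target is favored over the 10-deg one.
print(choice_probability([5.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                         [-0.3, 0.0, 0.0]) > 0.5)  # → True
```

Fitting the weights to a subject's two-alternative choices (e.g., by maximum likelihood) would yield individual preference parameters of the kind the abstract describes.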

Acknowledgements: Funded by ‘The Third Wave of AI’, the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art