Eye Movements: Perception, Cognition
Talk Session: Saturday, May 20, 2023, 10:45 am – 12:30 pm, Talk Room 2
Moderator: Jorge Otero-Millan, UC Berkeley
Talk 1, 10:45 am, 22.21
Motion signals at the target of saccadic eye movements modulate presaccadic foveal perception and drive predictive gaze responses
Lisa M. Kroell1,2, Jude F. Mitchell3, Martin Rolfs1,2; 1Humboldt-University of Berlin, 2Berlin School of Mind and Brain, 3University of Rochester
Recent evidence suggests that during the preparation of saccadic eye movements, orientation information that defines the saccade target is anticipated in foveal vision. Here, we establish that foveal prediction is not limited to surface features but operates similarly for temporally modulated signals: Coherent motion within the saccade target region predictively alters foveal perception and causes characteristic, reflexive eye movement patterns. Human observers maintained fixation in the center of a projection screen while 8000 dots distributed across the entire display moved in random directions at a velocity of 15 degrees of visual angle per second (dva/s; 50 ms lifetime). Subsequently, all dots within a 3 dva diameter circular region located 10 dva to the left or right of the screen center started moving coherently straight up or down. Observers prepared a saccade to this target while monitoring the appearance of another coherent motion signal in their presaccadic center of gaze (the foveal probe; presented in 50% of trials). Around 150 ms after target onset, observers were better at detecting the foveal probe if its motion direction matched the motion direction of the target dots, mirroring previous findings obtained with orientation-defined stimuli. Moreover, 150 to 100 ms before the saccade and during the last 15 ms immediately preceding saccade onset, vertical eye velocities reflected the motion direction within the target region rather than foveally presented motion. This ramp-up altered saccade angles and persisted after saccade landing, long before postsaccadic visual input could have driven gaze behavior and even when we rendered coherent target motion incoherent upon saccade initiation. We conclude that foveal prediction supports visual continuity for surface features as well as temporally modulated information. Beyond influencing perception, the underlying predictive signals modify pre-, intra- and early postsaccadic gaze behavior and generate a continuous oculomotor readout of target anticipation.
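To make the stimulus geometry concrete, here is a minimal sketch of the dot-field display described above. Dot count, speed, lifetime, and aperture size are taken from the abstract; the frame rate, display size, wrap-around rule, and upward-positive y-axis are our assumptions, not the authors' code:

```python
import numpy as np

# Assumed display parameters (not specified in the abstract)
FRAME_RATE = 120                 # Hz (assumption)
SCREEN_W, SCREEN_H = 40.0, 30.0  # display size in dva (assumption)
N_DOTS = 8000                    # from the abstract
SPEED = 15.0                     # dva/s, from the abstract
LIFETIME_FRAMES = int(0.050 * FRAME_RATE)  # 50 ms dot lifetime

rng = np.random.default_rng(0)
pos = rng.uniform([0, 0], [SCREEN_W, SCREEN_H], size=(N_DOTS, 2))
angle = rng.uniform(0, 2 * np.pi, N_DOTS)       # random motion directions
age = rng.integers(0, LIFETIME_FRAMES, N_DOTS)  # stagger dot rebirths

def update(pos, angle, age, target_center=None, target_dir=None):
    """Advance the dot field by one frame.

    If target_center is given, dots inside a 3-dva-diameter aperture
    move coherently straight up or down (target_dir), as in the abstract.
    """
    step = SPEED / FRAME_RATE
    if target_center is not None:
        inside = np.linalg.norm(pos - target_center, axis=1) < 1.5  # radius 1.5 dva
        angle[inside] = np.pi / 2 if target_dir == "up" else -np.pi / 2
    pos += step * np.column_stack([np.cos(angle), np.sin(angle)])
    # Limited lifetime: expired dots are reborn at random locations
    age += 1
    dead = age >= LIFETIME_FRAMES
    pos[dead] = rng.uniform([0, 0], [SCREEN_W, SCREEN_H], size=(dead.sum(), 2))
    angle[dead] = rng.uniform(0, 2 * np.pi, dead.sum())
    age[dead] = 0
    pos %= [SCREEN_W, SCREEN_H]  # wrap at display edges (assumption)
    return pos, angle, age
```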
Acknowledgements: This research was funded by the Deutsche Forschungsgemeinschaft (grants RO3579/8-1, RO3579/9-1 and RO3579/12-1 to MR).
Talk 2, 11:00 am, 22.22
Cortical spatiotemporal reformatting tuned to saccadic amplitude
Alessandro Benedetto1,2, Michele A. Cox1,2, Samantha K. Jenks1,2, Jonathan D. Victor3, Michele Rucci1,2; 1Department of Brain and Cognitive Sciences, University of Rochester, NY, USA, 2Center for Visual Science, University of Rochester, NY, USA, 3Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, NY, USA
Humans use rapid eye movements (saccades) to inspect objects of interest with the fovea, the region of the retina with highest acuity. In relocating gaze, saccades abruptly shift the image across the retina, strongly modulating the input signals experienced by photoreceptors. These transients have been traditionally regarded as problematic for visual encoding, and much research has focused on how the visual system discards them. This common view has been recently challenged, and a number of studies have now suggested that the luminance modulations caused by saccades may actually facilitate the establishment of spatial representations. Below a velocity-dependent spatial frequency cutoff, saccade-induced luminance modulations counterbalance (whiten) the spectral density of natural scenes, equalizing input signals across spatial frequencies and thereby discarding redundant information present in natural visual environments. Critically, the bandwidth of this effect is inversely proportional to saccade amplitude. Here, we examined the consequences of this spatiotemporal input reformatting on electroencephalographic responses in the occipital cortex of humans. We simultaneously recorded eye movements and EEG signals while participants (N=16) executed saccades of various amplitudes (1, 3, and 6 deg) over narrow-band white noise fields centered at low (0.03 c/deg), medium (0.16 c/deg), and high (2 c/deg) spatial frequencies. As established in the literature, a prominent event-related potential (the lambda-wave) peaked over the central-occipital electrode ~90 ms following saccade offset. We report that the amplitude of this lambda-wave closely follows the predictions of saccade-induced spatiotemporal reformatting, with an amplitude that depends on the spatial frequency of the noise field for small but not large saccades (F(4,60)=4.75; p<0.01). These findings show that the space-time reformatting of the visual input resulting from saccades strongly drives neural responses and propagates to the cortex, where it shapes neural activity immediately following saccades.
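The whitening prediction can be summarized schematically (our notation and a simplified power-law rendering, not the authors' exact formulation):

```latex
% Natural scenes fall off as ~1/k^2 in spatial frequency k; the temporal
% transient injected by a saccade grows as ~k^2 below a cutoff k_c.
P_{\mathrm{scene}}(k) \propto k^{-2}, \qquad
P_{\mathrm{retina}}(k) \approx P_{\mathrm{scene}}(k)\, M(k), \qquad
M(k) \propto k^{2} \;\; (k < k_c)
\;\Rightarrow\;
P_{\mathrm{retina}}(k) \approx \mathrm{const.} \;\; (k < k_c), \qquad
k_c \propto 1/A
```

where A is the saccade amplitude, so larger saccades equalize power over a narrower band of spatial frequencies, consistent with the inverse bandwidth scaling stated above.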
Acknowledgements: Research supported by Reality Labs and NIH grants EY018363 (MR), EY07977 (JV), and T32EY007125 (SJ).
Talk 3, 11:15 am, 22.23
Visual stability and motor updating in autistic symptomatology
Antonella Pomè1, Eckart Zimmermann1; 1Heinrich Heine University Düsseldorf
Autism spectrum disorders are associated with weaker integration of priors and sensory evidence and with difficulties in updating internal models, which may be underpinned by altered efference copy signals. In two experiments, we tested the hypothesis that the buildup of weak priors over time depends on the accuracy of efference copies. We tested motor and visual updating in healthy adults with varying degrees of autistic traits. Subjects were instructed to perform a sequence of two saccades to the locations of two briefly flashed targets as quickly and as accurately as possible (Exp. 1). Difficulty in using extra-retinal information about the first saccade to update the spatial representation of the second target was strongly associated with autistic symptomatology: the higher the autistic traits, the larger the deviation of the second saccade vector from the location of the second target. Visual updating was then assessed by testing trans-saccadic apparent motion (Exp. 2): two probes were horizontally displaced from each other to a variable degree, and participants reported the tilt direction of the apparent motion perceived while performing saccades between the two fixation locations. Biases in reporting the direction of motion correlated with participants' autistic scores, being larger for people scoring high on the questionnaire, suggesting an under-compensation of the eye movement and, consequently, a failure of spatial stability. Taken together, our results suggest that the accuracy of efference copy signals contributes to motor and visual stability. Moreover, these findings reveal a link between efference copies and motor symptoms in ASD and may point toward more specific interventions that exploit the link between action and perception.
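As a reasoning aid, the double-step updating logic of Exp. 1 reduces to a simple vector computation in which an efference-copy gain below one produces exactly the second-saccade deviations described above. This is a toy model; the gain parameterization is our illustration, not the authors' analysis:

```python
import numpy as np

def second_saccade_vector(t1, t2, g):
    """Toy double-step updating model.

    t1, t2 : retinotopic target locations (dva) coded before the
             first saccade.
    g      : efference-copy gain; g = 1 fully accounts for the first
             saccade, g < 1 means under-compensation.

    Assuming the first saccade accurately lands on t1, the ideal
    second saccade is t2 - t1. With an imperfect efference copy only
    g * t1 is subtracted, so the vector deviates by (1 - g) * t1.
    """
    return t2 - g * t1

t1 = np.array([8.0, 0.0])  # first target, dva (illustrative values)
t2 = np.array([8.0, 6.0])  # second target, dva
ideal = second_saccade_vector(t1, t2, g=1.0)       # -> [0.0, 6.0]
undercomp = second_saccade_vector(t1, t2, g=0.8)   # -> [1.6, 6.0]
print(np.linalg.norm(undercomp - ideal))           # deviation grows as g drops
```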
Acknowledgements: Marie Skłodowska-Curie (grant agreement number 101029574 – APPROVE) to Dr. Antonella Pomè; European Union's Horizon 2020 research and innovation programme (grant agreement number 757184 – moreSense) to Prof. Eckart Zimmermann.
Talk 4, 11:30 am, 22.24
Things are looking up: The extrafoveal preview effect is largest at the upper vertical meridian, where peripheral sensitivity is worst
Xiaoyi Liu1, David Melcher1, Marisa Carrasco2, Nina Hanning2,3; 1New York University Abu Dhabi, 2New York University, 3Humboldt-Universität zu Berlin
Despite its degraded resolution, the pre-saccadic preview of a peripheral target enhances the speed and accuracy of its post-saccadic processing (the extrafoveal preview effect). However, acuity and contrast sensitivity vary across human peripheral vision, even at iso-eccentric locations, raising the question of whether these performance field asymmetries influence the preview effect. To investigate this question, we first measured performance field asymmetries during fixation. Observers indicated the orientation (relative to vertical) of one of four peripheral Gabors presented briefly at the cardinal locations (8° eccentricity, 4 cpd). Gabor contrast was titrated with adaptive staircases. Consistent with previous studies, contrast sensitivity was higher at the horizontal than the vertical meridian, and at the lower than the upper vertical meridian. The same observers then performed a saccade version of the same task: they previewed four tilted Gabors while fixating at the center, then received a central cue indicating to which of the Gabors to immediately saccade. During the saccade, the target Gabor orientation either remained the same (valid preview) or was flipped to the opposite direction (invalid preview), with equal probability. After saccade landing, observers discriminated the orientation of this second, now foveated Gabor, which disappeared shortly after saccade offset. We found a robust preview effect for all saccade directions, i.e., higher post-saccadic contrast sensitivity after valid than invalid previews. Surprisingly, the magnitude of the preview effect was inversely related to the performance field asymmetries: largest at the upper vertical meridian, followed by the lower vertical, then the horizontal meridian. This finding suggests that the visual system actively compensates for asymmetries in peripheral vision when integrating information across saccades. These results reveal a new perceptual consequence of performance field asymmetries during active vision and demonstrate the necessity of studying trans-saccadic perceptual modulations as a function of saccade direction.
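The abstract does not specify the staircase rule used to titrate contrast; a minimal sketch of one common choice (a 3-down-1-up staircase, which converges near 79% correct) might look like the following. The rule, step size, and starting level are assumptions, not the authors' procedure:

```python
class Staircase3Down1Up:
    """3-down-1-up adaptive staircase on log contrast.

    Converges near 79.4% correct. The specific rule and step sizes are
    assumptions -- the abstract only says 'adaptive staircases'.
    """
    def __init__(self, start_log_contrast=-0.5, step=0.1):
        self.level = start_log_contrast  # log10 Michelson contrast
        self.step = step
        self.correct_streak = 0
        self.history = []

    def next_contrast(self):
        return 10 ** self.level

    def update(self, correct):
        self.history.append((self.level, correct))
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 3:   # 3 correct in a row -> harder
                self.level -= self.step
                self.correct_streak = 0
        else:                              # 1 error -> easier
            self.level += self.step
            self.correct_streak = 0
        self.level = min(self.level, 0.0)  # cap at 100% contrast
```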
Talk 5, 11:45 am, 22.25
Motion blur near the resolution limit of the parafoveal retina
Alisa Braun1, Isabel L Groth1, Jorge Otero-Millan1, William S Tuten1; 1UC Berkeley
Fixational eye movements introduce a temporal component to the encoding of spatial information. Psychophysical and computational work has shown that removing these temporal signals is detrimental to visual acuity for stimulus durations longer than 750 ms, suggesting the presence of mechanisms that leverage retinal motion over longer timescales to improve resolution. By contrast, early retinal neurons sum information over shorter intervals to minimize noise. Thus, when presentation durations are restricted, retinal motion may degrade the encoding of fine patterns, leading to impaired acuity. To characterize the impact of motion blur on visual acuity, we used an adaptive optics scanning laser ophthalmoscope with a 30-Hz frame rate to control the retinal trajectory of a tumbling-E stimulus delivered to the parafovea. For all measurements, stimulus duration was 3 frames (~66 ms). First, observers (n = 4) completed a tumbling-E task to determine the letter size that yielded 80% performance. Next, performance (% correct) for this fixed letter size (MAR range: 1.82–2.12 arcmin) was determined for three retinal motion conditions: natural retinal motion, retinally stabilized, and imposed motion. For imposed motion, stimuli were moved on the retina in randomly selected cardinal directions by increments of the optotype bar width; these motion increments included 0.5, 1, and 2 bar widths per frame. When the optotype's retinal motion was parallel to its orientation, performance was invariant to motion magnitude and unchanged from the natural motion or retinally stabilized conditions (p > .05, multiple comparison ANOVA). However, when the optotype's retinal motion was orthogonal to its orientation, the maximal decrease in performance (relative to the equivalent parallel motion) occurred when the optotype moved by one bar width per frame (16% reduction). These results suggest that the mechanisms responsible for high-acuity vision are susceptible to motion blur when retinal motion matches the spatial frequency of the stimulus being judged.
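For orientation, the imposed motion steps can be related to the optotype's spatial scale with a little arithmetic (our derivation from the values above, not stated in the abstract): the bar width of a tumbling E equals its MAR, and two bar widths make one spatial period of the stroke pattern, so a one-bar-width step displaces the pattern by half a cycle per frame.

```latex
\mathrm{bar\ width} = \mathrm{MAR} \approx 2\ \mathrm{arcmin}
\;\Rightarrow\;
f_s = \frac{1}{2\,\mathrm{MAR}}
    \approx \frac{60\ \mathrm{arcmin/deg}}{2 \times 2\ \mathrm{arcmin}}
    = 15\ \mathrm{c/deg}, \qquad
v = \mathrm{bar\ width} \times 30\ \mathrm{Hz}
  \approx 2\ \mathrm{arcmin} \times 30\ \mathrm{s}^{-1}
  = 1\ \mathrm{deg/s}
```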
Acknowledgements: Berkeley Center for Innovation in Vision and Optics, NEI R01EY023591, AFOSR FA9550-20-1-0-0195, AFOSR FA9550-21-1-0230, NEI R00EY027846, NEI T35EY007139
Talk 6, 12:00 pm, 22.26
Modeling internal state changes in free-viewing and visual search scanpaths with gain control in DeepGaze III
Matthias Kümmerer1, Matthias Bethge1; 1University of Tübingen, Tübingen AI Center
The DeepGaze III model currently sets the state-of-the-art in predicting free-viewing human scanpaths on natural images by predicting future fixations from the observed image and recent fixation locations. Inspired by gain control mechanisms in neuroscience, we introduce gain control layers into the network architecture which can modulate the activity in certain channels of the network depending on additional factors, such as observer biases or search targets. By comparing the prediction performance of the baseline model with the performance of such an extended model in terms of information gain, we can quantify the amount of information that additional factors contribute to fixation placement. Due to the modular DeepGaze III architecture, we can decompose the information gain into different components: (1) a first component affecting only the modulation amplitude of the fixation distribution, (2) a second component modulating which image features are salient, and (3) a third component affecting the scanpath dynamics. Applying this approach, we quantify how much a fixation’s index in a scanpath, subject identity and search targets affect scanpaths in free-viewing and visual search. For free-viewing, we find that fixation index and subject identity contribute to a similar degree to fixation placement. In the case of fixation index, this information is equally split into a part making the fixation density more uniform over time, and a part changing which image features are salient. The contribution of subject identity is mostly due to different subjects preferring different image features. For visual search on the COCO Search18 dataset, the search target increases the explained information by only 18% compared to the presented image alone, suggesting substantial similarities in fixation behavior across targets. Our work demonstrates how contrast gain control can be used as a very general and sample-efficient mechanism to flexibly modify neural network computation to account for additional factors of interest.
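As a sketch of what such a gain control layer could look like, here is a generic FiLM-style channel gain conditioned on an embedding of the additional factor (PyTorch; the conditioning actually used in DeepGaze III may differ from this sketch):

```python
import torch
import torch.nn as nn

class GainControl(nn.Module):
    """Channel-wise multiplicative gain driven by an external factor.

    `cond` is an embedding of the additional factor (e.g., subject
    identity, fixation index, or search target). Generic sketch; not
    the DeepGaze III implementation.
    """
    def __init__(self, n_channels: int, cond_dim: int):
        super().__init__()
        self.to_log_gain = nn.Linear(cond_dim, n_channels)
        # Zero init => log-gain 0 => gain exp(0) = 1, so the layer
        # starts as an identity mapping.
        nn.init.zeros_(self.to_log_gain.weight)
        nn.init.zeros_(self.to_log_gain.bias)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W); cond: (batch, cond_dim)
        gain = torch.exp(self.to_log_gain(cond))  # positive gains
        return x * gain[:, :, None, None]         # scale each channel

# Usage: modulate a feature map by a learned subject embedding
features = torch.randn(4, 64, 32, 32)
subject_emb = torch.randn(4, 16)
layer = GainControl(n_channels=64, cond_dim=16)
out = layer(features, subject_emb)  # equals `features` at initialization
```

Initializing the gain at one makes the extended model reproduce the baseline exactly before training, so any gain in prediction performance can be attributed to the conditioning factor, matching the information-gain comparison described above.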
Acknowledgements: This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A and the Deutsche Forschungsgemeinschaft (DFG): Germany's Excellence Strategy - EXC 2064/1 - 390727645 and SFB 1233, Robust Vision: Inference Principles and Neural Mechanisms.
Talk 7, 12:15 pm, 22.27
VR training produces more expert-like gaze behaviour in tennis players on-court
David Mann1, Laure Soepenberg1, Joost Bosschert2, Han Hakkens2, Aldo Hoekstra3; 1Vrije Universiteit Amsterdam, 2SportsImprovr, 3Royal Dutch Lawn Tennis Association (KNLTB)
Skilled athletes pick up task-specific information using specific patterns of eye movements that underpin their advantage over others (Land & McLeod, 2000). For instance, skilled tennis players use distinct eye-movement patterns to pick up information from an opponent’s kinematics when anticipating their serve direction. However, it remains unclear whether lesser athletes can learn these eye movement patterns to accelerate their skill learning. The aim of this study was to examine the degree to which an expert-like gaze pattern could be learned to provide a performance advantage in sport. Nineteen recreational tennis players were divided into a gaze-training (n=10) or control-training group (n=9). All participants completed a three-day training intervention in VR in which they were required to anticipate the direction of serves hit by an avatar, but with the ball-flight occluded. After responding, the serve was replayed showing the ensuing ball-flight. Participants in the gaze-training group were shown, after every five trials during training, the gaze pattern of a skilled player without instructions (Smeeton et al., 2005), whereas the control-training group were not. All participants took part in in-situ pre- and post-tests to assess changes in their gaze pattern against a real server on-court and to test any differences in the response time and accuracy of their returns. Results revealed that the VR gaze-training was effective in changing the gaze patterns of participants both in VR and in-situ against an opponent. Improvements in on-court response times were seen for participants in both groups (p < .01) without any change in response accuracy (p = .19), but with no specific advantage for the gaze-training group (no interaction, p = .22). The findings show that VR gaze-training is effective in producing more expert-like gaze behaviour in VR and, more strikingly, that the changes are retained when returning serves on-court.
Acknowledgements: This work is funded by a National SportInnovator Prize from ZonMW (Netherlands Organisation for Health Research and Development) Project Number 538001044