Eye Movements: Perception and timing

Talk Session: Monday, May 20, 2024, 8:15 – 9:45 am, Talk Room 1
Moderator: Markus Lappe, University of Muenster

Talk 1, 8:15 am, 41.11

Decoding Remapped Spatial Information in the Peri-Saccadic Period

Caoimhe Moran1,2, Philippa A. Johnson3, Ayelet N. Landau2, Hinze Hogendoorn4; 1The University of Melbourne, 2The Hebrew University of Jerusalem, 3Leiden University, 4Queensland University of Technology

It has been suggested that, prior to a saccade, visual neurons predictively respond to stimuli that will fall in their receptive fields after completion of the saccade. This saccadic remapping process is thought to compensate for the shift of the visual world across the retina caused by eye movements. To map the timing of this predictive process in the brain, we recorded neural activity using electroencephalography (EEG) during a saccade task. Participants made saccades between two fixation points while covertly attending to oriented gratings briefly presented at various locations on the screen. Data recorded during trials in which participants maintained fixation were used to train classifiers to decode stimuli presented at different positions. Subsequently, data collected during saccade trials were used to test for the presence of remapped stimulus information at the post-saccadic retinotopic location in the peri-saccadic period, providing unique insight into when remapped information becomes available. We found that the stimulus could be decoded at the remapped location ~180 ms post-stimulus onset, but only when the stimulus was presented 100-200 ms before saccade onset. Within this range, we found that the timing of remapping was dictated by stimulus onset rather than saccade onset. We conclude that presenting the stimulus immediately before the saccade allows for optimal integration of the corollary discharge signal with the incoming peripheral visual information, resulting in a remapping of activation to the relevant post-saccadic retinotopic neurons.
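
The cross-condition decoding scheme described above can be illustrated with a minimal sketch, assuming epoched EEG arrays of shape (trials x channels x time points) and hypothetical variable names; this is not the authors' analysis code, only one plausible implementation in which classifiers trained on fixation trials are tested, time point by time point, on saccade trials labelled by the remapped (post-saccadic retinotopic) location.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def cross_condition_decoding(fix_X, fix_y, sac_X, sac_y_remap):
        """fix_X / sac_X: EEG epochs (n_trials, n_channels, n_times);
        fix_y: stimulus position on fixation trials;
        sac_y_remap: remapped (post-saccadic retinotopic) position on saccade trials."""
        n_times = fix_X.shape[2]
        accuracy = np.zeros(n_times)
        for t in range(n_times):
            clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
            clf.fit(fix_X[:, :, t], fix_y)                        # train on fixation data
            accuracy[t] = clf.score(sac_X[:, :, t], sac_y_remap)  # test at remapped location
        return accuracy  # above-chance accuracy around ~180 ms would indicate remapped information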

Talk 2, 8:30 am, 41.12

Perception of continuous flicker: phantom array versus moving stimuli

Rytis Stanikunas1, Alvydas Soliunas1, Remigijus Bliumas1, Karolina Jocbalyte1, Algirdas Novickovas1; 1Vilnius University

During saccadic eye movements, the image is displaced on the retina, and the visual system must recalculate its position to maintain perceptual space constancy. However, when a flickering light is presented during a saccade, a phantom array of lights is perceived. A similar array of lights can be perceived when a moving, flickering stimulus is presented during visual fixation. Here, we investigated differences in the spatial and temporal aspects of perceiving a flickering light during a saccade versus a moving, flickering light during visual fixation. In the first experiment, subjects made a saccade across a point light source flashing at rates from 50 Hz to 4 kHz. In the second experiment, a moving, flickering stimulus was presented on the screen while subjects maintained steady visual fixation; the speed of the stimulus was matched to each subject's saccade speed. Subjects were asked to indicate the beginning and end of the array of lights, to estimate the length of one dash, and to count the number of dashes. We found that the perceived length and localization of the moving-lights array approximately corresponded to the physical representation of the stimulus on the retina, whereas during the saccade a shorter phantom array was perceived and its localization varied greatly between subjects. The phantom array was always perceived as composed of fewer dashes than the moving-lights array. However, the size of one dash was perceived as similar to its projected length on the retina in both conditions. We therefore conclude that visual space is not compressed in size but is compressed in time during saccades: the visual system reduces information flow by a quantization mechanism and removes some repeated representations of the same object from perceptual space.
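
For intuition, the geometry of the retinal projection can be sketched with simple arithmetic, assuming an idealized constant-velocity eye (or stimulus) movement and a 50% duty cycle; the numbers below are illustrative and are not data from this study.

    def expected_array(flicker_hz, velocity_deg_s, duration_s, duty_cycle=0.5):
        """Idealized retinal projection of a point source flickering at flicker_hz
        while the eye (or the stimulus) moves at velocity_deg_s for duration_s."""
        n_dashes = int(flicker_hz * duration_s)                 # one dash per flash
        dash_len = velocity_deg_s * duty_cycle / flicker_hz     # length of one dash (deg)
        total_len = velocity_deg_s * duration_s                 # extent of the whole array (deg)
        return n_dashes, dash_len, total_len

    # Example: a 300 deg/s sweep lasting 50 ms across a 1 kHz flicker
    print(expected_array(1000, 300, 0.05))   # -> (50, 0.15, 15.0)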

Acknowledgements: This work was supported by the Research Council of Lithuania (grant S-MIP-21-56).

Talk 3, 8:45 am, 41.13

Timing of eye and hand movements during reaching depends on functional demands of gaze

Jolande Fooken1, Nethmi H. Illamperuma1, J. Randall Flanagan1; 1Queen's University

When reaching to visual targets, people are unable to shift their gaze away from the reach target to a secondary gaze target until after the reach target has been attained, a phenomenon known as gaze anchoring. Here, we compared gaze anchoring when reaching to a purely visual target versus a visual-haptic target that provided force feedback upon contact. We also examined gaze anchoring in a bimanual context in which participants were instructed to shift their gaze to the secondary target as soon as it appeared and, at the same time, move their other hand to the secondary target. In our task, human participants (n=28) used their right hand to move the handle of a robotic manipulandum to a primary visual or visual-haptic reach target. A secondary target was presented at the beginning, midpoint, or end of the reaching movement, and participants were instructed to make either an eye movement (unimanual trials) or a combined eye and left-hand movement (bimanual trials) to this target as soon as it appeared. We found that in unimanual trials with visual targets, saccades were initiated ~125 ms after the hand cursor 'visually contacted' the reach target. In contrast, with visual-haptic targets, saccades were initiated around the time of contact. This suggests that when haptic feedback was provided, central vision was not critical for guiding the hand as it approached the target or for checking target attainment. However, gaze anchoring was still observed with visual-haptic targets earlier in the reach, when gaze was engaged in directing the hand toward the target. In bimanual trials, gaze anchoring was observed, but anchoring did not extend to the left hand, whose onset was decoupled from gaze. Overall, our findings indicate that the timing of eye and hand movements in object manipulation is linked to the function of target fixations.

Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft (DFG) Research Fellowships Grant FO 1347/1-1 (JF). We thank Elissa Robichaud and Bethany Piekkola for their help with data collection and Martin York for technical support.

Talk 4, 9:00 am, 41.14

Predictive Looking and Predictive Looking Errors in Everyday Activities

Sophie Su1, Matthew Bezdek3, Tan Nguyen1, Christopher Hall2, Jeff Zacks1; 1Washington University in Saint Louis, 2University of Virginia, 3Elder Research

Where people look in pictures and movies has been shown to be based not only on the most salient point in the current scene, but also on predictions of what is going to happen next. The accuracy of these predictions fluctuates during movie watching. Some theories of event comprehension propose that spikes in prediction error can trigger working memory updating and the segmentation of ongoing experience into meaningful events. One previous study of predictive looking found evidence for this proposal (Eisenberg et al., 2018, CR:PI), but the paradigm used in that study could only obtain predictions intermittently, because it analyzed predictive looking to objects that an actor was about to contact. Here, we developed a continuous measure of prediction error by modeling predictive looking towards the actor's hands, operationalizing prediction error as the residuals from the predictive looking model. Viewers' gaze was tracked while they watched movies of everyday activities, and mixed-effects models were used to predict the actor's hand positions from viewers' previous gaze locations. Stepwise model comparison indicated that viewers look predictively: current gaze position accounted for hand location as far as 9 seconds into the future. We compared the time course of gaze predictions with that of predictions generated from a computational model of event comprehension and found that gaze predictions showed higher error at moments when the computational model had higher errors. Furthermore, spikes in gaze prediction error were predictive of increases in event segmentation in a separate group of viewers. These results support proposals that event segmentation is driven by spikes in prediction error, and this method promises a general approach for measuring ongoing prediction error noninvasively.
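
As a rough illustration of the modeling approach, the following sketch assumes a long-format table with hypothetical column names (subject, gaze_x, gaze_y, hand_x) sampled at a fixed frame rate; it regresses the hand position several seconds in the future on current gaze with a per-subject random intercept and takes the residuals as a continuous prediction-error signal. It is a simplified stand-in for the stepwise mixed-effects models described above, not the authors' code.

    import statsmodels.formula.api as smf

    def gaze_prediction_error(df, lag_frames):
        """df: per-frame data with columns 'subject', 'gaze_x', 'gaze_y', 'hand_x'."""
        d = df.copy()
        # Pair current gaze with the hand position lag_frames later (e.g., 9 s at the frame rate).
        d['future_hand_x'] = d.groupby('subject')['hand_x'].shift(-lag_frames)
        d = d.dropna(subset=['future_hand_x'])
        # Mixed-effects model: future hand position from current gaze, random intercept per subject.
        model = smf.mixedlm('future_hand_x ~ gaze_x + gaze_y', d, groups=d['subject']).fit()
        d['pred_error'] = model.resid.abs()   # residual magnitude as continuous prediction error
        return d, model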

Talk 5, 9:15 am, 41.15

Saccades to Partially Occluded Objects: Perceptual Completion Mediates Oculomotor Control

Michael Paavola1, Andrew Hollingworth1, Cathleen Moore1; 1University of Iowa

Oculomotor behavior is ultimately controlled by patterns of activity in retinotopically organized populations of neurons with visuomotor receptive fields, in areas such as the superior colliculus and frontal eye fields. In contrast, gaze is guided by non-retinotopic variables including task goals, attentional state, and the perceived three-dimensional structure of the environment. We investigated how the implied extent of perceptually completed surfaces behind occluders impacts saccade landing position while searching for small targets. Each trial included four disks and four truncated disks. On half of the trials, rectangles abutted the truncated disks, supporting the perception of completed disks behind occluding surfaces. Observers searched among the disks for small red or green dots, which appeared only when a saccade landed within a disk region. This design leveraged the tendency for saccades to land near the center of objects (e.g., Melcher & Kowler, 1999) to ask what constitutes an “object” to the eye-movement control system: the perceptually completed whole disk or the optically explicit truncated disk? Experiment 1 showed that distributions of landing position were biased toward the center of the implied whole disks and away from the optically explicit portion of the disk when occluders were present. Experiment 2 showed the same bias toward the center of the whole disk even though the colored dot was presented at the center of the visible image region, which would have given a strategic advantage to using the image-level representation. Experiment 3 used complementary-contrast regions to demonstrate that the landing-position bias shown in Experiments 1 and 2 was not due to low-level stimulus interactions caused by the presence of occluders. Taken together, these results indicate that oculomotor control mechanisms operate over object-level representations during the planning and execution of eye movements.
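
One way to quantify the landing-position bias described above, sketched here with hypothetical coordinates rather than the authors' analysis, is to project each landing point onto the axis running from the center of the visible (truncated) portion to the center of the implied whole disk.

    import numpy as np

    def landing_bias(landings, visible_center, whole_center):
        """landings: (n, 2) saccade endpoints for one item; returns the mean signed shift
        from the visible-portion center toward the implied whole-disk center."""
        landings = np.asarray(landings, dtype=float)
        vc = np.asarray(visible_center, dtype=float)
        wc = np.asarray(whole_center, dtype=float)
        axis = (wc - vc) / np.linalg.norm(wc - vc)     # unit vector, visible -> whole-disk center
        return float(np.mean((landings - vc) @ axis))  # positive values = bias toward the occluded side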

Talk 6, 9:30 am, 41.16

Perceiving the self-generated motion on the retina caused by smooth pursuit

Krischan Koerfer1, Tamara Watson2, Markus Lappe1; 1University of Muenster, 2Western Sydney University

Visual perception in humans is intermingled with eye movements. Despite the self-generated motion on the retina produced by smooth pursuit and saccades, we perceive a stable world, a marvelous achievement of the visual system. We developed a novel stimulus that leads to a loss of visual stability across saccades and that is perceived differently when pursued, highlighting the limitations of the visual system in compensating for eye movements and providing new insight into the underlying mechanisms. The stimulus consisted of a random dot distribution. Across frames, dots in a circular zone rotated to create a vortex motion. Independent of the first-order motion within it, the vortex then moved across the screen. We previously reported that the vortex cannot be pursued smoothly and that tracking it with frequent catch-up saccades causes a loss of visual stability. Here, we altered the vortex to make it pursuable by displacing the dots as they entered and left the motion pattern, creating a slim ring of flicker and discontinuity around the vortex. Once participants were able to pursue the altered vortex, visual stability was also restored. Interestingly, successful smooth pursuit also changed the perception of the vortex motion pattern. When asked to identify the previously observed motion pattern in a discrimination task, participants more often chose a pattern with additional first-order motion congruent with the pattern's movement across the screen than the correct pattern. This contrasted sharply with trials involving the unaltered vortex, where participants mostly identified the correct pattern. Consequently, this indicates that motion patterns are perceived based on the retinal image rather than their actual presentation on the screen, uncovering a novel interaction between smooth pursuit and perception.
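
A minimal per-frame sketch of such a vortex stimulus, with assumed units and parameters rather than the authors' exact implementation, rotates the dots inside a circular zone about the zone's center while the zone itself drifts across the screen; the pursuable variant would additionally re-draw dots as they enter or leave the zone.

    import numpy as np

    def update_frame(dots, center, radius, rot_deg_per_frame, drift_per_frame):
        """dots: (n, 2) dot positions; center: vortex center; drift_per_frame: (dx, dy)."""
        dots = np.asarray(dots, dtype=float)
        center = np.asarray(center, dtype=float)
        inside = np.linalg.norm(dots - center, axis=1) < radius
        theta = np.deg2rad(rot_deg_per_frame)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        dots[inside] = (dots[inside] - center) @ rot.T + center   # first-order rotation in the zone
        # (For the pursuable variant, dots crossing the zone boundary would be re-randomized here.)
        return dots, center + np.asarray(drift_per_frame, dtype=float)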

Acknowledgements: This work was supported by the German Research Foundation (DFG La 952-7) and has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 734227.