Eye Movements: Neural mechanisms

Talk Session: Saturday, May 18, 2024, 10:45 am – 12:30 pm, Talk Room 1
Moderator: J. Patrick Mayo, University of Pittsburgh

Talk 1, 10:45 am, 22.11

A transient signal in foveal superior colliculus neurons for jumpstarting peripheral saccadic orienting

Tong Zhang1, Ziad Hafed1; 1University of Tuebingen

The superior colliculus (SC) is critical for saccade generation. Recent work has shown that, despite bursting at times other than saccades, SC population activity at the time of saccadic motor bursts is more temporally aligned than it is for visual bursts (Jagadisan & Gandhi, 2022). Similarly, population activity in motor bursts resides in different subspaces from that in visual bursts (Baumann et al., 2023), and even the sensory signal embedded in SC motor bursts is transformed relative to visual bursts, such that the same individual neurons “prefer” different visual features in the two bursting epochs (Baumann et al., 2023). However, how might such a transformation from a visual regime to a motor regime be realized? Here we first show that when a planned saccade is finally released by a go signal (removal of a fixation spot), peripheral SC neurons (representing the saccade target location) exhibit a robust, short-latency pause in spiking before the motor bursts eventually erupt. This pause starts within ~50 ms of the go signal, and it is stimulus-dependent (e.g., a stronger firing-rate dip for a salient peripheral stimulus). The pause still occurs, albeit more weakly, for saccades to a small spot or to a blank. When we then recorded from foveal SC neurons in similar tasks, we found that these neurons actually burst after the go signal rather than pausing. Remarkably, these foveal bursts occurred (and peaked) several milliseconds earlier than the pauses in the peripheral neurons, and they were not explained by offset responses to the removal of the fixation spot. Foveal bursts also occurred when releasing memory-guided saccades (with no peripheral visual targets), and they were not sensitive to peripheral target appearance. Thus, we identified a transient foveal SC signal that jumpstarts peripheral saccadic orienting, likely facilitating a representational transformation needed for saccade motor bursts to occur.

Talk 2, 11:00 am, 22.12

Mixed Selectivity for Target Selection Biases in the Superior Colliculus

Abe Leite1, Hossein Adeli3, Robert M. McPeek2, Gregory J. Zelinsky1; 1Stony Brook University, 2SUNY College of Optometry, 3Columbia University

How does the brain flexibly integrate the multiple sources of information needed to control arbitrary goal-directed behavior? Mixed selectivity theory argues that this cognitive flexibility is achieved through flexible neural representations, with most neurons encoding nonlinear (and, in some articulations, dynamic) combinations of the stimulus factors. In this view, only fundamental computations underlying many behaviors merit neurons dedicated specifically to them. Despite its importance, the question of how mixed representations shape behavior in an attention-demanding task remains open. Our study applies mixed selectivity theory to visual attention by analyzing three factors known to bias saccade target selection during search: bottom-up feature contrast, top-down target guidance, and the history of previous object fixation (inhibitory tagging). We analyzed how single-neuron responses in the rhesus superior colliculus encode these three attention-guiding properties of an object landing in the response field during eye movements in visual search, then determined mixed selectivity using two methods: a standard nested GLM and our extension of partial information decomposition (PID) applied to this behavior. We found that (1) our application of PID, in contrast to standard GLM analyses, captures the dynamics of neural selectivity over time and the subtleties of how a neuron mixes multiple variables; (2) there is ample evidence for cells that sustain their encoding of multiple factors, and also for cells whose selectivity varies over the time course of target selection; and (3) in addition to these mixed selectivity neurons, a substantial group of neurons is uniquely selective to whether stimuli were previously fixated while searching, suggesting that inhibitory tagging may be a fundamental computation supporting overt visual attention.
We conclude that both static and dynamic forms of mixed selectivity are used to represent attention biases in the superior colliculus, and that the colliculus may participate in a neural circuit dedicated to inhibitory tagging.
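The nested-GLM side of the analysis described above can be illustrated with a minimal numpy sketch (this is not the authors' code; the factors, coefficients, and trial counts below are hypothetical). A neuron with nonlinear mixed selectivity is simulated as one whose rate depends on the interaction of two binary factors; a nested-model F statistic then asks whether the interaction term explains variance beyond an additive model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trials: two attention-guiding factors (hypothetical stand-ins for
# feature contrast and target guidance), each coded -1/+1 across 400 trials.
n = 400
contrast = rng.choice([-1.0, 1.0], size=n)
guidance = rng.choice([-1.0, 1.0], size=n)

# A nonlinearly mixed-selective "neuron": its rate depends on the interaction.
rate = 10 + 2 * contrast + 1.5 * guidance + 3 * contrast * guidance
spikes = rate + rng.normal(0, 1.0, size=n)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

ones = np.ones(n)
X_additive = np.column_stack([ones, contrast, guidance])
X_full = np.column_stack([ones, contrast, guidance, contrast * guidance])

# Nested-model comparison: does the interaction term explain extra variance?
rss0, rss1 = rss(X_additive, spikes), rss(X_full, spikes)
f_stat = (rss0 - rss1) / (rss1 / (n - X_full.shape[1]))
print(f"F statistic for interaction term: {f_stat:.1f}")
```

A large F statistic for the interaction term is the nested-GLM signature of nonlinear mixing; the PID extension mentioned in the abstract goes further by decomposing the information carried uniquely, redundantly, and synergistically by each factor over time.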

Acknowledgements: This project is based upon work supported by the National Institutes of Health under Grant No. 5R01EY030669-05 and work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 2234683.

Talk 3, 11:15 am, 22.13

Neuronal population estimates of spatial attention are robust to the presence of microsaccades

Shawn Willett1,2, Patrick Mayo1,2; 1University of Pittsburgh Department of Ophthalmology, 2Center for the Neural Basis of Cognition

Neurons in visual area V4 exhibit attention-related changes in firing rate. Recent work (Lowet et al., 2018) proposed that attentional modulation of V4 activity occurred only after a microsaccade towards the attended location, suggesting that microsaccades gate attention-related effects. However, other work (Yu et al., 2022) reported that attentional modulation of neuronal activity in the superior colliculus (SC) occurred before and after microsaccades, and in the absence of microsaccades. Thus, microsaccades may not contribute to attention-related effects in the SC. To determine whether these contrasting findings emerge from differences in brain structure or task demands, we investigated population measures of attention in V4 aligned to microsaccade onset while monkeys performed a visual-spatial attention task (Mayo and Maunsell, 2016) similar to the task used in prior SC work. Monkeys detected an orientation change in one of two simultaneously presented oriented Gabors. We cued attention to one target using an 80% valid visual cue on instruction trials that occurred prior to each block of test trials. During each trial, monkeys fixated until they reported a change in orientation by saccading to the changed target. We recorded over 3500 V4 units from two bilaterally implanted Utah arrays across 54 sessions. We used demixed principal component analysis (dPCA; Kobak et al., 2016) to extract an attention-related latent axis from our high-dimensional neuronal activity. We projected our microsaccade-aligned neural population activity onto this attention-related axis and found that attention-related population activity was flat when aligned to microsaccade onset, suggesting that attention modulates V4 activity regardless of microsaccades. Trials in which microsaccades occurred appeared identical to the unchanging activity observed in trials without microsaccades.
Our results indicate that the modulation of V4 neural activity by attention and microsaccades is largely separable, and that attention modulates V4 activity regardless of the occurrence of a microsaccade.
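The projection step of this analysis can be sketched as follows. This is a simplified stand-in rather than the authors' pipeline: full dPCA jointly demixes several task variables, whereas this sketch uses the normalized difference of condition means as the attention axis; the population size, trial counts, and modulation strength are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical V4-like population: 50 units, 200 trials per attention condition.
n_units, n_trials = 50, 200
mod = rng.normal(0, 1, n_units)  # per-unit attentional modulation pattern
attend_in = (rng.normal(0, 1, (n_units, n_trials))
             + 0.5 * mod[:, None])            # attended: shifted along `mod`
attend_out = rng.normal(0, 1, (n_units, n_trials))  # unattended: baseline

# Attention axis: normalized difference of condition means (a linear stand-in
# for the dPCA attention component).
axis = attend_in.mean(axis=1) - attend_out.mean(axis=1)
axis /= np.linalg.norm(axis)

# Project single-trial population activity onto the attention axis.
proj_in = axis @ attend_in
proj_out = axis @ attend_out
print(f"mean projection, attend-in:  {proj_in.mean():.2f}")
print(f"mean projection, attend-out: {proj_out.mean():.2f}")
```

In the study, the same kind of projection is computed in time bins around microsaccade onset; a flat projection trace across those bins is what indicates that the attention signal is unaffected by microsaccades.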

Talk 4, 11:30 am, 22.14

Prefrontal neural activity predicts and mitigates spatial uncertainty in a gaze task

Vishal Bharmauria1, Adrian Schütz2, Xiaogang Yan1, Hongying Wang1, Frank Bremmer2, John Douglas Crawford1; 1Center for Vision Research & Vision: Science to Applications (VISTA), York University, 2Department of Neurophysics, Philipps-Universität Marburg and Center for Mind, Brain and Behavior – CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen

To predict the future, the brain must integrate past with current sensory information. Research has suggested that prefrontal cortex predicts the timing of events (Fu et al., 2023). Here, we investigated whether it also predicts spatial uncertainty. To do this, we recorded neural activity in the frontal (FEF) and supplementary (SEF) eye fields of two rhesus macaques, trained to saccade toward remembered visual targets (T) in the presence of a landmark (L) that was surreptitiously shifted to a new position (L’) by a fixed amplitude in one of eight randomized directions arranged circularly around L. Previously, we showed that this results in retrospective shifts in FEF/SEF memory and gaze signals (Bharmauria et al., 2020, 2021). Here, we examined the period from the initial visual response to 300 ms after the landmark shift in 147/68 spatially tuned FEF/SEF neurons for prospective coding of this shift. We used a model-fitting technique to test memory delay coding along a continuum between the original target (T) and its landmark-shifted counterpart (T’). Remarkably, just before the landmark shift, SEF coded a shift toward T’. Since the shift direction was randomized, we hypothesized that SEF might be ‘guessing’ the direction of the shift. We tested this using a 2D analysis with the real shift (T’) rotated to the right and seven other imaginary shifts arranged circularly. Shortly after the visual response, response fields developed a donut-like prediction in all directions. This did not occur in shuffled controls and could not be accounted for by attraction toward landmark position (TL) or gaze error (TG). Eventually, after the real shift, this predictive ‘donut’ code shifted toward the actual L’. These data suggest that, after thousands of training trials, the monkey brain, specifically SEF, created a guessing strategy based on learned probabilities and anticipation. This might allow the brain to optimize behavior and mitigate spatial uncertainty in the surrounding world.

Acknowledgements: Canadian Institutes of Health Research (CIHR); Vision: Science to Applications (VISTA) Program; Deutsche Forschungsgemeinschaft (DFG)

Talk 5, 11:45 am, 22.15

Behavioral and neural correlates of impaired scene perception following saccadic eye movements

Yong Min Choi1, Tzu-Yao Chiu1, Julie D. Golomb1; 1Department of Psychology, The Ohio State University

The visual input projected to the retina shifts drastically across saccadic eye movements. Although we are not aware of it, perception of simple visual stimuli presented around the time of a saccade is impaired (Burr et al., 1994). However, the extent to which this post-saccadic impairment influences high-level visual scene perception remains unclear. We conducted behavioral and fMRI experiments examining the processing of scene images containing different spatial frequency content presented at different delays following a saccade. First, subjects performed a 6-way scene categorization task (beach, mountain, etc.) on images presented 5, 16, 50, 158, or 500 ms after saccade completion. We found lower scene categorization accuracy at the 5 ms and 16 ms post-saccadic delays compared to longer delays, for both low- and high-spatial-frequency filtered images, suggesting broadly impaired scene perception lasting less than 50 ms after saccade offset. To further investigate which visual information is impaired, this time in the absence of an explicit categorization task, we conducted an fMRI experiment in which subjects performed a 1-back task on scene images while making saccades. Short and long post-saccadic delay trials were sorted post hoc using eye-tracking data. Using an RSA-based decoding analysis, we assessed scene category information (urban vs. nature) in scene-selective brain areas, and low-level visual information (high vs. low spatial frequency) in early visual cortex. We found decreased scene category information in the posterior parahippocampal place area on short versus long post-saccadic delay trials, consistent with the behavioral impairment. Interestingly, lower-level visual information in a scene image was less impaired; spatial frequency information in early visual cortex did not differ significantly between short and long post-saccadic delay trials.
Taken together, the current study presents novel evidence for impaired processing of complex scenes following saccades that may be driven by selectively interrupted neural representations of high-level scene content.
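The RSA-based readout of category information can be sketched in a few lines of numpy. This is a generic illustration of the method, not the authors' analysis: the voxel counts, image counts, and signal strength are hypothetical, and a real analysis would compare this correlation between short- and long-delay trials.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical voxel patterns for 8 scene images (4 urban, 4 nature), 100
# voxels each; images within a category share a common category pattern.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])        # 0 = urban, 1 = nature
urban_sig, nature_sig = rng.normal(0, 1, (2, 100))
patterns = (rng.normal(0, 1, (8, 100))
            + 0.8 * np.outer(1 - labels, urban_sig)
            + 0.8 * np.outer(labels, nature_sig))

# Neural RDM: 1 - Pearson correlation between image patterns.
z = patterns - patterns.mean(axis=1, keepdims=True)
z /= z.std(axis=1, keepdims=True)
neural_rdm = 1 - (z @ z.T) / patterns.shape[1]

# Model RDM for category: 0 within urban/nature, 1 between categories.
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

# Category information: correlation of the off-diagonal RDM entries.
iu = np.triu_indices(len(labels), k=1)
rsa_corr = np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]
print(f"neural-model RDM correlation: {rsa_corr:.2f}")
```

A drop in this neural-model correlation on short-delay trials, relative to long-delay trials, is the kind of effect the abstract reports for the posterior parahippocampal place area.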

Acknowledgements: NIH R01-EY025648 (JG)

Talk 6, 12:00 pm, 22.16

Steering, optic flow, and compensatory eye movements in cortically blind drivers

Arianna P. Giguere1, Matthew R. Cavanaugh2,3, Brett R. Fajen4, Duje Tadin2, Krystel R. Huxlin2,3, Gabriel J. Diaz1,2; 1Rochester Institute of Technology Center for Imaging Science, 2University of Rochester Center for Visual Science, 3Flaum Eye Institute, University of Rochester Medical Center, 4Rensselaer Polytechnic Institute Department of Cognitive Science

It is well known that the control of steering (e.g., when driving) relies on visual information from optic flow (Kountouriotis et al. 2016). Because optic flow is spatially correlated and accurate heading judgments can be made using a sparse and partial flow field (Warren and Kurtz 1992), it is surprising that drivers with cortical blindness (CB) across ¼ to ½ of their visual field demonstrate more variable lane positioning than their visually intact counterparts (Bowers et al. 2010). We hypothesized that this deficit arises because residual noise introduced in the “blind” field affects optic flow processing in the service of steering. To test this hypothesis, we analyzed steering behavior in 10 CB drivers and 5 visually intact controls immersed in a virtual reality steering task. Participants were asked to maintain a center-lane position while traveling at 19 m/s on a procedurally generated one-lane road. Turn direction (left/right) and turn radius (35, 55, or 75 m) were manipulated. Additionally, optic flow density was indirectly manipulated through variation in environmental texture density (low, medium, high). Analysis of the average distance from the inner road edge revealed that all CB drivers were biased away from their blind side, but only controls and those with right-sided deficits decreased their distance to the inner road edge on medium and high optic flow density trials. The difference between these groups and the steering behavior of left-sided CBs, who showed no impact of optic flow, could not be attributed to age differences, time since stroke, or sparing in the central 10° of the visual field. Our results suggest that left-sided CBs place less weight on optic flow than right-sided CBs and controls. Preliminary analysis of gaze data suggests that this insensitivity to variations in optic flow might also be attributed to compensatory gaze behavior.

Acknowledgements: NIH 1R15EY031090 and Research to Prevent Blindness' Low Vision Research Award

Talk 7, 12:15 pm, 22.17

A rare case of bilateral damage to cortical motion processing areas 40 years after patient L.M.

Miriam Spering1, Philipp Kreyenmeier1, Juana Ayala Castañeda1, Jason Barton1; 1University of British Columbia

In 1983, Zihl and colleagues reported the case of patient L.M., who had suffered bilateral damage to the lateral temporal-occipital cortex and showed a “disturbance of movement vision in a rather pure form” [Zihl, von Cramon, & Mai, Brain 1983; p. 313], manifesting in selective deficits in motion perception, smooth pursuit, and manual tracking of moving targets, particularly at higher speeds. These findings suggested that a human homologue of the middle temporal area (MT, or area V5) was located in this region. Here we present 19-year-old female patient C.C., who suffered encephalitis at age 3, with recent MRI showing bilateral damage to lateral occipitotemporal and medial occipitoparietal cortex. Similar to L.M., patient C.C. reports feeling overwhelmed in crowded areas, struggling with ball sports, and inaccuracy in fine motor tasks that involve moving objects. We tested C.C.’s smooth pursuit eye movements to visible and occluded targets and her ability to track and rapidly intercept objects that moved unpredictably. Compared to healthy young adults, C.C.’s smooth pursuit had a reduced velocity gain (0.75) and was frequently interrupted by catch-up saccades, even in response to slow (10°/s) targets. When the target was temporarily occluded (ramp-occlusion-ramp for 800 ms each), pursuit dropped to zero velocity during occlusion and did not predictively accelerate before target reappearance. These motion prediction deficits extended to the patient’s performance in naturalistic interception tasks. Whereas pointing accuracy was high (interception error M = 0.8°) for objects moving along simple, horizontal trajectories, performance degraded significantly for complex flyball (M = 2.4°) and occluded trajectories (M = 3.6°), with almost no ability to discriminate different trajectory types.
These findings provide neuropsychological evidence for a role of C.C.’s damaged areas in the control of predictive eye and hand movements to moving objects and show that there is little compensation for these deficits.