V-VSS, June 1-2

Eye Movements, Attention

Talk Session: Wednesday, June 1, 2022, 6:30 – 7:45 pm EDT, Zoom Session


Talk 1, 6:30 pm, 75.61

TALK 1 CANCELLED - Asymmetric binocular eye movements during monocular fixation

Arvind Chandna1, Devashish Singh1, Stephen Heinen1; 1Smith Kettlewell Eye Research Institute

Hering’s law states that the eyes are equally innervated and therefore move together. However, a recent study (Chandna et al., 2021) demonstrated that an occluded eye moves asynchronously with the viewing eye during smooth pursuit on the midline, in apparent violation of Hering’s law. Here we investigate differences in fixational eye movements during binocular and monocular viewing. Participants underwent a multi-part clinical examination to ensure normal acuity, ocular alignment, and binocular function. They were then instructed to fixate, for 20 seconds, the central cross-over point of the letter X within a surrounding array of 20 letters. The array was displayed on the midline at a distance of 67 cm. Participants performed the fixation task under binocular viewing and under monocular viewing with either the left or right eye occluded. The occluder was an IR-passable filter that allowed recording from the covered eye. A Tobii Pro Spectrum eye tracker sampled eye position at 1200 Hz. We found higher variability of the occluded eye’s fixation dispersion relative to the viewing eye (during either monocular or binocular viewing), quantified with a two-dimensional bivariate contour ellipse area (BCEA): binocular BCEA = 0.85, monocular viewing BCEA = 1.035, monocular covered BCEA = 1.535. The occluded eye’s drift speed was also higher than that of the viewing eye, and higher than that of either eye during binocular viewing (binocular drift speed = 5.525 deg/s, monocular viewing drift speed = 5.113 deg/s, monocular covered drift speed = 5.708 deg/s). This occurred regardless of which eye was dominant or covered. The results provide evidence that a covered eye moves differently from a viewing one during monocular fixation, suggesting a lack of equal innervation to the two eyes.
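The BCEA and drift-speed measures reported above have standard closed forms; a minimal Python/NumPy sketch, assuming gaze samples in degrees, a bivariate-normal fit with the conventional 68.2% coverage area, and a simple median sample-to-sample definition of drift speed (the study's exact computation is not specified in the abstract):

```python
import numpy as np

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area covering proportion p of gaze
    samples, assuming a bivariate normal distribution (units^2 of input)."""
    k = -2.0 * np.log(1.0 - p)                # chi-square quantile, 2 dof
    sx, sy = np.std(x, ddof=1), np.std(y, ddof=1)
    rho = np.corrcoef(x, y)[0, 1]             # horizontal-vertical correlation
    return k * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

def drift_speed(x, y, fs):
    """Median sample-to-sample drift speed (input units per second);
    one simple definition among several in common use."""
    return np.median(np.hypot(np.diff(x), np.diff(y))) * fs
```

For uncorrelated, isotropic gaze noise with standard deviation sigma, `bcea` approaches 2.291 * pi * sigma^2, the familiar 68.2% BCEA constant.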

Talk 2, 6:45 pm, 75.62

Isolating the neural substrates of visually guided orienting of attention in healthy humans

John McDonald1, Daniel Tay1, David Prime2, Steven Hillyard3,4; 1Simon Fraser University, Burnaby, BC, Canada, 2Douglas College, New Westminster, BC, Canada, 3University of California San Diego, La Jolla, California, 4Leibniz Institute for Neurobiology, Magdeburg, Germany

The study of covert orienting has been an important impetus for the field of cognitive neuroscience. Seminal reaction-time studies demonstrated that a suddenly appearing visual stimulus attracts attention involuntarily, but the neural processes associated with visually guided attention orienting have been difficult to isolate because they are intertwined with sensory processes that trigger the orienting. Here, we developed a framework for disentangling orienting activity from purely sensory activities using scalp recordings of event-related potentials (ERPs). The working hypothesis was that sensory processing of a lateral abrupt onset would drive timing differences between visual evoked activities (e.g., P1 and N1 peaks) recorded contralateral and ipsilateral to the stimulus (because of the projections from each eye to the contralateral visual cortex and the callosal projections connecting the two hemispheres), while covert orienting to the stimulus would drive amplitude differences between contralateral and ipsilateral ERPs. We tested this hypothesis by comparing ERPs elicited by lateral visual stimuli under two conditions: one in which participants discriminated a feature of the lateral stimulus (attend-lateral) and one in which participants responded to some other, non-lateralized stimulus (attend-other). It was presumed that covert orienting would be necessary in the attend-lateral condition but would be minimized in the attend-other condition. We identified an early positive ERP deflection over the ipsilateral visual cortex that was associated with the covert orienting of visual attention. Across five experiments, this ipsilateral visual orienting activity (VOA) was linked with behavioral measures of orienting (i.e., it was larger when the stimulus was detected rapidly than when it was detected more slowly), and its onset occurred prior to unrestrained eye movements towards the targets. The VOA appears to be a specific neural index of the visually guided orienting of attention to a stimulus that appears abruptly in an otherwise uncluttered visual field.
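The contralateral/ipsilateral amplitude comparison described above is conventionally computed as a contra-minus-ipsi difference wave, which cancels activity common to both hemispheres; a minimal sketch, assuming trial-averaged ERPs for a mirror-symmetric electrode pair are available (the dictionary layout and channel labels are illustrative, not details from the study):

```python
import numpy as np

def contra_minus_ipsi(left_stim, right_stim):
    """Contra-minus-ipsi ERP difference wave for a lateral electrode pair.
    left_stim / right_stim: dicts with 'left_ch' and 'right_ch' arrays
    (trial-averaged ERPs for left- and right-hemisphere sites).
    Non-lateralized sensory activity cancels in the subtraction, leaving
    lateralized components such as orienting-related activity."""
    contra = (left_stim['right_ch'] + right_stim['left_ch']) / 2.0
    ipsi = (left_stim['left_ch'] + right_stim['right_ch']) / 2.0
    return contra - ipsi
```

An ipsilateral positivity like the VOA would appear as a negative deflection in this contra-minus-ipsi wave (or positive if the subtraction is reversed).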

Acknowledgements: Funded by NSERC and the Canada Research Chairs program.

Talk 3, 7:00 pm, 75.63

Time Compression Induced by Voluntary-Action and Its Underlying Mechanism: The Role of Eye Movement in Intentional Binding (IB)

Zheng Huang1, Huan Luo1, Huihui Zhang1; 1Peking University

A sensory event shortly following a voluntary action is perceived as occurring earlier than the same event without a preceding action, a phenomenon called intentional binding (IB). IB exemplifies the close link between perception and action and is widely used as an implicit measure of the sense of agency, but its underlying mechanism is poorly understood. Typically, in IB studies, subjective time is measured by reading an analog clock, during which eye movements are inevitable. This study investigates the role of eye movements, especially saccades, in the temporal dynamics of the IB effect. We recorded participants’ eye movements while they tracked an analog clock’s hand that rotated with a period of 2.5 s/cycle. In the action condition, participants pressed a button voluntarily and received a tone 250 ms or 750 ms later. In the no-action condition, no button press was needed and the tone was delivered at a random time. Participants reported the tone onset time by referring to the clock-hand position. Consistent with previous findings, time was compressed in the voluntary-action condition compared to the no-action condition (i.e., the IB effect), and the effect was stronger with the 250 ms delay than with the 750 ms delay. For eye movements, in the 250 ms delay condition, the probability of a saccade during the delay period was lower in the voluntary-action condition than in the no-action condition, and trials with saccades induced less time compression than trials without saccades. We did not observe these effects in the 750 ms delay condition. Taken together, our results suggest that hand movements, eye movements, and perception are closely linked, and that the time compression after a voluntary action is possibly due to the lack of catch-up saccades.
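The IB effect described above can be quantified as the shift in tone-onset judgment error between conditions; a minimal sketch, assuming reported and actual onset times in ms and one common sign convention (positive = compression toward the action), neither of which is specified in the abstract:

```python
import numpy as np

def binding_effect(reported_action, actual_action,
                   reported_noaction, actual_noaction):
    """Intentional-binding (time-compression) effect in ms: how much
    earlier tone onsets are reported after a voluntary action, relative
    to the no-action baseline. Positive values = perceived compression."""
    err_action = np.mean(np.asarray(reported_action) - np.asarray(actual_action))
    err_noaction = np.mean(np.asarray(reported_noaction) - np.asarray(actual_noaction))
    return err_noaction - err_action
```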

Talk 4, 7:15 pm, 75.64

Accurate and automated delineation of V1-V3 boundaries by a CNN

Noah C. Benson1, Shaoling Chen2, Hiromasa Takemura3,4,5,6, Jonathan Winawer2; 1University of Washington, 2New York University, 3National Institute for Physiological Sciences, Okazaki, Japan, 4Graduate University for Advanced Studies, SOKENDAI, Hayama, Japan, 5National Institute of Information and Communications Technology, Koganei, Japan, 6Osaka University

Introduction. Delineation of retinotopic map boundaries in human visual cortex is a time-consuming task. Automated methods based on anatomy (cortical folding pattern; Benson et al., 2014; DOI:10.1016/j.cub.2012.09.014) or a combination of anatomy and retinotopic mapping measurements (Benson & Winawer, 2018; DOI:10.7554/eLife.40224) exist, but human experts are more accurate than these methods (Benson et al., 2021; DOI:10.1101/2020.12.30.424856). Convolutional neural networks (CNNs) are powerful tools for image processing, and recent work has shown they can predict polar angle and eccentricity maps in individual subjects based on anatomy (Ribeiro et al., 2021; DOI:10.1016/j.neuroimage.2021.118624). We hypothesized that a CNN could predict V1, V2, and V3 boundaries in individual subjects with greater accuracy than existing methods. Methods. We used the expert-drawn V1-V3 boundaries from Benson et al. (2021) for the subjects in the Human Connectome Project 7 Tesla Retinotopy Dataset (Benson et al., 2018; DOI:10.1167/18.13.23) as training (N=135) and test data (N=32). We constructed a U-Net CNN with a ResNet-18 backbone and trained it with either anatomical (curvature, thickness, surface area, and sulcal depth) or functional (retinotopic) maps as input. Results. CNN predictions outperformed other methods. The median Dice coefficients between predicted and expert-drawn labels on the test dataset were 0.77 for the CNN trained on anatomical data and 0.90 for the CNN trained on functional data. In comparison, coefficients for existing methods based on anatomical or anatomical-plus-functional data were 0.70 and 0.72, respectively. These results demonstrate that even with a small training dataset, CNNs excel at accurately and automatically labeling visual areas in individual human brains. This method can facilitate vision-science neuroimaging experiments by making an otherwise difficult and subjective process fast, precise, and reliable.
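The Dice coefficient used above to score predicted against expert-drawn labels has a simple standard definition; a minimal sketch for binary label maps (treating each visual-area label as a per-vertex mask is an assumption for illustration, not a detail given in the abstract):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary label
    maps (e.g., a predicted and an expert-drawn mask for one visual
    area); 1.0 means perfect overlap, 0.0 means no overlap."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```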

Acknowledgements: This work was supported by NIH NEI award 1R01EY033628.

Talk 5, 7:30 pm, 75.65

Topological and dynamical features of source localized EEG networks in presaccadic visual processing

Amirhossein Ghaderi1,2, Matthias Niemeier3,1,2, John Douglas Crawford1,2,3,4,5,6; 1Centre for Vision Research, York University, Toronto, ON, Canada, 2Vision Science to Applications (VISTA) Program, York University, Toronto, ON, Canada, 3Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada, 4Department of Biology, York University, Toronto, ON, Canada, 5Department of Kinesiology and Health Sciences, Toronto, ON, Canada, 6Department of Psychology, York University, Toronto, ON, Canada

The topology and dynamics of the cortical networks that generate presaccadic neural signals remain poorly understood, in particular how they interact with simultaneously presented visual stimuli. Here, we used different approaches from graph theory analysis (GTA) and electroencephalography (EEG) to evaluate the topology and dynamics of functional brain networks in the perisaccadic interval. EEG was recorded via 64 channels in two behavioral conditions (fixation or saccade). Participants (N=21) were pre-cued with a series of 1-3 grids (three horizontal lines, 10° by 10°) located 5° below the central fixation point. 100 ms later, a stimulus (three vertical lines; same size/location) was briefly presented (for 70 ms). In the saccade condition, a left/right shift of the fixation point during the interstimulus interval triggered a saccade after the second stimulus. Source localization (SL) was performed on the 200 ms period following the saccade cue, or the equivalent time during fixation trials. Lagged coherences were calculated between all pairs of 84 Brodmann areas. SL and GTA both identified major network hubs near the frontal and parietal eye fields, with widespread cortical connectivity. Other GTA measures (clustering coefficient, global efficiency, energy, entropy) showed that network segregation, integration, synchronizability, and complexity were enhanced during the perisaccadic interval. Further, these network properties significantly interacted with stimulus repetition, altering both hubs and network topography. These data suggest a network mechanism for enhanced visual information processing and propagation in the presaccadic interval.
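Two of the GTA measures named above (mean clustering coefficient and global efficiency) can be computed directly from an adjacency matrix; a minimal sketch for an unweighted, undirected graph, assuming the lagged-coherence matrix has already been thresholded into binary adjacency (a preprocessing step the abstract does not specify):

```python
import numpy as np

def graph_measures(adj):
    """Mean clustering coefficient and global efficiency of an
    unweighted, undirected graph (0/1 adjacency matrix, no self-loops)."""
    a = np.asarray(adj, dtype=float)
    n = len(a)
    # Clustering: triangles around each node / possible connected triplets.
    deg = a.sum(axis=1)
    triangles = np.diag(a @ a @ a) / 2.0
    possible = deg * (deg - 1) / 2.0
    local = np.divide(triangles, possible, out=np.zeros(n), where=possible > 0)
    clustering = local.mean()
    # Global efficiency: mean inverse shortest-path length (Floyd-Warshall;
    # unreachable pairs contribute 0 via 1/inf).
    dist = np.where(a > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    off_diag = ~np.eye(n, dtype=bool)
    efficiency = (1.0 / dist[off_diag]).mean()
    return clustering, efficiency
```

For a fully connected triangle both measures equal 1.0; sparser graphs yield lower values, which is the sense in which segregation (clustering) and integration (efficiency) are compared across conditions.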

Acknowledgements: Grant Support: an NSERC Discovery Grant and a VISTA Fellowship, funded by CFREF.