VSS 2022, May 13-18

Methods: New ideas and emerging trends

Talk Session: Sunday, May 15, 2022, 8:15 – 9:45 am EDT, Talk Room 2
Moderator: Jon Matthis, Northeastern University

Talk 1, 8:15 am, 31.21

Adaptive methods to quickly estimate psychometric functions: the case of Psi-marg-grid and the effect of non-monotonicity

Adrien Chopin1; 1Sorbonne Université, INSERM, CNRS

Introduction: Estimating psychometric thresholds can be challenging, especially for stereoacuities (Chopin et al., 2019). Adaptive estimation methods generally assume monotonically increasing psychometric functions (Levitt, 1971), an assumption that can be incorrect, for example when testing stereoacuity (Kane et al., 2014). I asked how adaptive methods are affected by non-monotonicity in psychometric functions.

Methods: I performed Monte-Carlo simulations of 90-trial threshold estimations with six methods: the method of constant stimuli (MOCS), ZEST (King-Smith et al., 1991), Psi (Kontsevich & Tyler, 1999), Psi-marginal (Prins, 2013), Psi-grid (Doire, Brookes, & Naylor, 2017), and a new algorithm, Psi-marg-grid, which combines Psi-marginal’s marginalization over nuisance parameters with Psi-grid’s adaptive search grid. I tested four conditions in which the simulated ideal observer’s function and/or the function assumed by the algorithm were monotonic or non-monotonic. Parameters other than the threshold were randomized. Bias was measured as the median difference from the simulated threshold, accuracy as the mean absolute error, and test-retest reliability as the limit of agreement between two simulations of a given threshold (in log units). The chance level was calculated as the probability of obtaining a threshold better than a critical value when simulating a random observer.

Results: When assuming monotonicity, no algorithm produced meaningful estimates with non-monotonic observers (bias > 11822%). When assuming non-monotonicity, Psi-marg-grid showed the best overall results (bias 0.6%, accuracy 8.92%, test-retest 0.159, chance level 1%). Only Psi-marginal had a better test-retest (0.154), but it lacked accuracy with non-monotonic observers (error 29.4%). MOCS had the worst test-retest (0.6) and chance level (53.2%). Psi’s and Psi-grid’s chance levels were high (18.9% and 17.8%, respectively). ZEST lacked accuracy with non-monotonic observers (error 122%).

Conclusion: It is important to use methods that assume non-monotonicity, because none of the tested methods was otherwise robust to non-monotonicity in the observer’s psychometric function. When assuming non-monotonicity, Psi-marg-grid was the best of the tested methods.
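The scoring metrics defined above translate directly into code. The following is a minimal Python sketch of those definitions, not the author’s simulation code; expressing bias and accuracy in percent and using a 1.96 × SD limit of agreement are assumptions.

```python
import numpy as np

def evaluate_estimates(estimates, retest_estimates, true_threshold):
    """Score repeated Monte-Carlo threshold estimates for one method.

    `estimates` and `retest_estimates` are paired arrays of estimated
    thresholds from two simulations of the same observer;
    `true_threshold` is the simulated value.
    """
    # Bias: median (percent) difference from the simulated threshold.
    bias = np.median((estimates - true_threshold) / true_threshold) * 100
    # Accuracy: mean absolute (percent) error.
    accuracy = np.mean(np.abs(estimates - true_threshold) / true_threshold) * 100
    # Test-retest: limit of agreement between the paired runs, in log units.
    diffs = np.log10(estimates) - np.log10(retest_estimates)
    limit_of_agreement = 1.96 * np.std(diffs, ddof=1)
    return bias, accuracy, limit_of_agreement
```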

Acknowledgements: This research was supported by the Chair SILVERSIGHT ANR-18-CHIN-0002, by the IHU FOReSIGHT ANR-18-IAHU-01, and by the LabEx LIFESENSES (ANR-10-LABX-65).

Talk 2, 8:30 am, 31.22

Opto-Array: an implantable array of LEDs built for behavioral optogenetic experiments in nonhuman primates

Reza Azadi1, Emily Lopez1, Rishi Rajalingham2, Michael Sorenson3, Simon Bohn4, Arash Afraz1; 1Laboratory of Neuropsychology, NIMH, NIH, Bethesda, MD, 2Brain and Cognitive Science, MIT, Cambridge, MA, 3BlackRock Microsystems, Salt Lake City, UT, USA, 4Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA

Optogenetic methods have revolutionized systems neuroscience by allowing perturbation of neuronal activity with precise spatial and temporal resolution and with cell-type specificity. However, optogenetic studies often struggle to obtain large behavioral effects in monkeys. We developed a new chronically implantable LED array, the Opto-Array, that can reliably deliver light to the cortical surface in large brains, specifically in non-human primates. The Opto-Array comprises 24 LEDs and one thermal sensor arranged on a 5 × 5 grid on a PCB, encapsulated in transparent parylene with a silicone coating on the surface. This 2D configuration allows perturbation of cortical surface areas ranging from 1 × 1 mm, with a single active LED, up to 5 × 5 mm when the entire array is turned on. The thermal sensor monitors the temperature of the cortical tissue to avoid overheating and potential damage. The chronic nature of the Opto-Array allows highly stable perturbation of neural activity in behavioral experiments, in which data can be collected and pooled over months. Another advantage is that the Opto-Array is a safer alternative to acute methods such as optical fibers and direct illumination: it minimizes tissue damage as well as the risk of infection from open cranial windows and chambers. Here we describe the physical properties and characteristics of the Opto-Array, as well as the surgical techniques and procedures for its implantation in macaque monkeys.
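The role of the onboard thermal sensor can be illustrated with a small control-loop sketch. Everything below is hypothetical: the driver object, its method names, the temperature ceiling, and the polling interval are illustrative assumptions, not the Opto-Array’s actual control interface.

```python
import time

TEMP_LIMIT_C = 41.0  # illustrative safety ceiling, not a published specification
POLL_S = 0.05        # illustrative polling interval (seconds)

def run_stimulation(array, led_indices, duration_s):
    """Drive a subset of the 24 LEDs while watching the onboard thermal sensor.

    `array` is a hypothetical driver exposing set_leds(), read_temperature_c(),
    and all_off(); it stands in for whatever control interface the real
    hardware provides.
    """
    array.set_leds(led_indices, on=True)
    t_end = time.monotonic() + duration_s
    try:
        while time.monotonic() < t_end:
            if array.read_temperature_c() >= TEMP_LIMIT_C:
                break  # abort before the cortical tissue overheats
            time.sleep(POLL_S)
    finally:
        array.all_off()  # LEDs never stay on after an abort or an error
```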

Acknowledgements: ZIAMH002958

Talk 3, 8:45 am, 31.23

Multichannel recordings in neuroscience: new computational methods for fluctuating neural dynamics and spatiotemporal patterns

Lyle Muller1,2, Gabriel Benigno1,2, Alexandra Busch1,2, Zachary Davis3, John Reynolds3; 1Department of Mathematics, Western University, London, ON, Canada, 2Brain and Mind Institute, Western University, London, ON, Canada, 3The Salk Institute for Biological Studies, La Jolla, CA, USA

With new multichannel recording technologies, neuroscientists can now record from the neocortex of awake animals with both high spatial and high temporal resolution. Early recordings under anesthesia revealed spontaneous and stimulus-evoked waves traveling across the cortex. While these waves were for some time thought to disappear in awake states, our recent work has revealed traveling waves in the visual cortex of awake animals. To study neural recordings during the irregular, fluctuating activity of the awake state, we developed a computational technique we term generalized phase (GP). The GP approach captures the dominant fluctuation in broadband neural data at each moment in time, permitting analysis of constantly occurring irregular neural activity at the single-trial level. By quantifying GP at each electrode of a multielectrode array, we studied spontaneous fluctuations in awake marmosets as they awaited a faint visual target during a detection task. We find that ongoing fluctuations propagate as intrinsic traveling waves (iTWs) across the multielectrode array, modulate background firing rates, and strongly influence visually evoked responses. To understand the mechanism underlying these waves, we then studied a large-scale spiking network model with balanced excitatory and inhibitory interactions. By scaling this model to the size of marmoset area MT, we find that time delays from unmyelinated horizontal fibers can profoundly shape the weakly correlated activity known to model the awake state (the “asynchronous-irregular” [AI] regime) into iTWs. In this state, only a small fraction of the local neural population spikes as an iTW passes. We call this unique operating mode, in which the benefits of the AI state in local networks coexist with iTWs propagating across the cortex, the “sparse wave regime”. We discuss potential roles for these sparse waves in dynamically modulating neural sensitivity during active visual processing.
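A minimal sketch of the core idea behind generalized phase, wide-band filtering followed by the analytic-signal phase, is shown below. The 5–40 Hz band is an assumed example, and the published GP method additionally corrects intervals of negative instantaneous frequency, which this simplified version omits.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def generalized_phase_sketch(lfp, fs, band=(5.0, 40.0)):
    """Simplified generalized-phase (GP) estimate for one electrode.

    Wide-band filter the signal, then take the phase of its analytic
    signal. The published GP method additionally repairs intervals of
    negative instantaneous frequency; that correction is omitted here.
    """
    b, a = butter(4, band, btype="band", fs=fs)
    wideband = filtfilt(b, a, lfp)   # zero-phase wide-band filtering
    analytic = hilbert(wideband)     # analytic signal via Hilbert transform
    return np.angle(analytic)        # phase of the dominant fluctuation

# Applying this to every electrode of an array yields the instantaneous
# spatial phase pattern from which traveling waves can be detected.
```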

Acknowledgements: This work was supported by Gatsby Charitable Foundation, the Fiona and Sanjay Jha Chair in Neuroscience, CIHR and NSF (NeuroNex Grant No. 2015276), the Swartz Foundation, NIH Grants R01-EY028723, T32 EY020503-06, T32 MH020002-16A, P30 EY019005, Compute Canada, and BrainsCAN at Western University.

Talk 4, 9:00 am, 31.24

Predicting Gaze Position with Deep Learning of Electroencephalography Data

Martyna Plomecka1, Ard Kastrati2, Lukas Wolf1, Roger Wattenhofer2, Nicolas Langer1; 1University of Zurich, 2ETH Zurich

The collection of eye-gaze information is widespread in cognitive science and psychology. Moreover, many neuroscientific studies complement neuroimaging methods with eye-tracking technology to identify variations in attention, arousal, and the participant's compliance with task demands. To address the limitations of conventional eye-tracking systems, recent studies have leveraged advanced machine-learning techniques to compute gaze from webcam images or from functional magnetic resonance imaging (fMRI). However, webcam-based gaze estimation requires an additional system and synchronization with the auxiliary measures of the actual experiment, which is even more cumbersome than traditional eye tracking, while fMRI data acquisition is costly and lacks the temporal resolution at which cognition takes place. In contrast, electroencephalography (EEG) is a safe and cost-friendly method that directly measures the brain's electrical activity and enables measurement in clinical settings. An eye-tracking approach that estimates gaze position from concurrently measured EEG has, however, been lacking. We address this shortcoming and show that gaze position can be recovered by combining EEG activity with state-of-the-art machine learning. We use a dataset of recordings from 400 healthy participants engaged in tasks of varying complexity, yielding EEG and EOG features for over 3 million gaze fixations. To address inter-subject variability and differences between experimental setups, we introduce a calibration paradigm that allows the trained model to efficiently represent each participant's fixation characteristics throughout the experiment. A standardized, time-efficient, and straightforward protocol for calibrating newly recorded data against the pre-trained algorithm improves the model's sensitivity, accuracy, and versatility. This work emphasizes the importance of eye tracking for the interpretation of EEG results and provides open-source software that is widely applicable in research and clinical settings.
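As a concrete, and entirely hypothetical, illustration of the approach, the sketch below defines a small convolutional network that regresses an EEG window onto a 2D gaze position. The architecture, the 129-channel montage, and the 500-sample window are illustrative assumptions, not the authors' published model.

```python
import torch
import torch.nn as nn

class EEGGazeNet(nn.Module):
    """Hypothetical convolutional regressor from an EEG window to (x, y) gaze.

    Channel count and window length are illustrative assumptions; the
    adaptive pooling makes the model length-agnostic in practice.
    """

    def __init__(self, n_channels=129):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, 2)  # screen coordinates (x, y)

    def forward(self, eeg):  # eeg: (batch, channels, samples)
        return self.head(self.features(eeg).squeeze(-1))

# Training would minimize a regression loss (e.g., nn.MSELoss()) between
# predictions and eye-tracker-measured fixation positions; a short
# per-participant calibration block can then fine-tune the pre-trained weights.
```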

Talk 5, 9:15 am, 31.25

Task-dependent head-eye coordination during natural fixation

Zhetuo Zhao1,2, Yuanhao H. Li1,2, Ruitao Lin1,2, Sanjana Kapisthalam1,2, Ashley M. Clark1,2, Bin Yang1,2, Janis Intoy1,2, Michele A. Cox1,2, Michele Rucci1,2; 1Department of Brain and Cognitive Sciences, 2Center for Visual Science, University of Rochester, USA

Humans acquire visual information by continually moving their eyes and head. Pioneering studies reported that, during natural fixation, head-eye coordination is tuned to the task so as to yield a suitable amount of retinal image motion (Steinman, 1986). But what type of motion is suited to a given task? Recent experiments indicate that structuring the temporal luminance flow impinging on the retina is an important function of eye movements. In high-acuity tasks, observers tune their fixational eye drifts so that the luminance modulations delivered within the range of temporal sensitivity enhance high spatial frequencies (Intoy & Rucci, 2020). In these previous experiments, however, the head of the observer was strictly immobilized in order to resolve minute eye movements. Here we use a new custom device to examine how retinal image motion varies across tasks during natural head-free fixation. We recorded head and eye movements by means of scleral coils and passive markers, using an apparatus that integrated a motion capture system (OptiTrack) with a specially designed coil system generating three highly uniform oscillating magnetic fields. Human observers (N=8) performed a set of natural tasks, including visual search, sorting objects by color, an acuity test, and sustained fixation on markers. Our results confirm that fixational head-eye coordination changes systematically across tasks. Critically, we show that changes in motor activity alter the information content of visual input signals by modulating the distribution of spatial-frequency power delivered within the bandwidth of human temporal sensitivity. That is, humans jointly coordinate fixational head and eye movements according to the demands of the task, in ways that emphasize the relevant spatial-frequency range in the temporal luminance flow. These results indicate that task-dependent tuning of head-eye coordination effectively acts as a spatial-frequency selection mechanism.
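The central claim, that motor activity reshapes the temporal content of the retinal input, can be illustrated with a toy computation: a static grating viewed through a drifting eye produces temporal luminance modulations whose spectrum depends on the grating’s spatial frequency. The parameters below are illustrative, and this is not the authors’ analysis code.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                    # sampling rate (Hz); illustrative
t = np.arange(0, 5.0, 1 / fs)  # 5 s of simulated fixational drift
rng = np.random.default_rng(0)

# Brownian-like drift trace in degrees; D ~ 20 arcmin^2/s is illustrative.
D = 20.0 / 3600.0              # deg^2/s
drift = np.cumsum(rng.normal(0.0, np.sqrt(2 * D / fs), t.size))

# Luminance seen through the moving eye for a static vertical grating of
# spatial frequency k (cycles/deg): motion converts spatial structure
# into temporal modulations.
k = 10.0
luminance = np.cos(2 * np.pi * k * drift)

# Temporal power spectrum of the retinal luminance signal: finer gratings
# (larger k) push power toward higher temporal frequencies, which is the
# redistribution that head-eye coordination can tune.
freqs, power = welch(luminance, fs=fs, nperseg=1024)
```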

Acknowledgements: This work was supported by Reality Labs. MR and JI were supported by National Institutes of Health grants EY018363 and EY029565, respectively.

Talk 6, 9:30 am, 31.26

The FreeMoCap Project - and - Gaze/Hand coupling during a combined three-ball juggling and balance task

Jonathan Matthis1, Aaron Cherian2, Trent Wirth3; 1Northeastern University

We present a broad-scale, long-term, open-science endeavor that aims to create a free, open-source, low-cost, research-grade full-body motion capture system, alongside a research project that uses this tool to investigate visuomotor coupling during a combined three-ball juggling and balance task.

The FreeMoCap Project - The computer vision community is making tremendous advances in markerless motion capture software. However, these advances often require a high floor of technical knowledge to be used effectively, which limits their utility to the scientific community and creates a nearly insurmountable barrier for the general population. The FreeMoCap system leverages emerging markerless motion capture software (e.g., OpenPose, MediaPipe, DeepLabCut) to create a streamlined ‘one-click’ pipeline for 3D kinematic reconstruction of full-body human, animal, and robotic movement. The system works with arbitrary camera hardware and provides methods for synchronous recording of wired cameras (e.g., USB webcams) as well as post-hoc synchronization of independent cameras (e.g., GoPros), as sketched below. The FreeMoCap Project emphasizes ease of use, with the eventual goal of a system that would allow a 14-year-old with no technical training and no outside assistance to recreate a research-grade motion capture system for less than 100 US dollars.

Juggling/Balance Task - The FreeMoCap system was used to record a subject performing a three-ball juggling task while balancing on a ‘wobble-board’ platform. The full-body (and hand) kinematic data produced by FreeMoCap were spatiotemporally calibrated with binocular gaze data recorded by a Pupil Labs mobile eye tracker, in a manner analogous to the methods of Matthis, Yates, and Hayhoe (2018), Matthis et al. (PLoS Comp Biol, in press), and Wirth and Matthis (VSS 2022). The resulting data reveal a tight coupling between gaze and the hand/juggling-ball system, as well as a complex relationship between the juggling and balance tasks.
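Post-hoc synchronization of independently recorded cameras is commonly done by cross-correlating the cameras’ audio tracks. The sketch below illustrates that generic technique; it is not FreeMoCap’s actual implementation.

```python
import numpy as np
from scipy.signal import correlate

def estimate_offset_s(audio_a, audio_b, fs):
    """Estimate the recording offset between two cameras' audio tracks.

    Generic cross-correlation alignment, not FreeMoCap's actual
    implementation. A positive result means camera B started recording
    later than camera A (assuming the recordings overlap).
    """
    # Normalize both tracks so the peak reflects waveform shape, not gain.
    a = (audio_a - audio_a.mean()) / audio_a.std()
    b = (audio_b - audio_b.mean()) / audio_b.std()
    xcorr = correlate(a, b, mode="full")
    lag_samples = np.argmax(xcorr) - (len(b) - 1)
    return lag_samples / fs
```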

Acknowledgements: NIH NEI R00-EY028229