VSS, May 13-18

Temporal Processing: Neural mechanisms, timing perception

Talk Session: Saturday, May 14, 2022, 8:15 – 9:45 am EDT, Talk Room 2
Moderator: Iris Groen, University of Amsterdam

Talk 1, 8:15 am, 21.21

Propagation speeds of action potentials in the human retina compensate for traveling distances

Annalisa Bucci1,2, Roland Diggelmann1,2, Matej Znidaric1,2, Martina De Gennaro2, Cameron Cowan2, Botond Roska2, Andreas Hierlemann1, Felix Franke2; 1ETH Zürich, 2Institute of Molecular and Clinical Ophthalmology Basel

Timing between action potentials is crucial for information processing in neural networks. However, physical axonal lengths largely determine the time action potentials need to reach postsynaptic neurons. Retinal ganglion cell (RGC) axons form the retinal nerve fiber layer (RNFL). The human RNFL shows a highly stereotypical organization characterized by the presence of the fovea, a specialized region enabling high-resolution vision tasks, such as reading. To reach the papilla (i.e., the optic nerve head), RNFL axons do not cross the fovea; instead, some axons detour around it, following significantly longer trajectories. We investigated whether different axonal lengths in the human RNFL entail distinct conduction velocities, allowing visual signals to reach the brain synchronously. We used human retinal explants to precisely measure the paths and propagation speeds of action potentials of foveal and peripheral RGCs, using high-density microelectrode-array recordings at subcellular resolution. Axonal conduction speeds were spatially heterogeneous and depended on the location of the RGC somas. Around the fovea centralis, action potentials of temporal RGCs traveled up to 50% faster than action potentials of nasal RGCs. In both foveal and peripheral retina, we observed a bimodal distribution of propagation speeds for the two most abundant cell types in primate retina: midget and parasol cells. Peripheral RGC axons exhibited up to three times higher conduction velocities than foveal RGC axons. We modelled the entire human RNFL to predict the trajectories (and thus lengths) of RGC axons. The model recapitulated the organization of the human RNFL well, and its estimates of axonal lengths correlated strongly with observed axonal lengths and action potential propagation speeds. Our measurements suggest that a compensatory mechanism in the human retina contributes to synchronizing the arrival times of visual signals in the brain.
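
A back-of-the-envelope check makes the compensation logic concrete: an axon's contribution to signal delay is simply path length divided by conduction speed, so a proportionally faster speed can offset a proportionally longer detour. The sketch below illustrates this arithmetic in Python with purely hypothetical numbers; the abstract reports speed ratios (up to ~50% faster), not these specific lengths or velocities.

    # Illustrative arrival-time check for the compensation hypothesis.
    # All values are hypothetical, chosen only to show the arithmetic.

    def arrival_time_ms(path_length_mm, speed_m_per_s):
        """Time for an action potential to traverse an intraretinal axon."""
        return path_length_mm / speed_m_per_s  # mm / (m/s) is numerically ms

    # Temporal RGCs detour around the fovea: longer path, faster conduction.
    t_temporal = arrival_time_ms(path_length_mm=6.0, speed_m_per_s=1.5)
    # Nasal RGCs take a more direct route to the papilla: shorter, slower.
    t_nasal = arrival_time_ms(path_length_mm=4.0, speed_m_per_s=1.0)

    print(f"temporal: {t_temporal:.1f} ms, nasal: {t_nasal:.1f} ms")
    # Both 4.0 ms: a 50% higher speed exactly offsets a 50% longer path.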

Acknowledgements: This work was financially supported by the Swiss National Science Foundation (SNSF) under Sinergia Grant CRSII5_173728 and the European Commission under the ERC Advanced Grant 694829 (“neuroXscales”).

Talk 2, 8:30 am, 21.22

Near-additive temporal dynamics of sub-threshold population responses in macaque V1

Jingyang Zhou1,2, Matt Whitmire3,4,5, Yuzhi Chen3,4,5, Eyal Seidemann3,4,5; 1Center for Computational Neuroscience, Flatiron Institute, 2Center for Neural Science, New York University, 3Center for Perceptual Systems, University of Texas, Austin, 4Department of Psychology, University of Texas, Austin, 5Department of Neuroscience, University of Texas, Austin

To study stimulus-evoked dynamics of neuronal signals, a powerful method is to quantify deviations of the measured response dynamics from the predictions of a linear system. A linear system can be completely characterized by its impulse response function; deviations from linearity can inform us about the type of nonlinearities that the response dynamics contain. Nonlinearities in neuronal responses typically have ecological causes or functional benefits, and are crucial for understanding how our internal representations relate to sensory inputs. Here, we conducted a linear systems analysis on trial-by-trial voltage-sensitive dye (VSD) measurements from behaving macaque V1. We used a set of 12 large, high-contrast visual stimuli that varied in the duration of a single pulse, and in the inter-stimulus interval between two pulses (20 to 640 ms). VSD signals represent membrane potential dynamics pooled from a local neuronal population, making them unique and complementary to spike-based signals measured with other methods (e.g., single- or multi-units, LFP, BOLD in fMRI). In contrast to spike-based signals, we found that population membrane potentials measured with VSD are surprisingly close to additive in time. This near-additivity has not been previously examined, possibly because of the challenge of separating stimulus-evoked neuronal signals from other signal sources in VSD time courses. Our new pre-processing algorithm allows us to robustly separate these two components. We further quantified the small but significant deviations from additivity at short stimulus durations, and we present a delayed normalization model that accounts for the near-additive temporal summation in population membrane dynamics. The delayed normalization model can also exhibit the previously observed contrast-dependent gain changes in population membrane potential dynamics. Furthermore, the model provides a platform for testing ways to connect population membrane potential signals with spike-based neuronal population measurements.
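
The additivity test described here can be sketched in a few lines: predict the response to a two-pulse stimulus as the sum of two time-shifted single-pulse responses, then inspect the residual. Below is a minimal, self-contained illustration with a synthetic gamma-shaped impulse response; the kernel shape, time base, and variable names are assumptions for demonstration, not the authors' pipeline, and real VSD data would show small deviations where this synthetic linear system shows none.

    import numpy as np

    dt = 0.001                    # 1 ms resolution
    t = np.arange(1000) * dt      # one second of signal

    # Assumed gamma-shaped impulse response (stand-in for the kernel one
    # would estimate from single-pulse VSD trials).
    irf = (t / 0.05) ** 2 * np.exp(-t / 0.05)
    irf /= irf.sum()

    def linear_response(stim):
        """Response of a purely additive (linear) system to a stimulus."""
        return np.convolve(stim, irf)[: len(stim)]

    # Two 20 ms pulses separated by a 40 ms inter-stimulus interval.
    pulse1 = ((t >= 0.10) & (t < 0.12)).astype(float)
    pulse2 = ((t >= 0.16) & (t < 0.18)).astype(float)

    # Additivity prediction: the pair response equals the sum of the
    # single-pulse responses. For measured VSD time courses, the residual
    # (measured - additive) quantifies any sub-additivity.
    additive = linear_response(pulse1) + linear_response(pulse2)
    measured = linear_response(pulse1 + pulse2)   # linear by construction
    print(np.max(np.abs(measured - additive)))    # ~0 (machine precision)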

Talk 3, 8:45 am, 21.23

Delayed divisive normalization predicts temporal dynamics of neural responses in human visual cortex

Iris Groen1,2, Giovanni Piantoni3, Stephanie Montenegro4, Adeen Flinker4, Sasha Devore4, Orrin Devinsky4, Werner Doyle4, Nick Ramsey3, Natalia Petridou3, Jonathan Winawer2; 1University of Amsterdam, 2New York University, 3University Medical Center Utrecht, 4New York University Grossman School of Medicine

Neural responses to visual inputs change continuously over time. Even for simple static stimuli, responses in visual cortex decrease when stimulus duration is prolonged (subadditive temporal summation), reduce when stimuli are repeated (adaptation), and rise more slowly for low-contrast stimuli (phase delay). These phenomena are often studied independently. Here, we demonstrate these phenomena within the same experiment and model the underlying neural computations with a single computational model. We extracted time-varying responses from electrocorticographic (ECoG) recordings from patients presented with grayscale pattern stimuli that varied in contrast, duration, and inter-stimulus interval (ISI). Aggregating data across patients yielded 88 electrodes with robust visual responses, covering earlier (V1-V3) and higher-order (V3a/b, LO, TO, IPS) retinotopic maps. In all regions, the ECoG responses exhibit several nonlinear dynamics: peak response amplitude saturates with high contrast and longer stimulus durations; the response to a second stimulus is suppressed for short ISIs and recovers for longer ISIs; response latency decreases with increasing contrast. These dynamics are accurately predicted by a computational model composed of a small set of canonical neuronal operations: linear filtering, rectification, exponentiation, and delayed divisive normalization. We find that an increased normalization term captures both adaptation- and contrast-related response reductions, suggesting potentially shared underlying mechanisms. We additionally demonstrate both changes and invariance in temporal dynamics across the visual hierarchy. First, temporal summation windows increase systematically from earlier to higher areas; however, recovery time from adaptation is relatively invariant. Second, response amplitudes become more invariant to contrast in higher visual areas, but response latencies do not. Together, our results reveal a wide range of temporal neuronal dynamics in the human visual cortex and demonstrate that a simple model captures these dynamics at millisecond resolution.
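
The named operations compose into a delayed normalization model of the general form r(t) = max(l(t), 0)^n / (sigma^n + p(t)^n), where l(t) is the linearly filtered stimulus and p(t) is the same drive passed through a slower (delayed) low-pass filter. The sketch below is a generic implementation of that form; the gamma-shaped filter and all parameter values are assumptions for illustration, not the fitted model from the study.

    import numpy as np

    def delayed_normalization(stim, dt=0.001, tau=0.05, tau_norm=0.1,
                              n=2.0, sigma=0.1):
        """Generic delayed-divisive-normalization response to a stimulus.

        Operations, in the order named in the abstract: linear filtering,
        rectification, exponentiation, delayed divisive normalization.
        All parameter values here are illustrative assumptions.
        """
        t = np.arange(len(stim)) * dt
        # Linear filtering with a gamma-shaped impulse response.
        irf = (t / tau) ** 2 * np.exp(-t / tau)
        irf /= irf.sum()
        drive = np.convolve(stim, irf)[: len(stim)]
        # Rectification and exponentiation of the drive.
        numerator = np.maximum(drive, 0.0) ** n
        # Normalization pool: the same drive, low-pass filtered (delayed).
        lowpass = np.exp(-t / tau_norm)
        lowpass /= lowpass.sum()
        pool = np.convolve(drive, lowpass)[: len(stim)]
        return numerator / (sigma ** n + np.maximum(pool, 0.0) ** n)

    # A 200 ms contrast step: the response rises, then sags as the delayed
    # pool builds up -- the transient-then-decay shape that produces
    # subadditive temporal summation and ISI-dependent suppression.
    stim = np.zeros(1000)
    stim[100:300] = 1.0
    resp = delayed_normalization(stim)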

Acknowledgements: This work was funded by BRAIN Initiative Grant R01-MH111417

Talk 4, 9:00 am, 21.24

Limited visual representation of moving objects during physical occlusion

Lina Teichmann1, Denise Moerel2, Anina Rich3, Chris Baker1; 1National Institute of Mental Health, 2University of Sydney, 3Macquarie University

Visual information is frequently interrupted by occlusion, eyeblinks, and saccades. However, objects in the visual environment seem to persist through these perceptual gaps. In the current preregistered study, we examined the nature of object representations during perceptual gaps caused by occlusion. Participants passively viewed an object moving on a circular trajectory in the periphery. Occasionally, the object was either dynamically occluded or shrank and disappeared. Using magnetoencephalography (MEG) paired with sensor-space multivariate pattern analyses, we tracked how object representations unfold over time: before, during, and after occlusion. Focusing on colour, shape, and position as object features, we found clear evidence for information about all features in the MEG signal when the moving objects were fully visible. During occlusion, some information persisted, but not to the same degree as when the object was visible. In addition, these weaker representations of object features were not specific to occlusion; they also occurred when the object was perceived to disappear. Overall, our results challenge the notion of a perception-like representation of moving objects during occlusion and open up new questions about how the visual system overcomes perceptual gaps to support the perception of a meaningful, continuous stream of information.
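
Sensor-space decoding of the kind described here is typically run as an independent, cross-validated classifier at every time point, yielding a time course of decodable feature information. A minimal sketch of that scheme follows; the synthetic data shapes, scikit-learn, and the LDA classifier are assumptions for illustration, not the authors' exact pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Assumed data layout: trials x MEG sensors x time points, with one
    # label per trial (e.g., object colour A vs. colour B).
    n_trials, n_sensors, n_times = 200, 160, 120
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n_trials, n_sensors, n_times))
    y = rng.integers(0, 2, n_trials)

    # Time-resolved decoding: fit and test a classifier independently at
    # each time point; above-chance accuracy indicates that the feature
    # (colour, shape, or position) is represented at that moment.
    accuracy = np.empty(n_times)
    for ti in range(n_times):
        clf = LinearDiscriminantAnalysis()
        accuracy[ti] = cross_val_score(clf, X[:, :, ti], y, cv=5).mean()

    # Comparing this accuracy time course before, during, and after
    # occlusion reveals whether feature information persists in the gap.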

Talk 5, 9:15 am, 21.25

Temporal expectations facilitate performance in the absence of concomitant spatial expectations and in dynamically unfolding environments

Irene Echeverria-Altuna1, Sage Boettcher1, Kia Nobre1,2; 1Department of Experimental Psychology, University of Oxford, 2Oxford Centre for Human Brain Activity (OHBA), Department of Psychiatry, University of Oxford

Visual attention can be proactively directed to locations in space (Posner, 1980; Chun & Jiang, 1998), to object features (Treisman & Gelade, 1980; Kok et al., 2012), and to points in time (Coull & Nobre, 1998). In our environment, spatial, temporal, and feature-based expectations interact to shape behaviour (Nobre & Rohenkohl, 2014). Temporal expectations have been proposed to guide perception when accompanied by congruent spatial expectations (O'Reilly, 2008; Rohenkohl et al., 2014). In two complementary studies, we set out to investigate whether cued temporal expectations can guide visual perception, even in the absence of spatial expectations, in a continuously changing environment. On each trial, participants (online study: N = 49; in-person study: N = 24) were presented with a stream of bilaterally appearing coloured circles, similar to a dual-stream rapid serial visual presentation task. Each stream was composed of three coloured targets and between six and nine distractors. On each trial, one of the targets always appeared at a fixed early time, another appeared at a fixed late time, and the third could appear at any time. A coloured cue at the beginning of each trial indicated which of the three target circles was relevant. Specifically, participants were asked to report the side (left or right) on which the cued target appeared on any given trial. We found that participants were faster and more accurate in detecting targets that occurred at an expected time (early or late), compared to the randomly timed targets, even though they had no information regarding the likely location of the target. From these results, we conclude that temporal expectations can facilitate performance in the absence of concomitant spatial expectations and in dynamically unfolding streams of visual stimuli.
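
The behavioural prediction boils down to a simple contrast: responses to targets appearing at a fixed (expected) time should be faster and more accurate than responses to the randomly timed target. A minimal sketch of that summary on a tidy trial table follows; the column layout and all numbers are fabricated placeholders, not the study's data.

    import numpy as np

    # Hypothetical per-trial records: (timing condition, RT in ms, correct).
    trials = [
        ("fixed-early", 512, True), ("fixed-late", 498, True),
        ("random",      561, True), ("random",     603, False),
        ("fixed-early", 507, True), ("fixed-late", 520, True),
    ]

    def summarise(condition):
        """Mean RT on correct trials and overall accuracy per condition."""
        rts = [rt for c, rt, ok in trials if c == condition and ok]
        acc = np.mean([ok for c, _, ok in trials if c == condition])
        return np.mean(rts), acc

    for cond in ("fixed-early", "fixed-late", "random"):
        mean_rt, acc = summarise(cond)
        print(f"{cond:12s} mean RT {mean_rt:6.1f} ms, accuracy {acc:.2f}")
    # Temporal expectation predicts lower RT (and higher accuracy) for the
    # fixed-early and fixed-late targets than for the random target.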

Talk 6, 9:30 am, 21.26

Motor-Independent but Modality-Specific Time Adaptation

Eckart Zimmermann1, Michael Wiesing1, Nadine Schlichting1; 1Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Germany

In the last two decades, much evidence has demonstrated that the motor system is involved in temporal processing for perception. How this interconnection is implemented remains elusive. In a shared-resource view, the temporal interpretation of events might emerge directly from motor plans. In a recalibration view, motor and temporal processing are separate, but the former calibrates the latter. In accordance with the recalibration model, perception and action have opposing functional roles with regard to the specificity of processing: perception has to discriminate, whereas action planning has to coordinate. If the temporal properties of one effector were distorted, all other movement plans would need to be recalibrated in order to produce successful behavior. Here, we tested whether adaptation affects time-critical, goal-oriented movements more globally than the known direction, movement-type, and context specificity of spatial adaptation would suggest. In a ready-set-go paradigm, participants reproduced the interval between ready- and set-signals by performing different arm and hand movements in virtual reality (VR). In adaptation trials, we introduced a temporal perturbation, such that movements in VR appeared slowed down. Participants had to temporally adapt their behavior to sustain performance. We found that adaptation effects transferred between different movement types, interval ranges, target locations, and environmental contexts. However, adaptation effects did not transfer when the sensory modality switched from vision to audition. Consistent with the need for coordination, and unlike recalibration to spatial perturbations, the temporal planning of motor actions is recalibrated more globally within the motor system. By contrast, in perception, adaptation effects were localized to sensory modalities, supporting the perceptual aim of discriminating between stimulus features. Our findings suggest that temporal processing for perception and for action are separate, and that movement errors recalibrate temporal perception.
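
The transfer logic can be made concrete: compute the adaptation aftereffect (the post- minus pre-adaptation shift in mean reproduced interval) separately for the adapted condition and for unadapted test conditions, then compare. A minimal sketch with fabricated numbers follows; the condition names and values are illustrative assumptions, not the study's data.

    # Ready-set-go: participants reproduce the ready->set interval. After
    # adapting to slowed-down VR movements, reproductions shift; the
    # aftereffect is the post- minus pre-adaptation mean reproduction.

    def aftereffect_ms(pre_ms, post_ms):
        """Shift in mean reproduced interval after temporal adaptation."""
        return sum(post_ms) / len(post_ms) - sum(pre_ms) / len(pre_ms)

    # Fabricated reproductions (ms) of an 800 ms target interval.
    adapted_movement = aftereffect_ms([790, 810, 805], [880, 870, 875])
    other_movement   = aftereffect_ms([800, 795, 805], [872, 868, 878])
    other_modality   = aftereffect_ms([798, 802, 800], [801, 799, 803])

    # The reported pattern: transfer within the motor system
    # (other_movement close to adapted_movement) but no transfer across
    # sensory modality (other_modality close to zero).
    print(adapted_movement, other_movement, other_modality)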

Acknowledgements: Supported by European Research Council (project moreSense grant agreement n. 757184).