Talk 1, 2:30 pm
Temporal integration of visual information is affected by fast spatial grouping
Sensory information is integrated over space and time, determining both the content and timing of our visual experience. Here we tested how spatial integration of elementary features affects temporal integration across the visual field. An array of 16 equidistant small sinusoidal gratings was presented on a virtual circle centered on the fixation point. The gratings were randomly split into two halves, and each half was presented sequentially. Test stimuli were created by omitting one grating, and participants had to detect this omission in a sequential 2AFC paradigm. To measure the time course of temporal integration, we varied the duration of the first group of gratings (10–160 ms) and the inter-stimulus interval between the two groups (ISI, 0–40 ms). The contrast of the first group was adjusted to match the perceived contrast of the second group. We compared two spatial conditions: the gratings’ orientations were either aligned with the virtual circle in both groups (forming a collinear contour), or aligned in one group and orthogonal in the other (forming an interrupted contour). Increasing the ISI decreased performance, but more surprisingly, increasing the duration of the first group also decreased performance (see also Di Lollo, 1977, Nature). The two effects were not additive: performance decreased faster with an increase in ISI than with an increase in duration. Interestingly, detection of the missing element was better when the gratings in both parts of the display formed collinear rather than interrupted contours. The interaction between display duration and ISI is inconsistent with temporal integration being the output of a fixed or sliding temporal window that integrates information over time. Furthermore, the effect of the relative similarity of the gratings suggests an interaction between fast spatial grouping and temporal integration across the visual field, further informing models of temporal integration.
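The key inference above, that a fixed or sliding integration window cannot produce the observed interaction, can be illustrated with a toy model. This is not the authors' model: the 100 ms window, the 20 ms second-group duration, and the overlap measure are arbitrary assumptions chosen only to show that a fixed window opened at the onset of the first group predicts performance depending solely on the sum of duration and ISI, i.e., purely additive effects.

```python
# Toy fixed-window model (illustrative only, not the authors' model).
# A window of fixed width opens at group-1 onset; integration succeeds
# to the extent that group 2 falls inside it. Because only the sum
# duration + ISI matters, the model predicts additive effects -- the
# reported interaction (ISI hurting more than duration) rules it out.

def fixed_window_overlap(duration, isi, window=100, group2_dur=20):
    """Fraction of group 2 (all times in ms) falling inside a temporal
    window opened at the onset of group 1."""
    g2_onset = duration + isi
    g2_offset = g2_onset + group2_dur
    overlap = max(0.0, min(window, g2_offset) - g2_onset)
    return overlap / group2_dur

# The model is blind to how the sum splits between duration and ISI:
assert fixed_window_overlap(70, 20) == fixed_window_overlap(20, 70) == 0.5
assert fixed_window_overlap(10, 0) == 1.0
```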
Acknowledgements: ANR grant no. ANR-22-CE28-0025
Talk 2, 2:45 pm
Sensory correlation detection by children treated for congenital visual deprivation
Priti Gupta1,3, Lukas Vogelsang2,5, Marin Vogelsang2,5, Manvi Jain1,3, Naviya Lall3, Dhun Verma3, Chetan Ralekar5, Suma Ganesh4, Pawan Sinha5; 1Indian Institute of Technology Delhi, 2École Polytechnique Fédérale de Lausanne, Switzerland, 3Project Prakash, Dr Shroff's Charity Eye Hospital, New Delhi, 4Department of Pediatric Ophthalmology, Dr Shroff's Charity Eye Hospital, New Delhi, 5Massachusetts Institute of Technology
The temporal covariance of sensory signals provides critical information for determining the relationship between different entities in the sensorium. How quickly during the developmental timeline does the ability to detect such correlations become evident? Addressing this question is important for assessing whether this ability can help bootstrap the early stages of perceptual learning. Here we report work designed to assess the ability to detect temporal correlations of varying strengths within and across sensory modalities in 15 patients treated for congenital blindness as part of Project Prakash, a humanitarian and scientific effort focused on treating early-blind children and, through their help, understanding visual development. The performance of Prakash patients was compared with that of 21 normally-sighted blur-matched controls. In the intra-modal condition, participants were asked to determine which of two disks was blinking more in unison with a circumscribing ring. In the inter-modal condition, participants had to identify the disk blinking more congruently with a concurrent audio track of beeps and silences. This experimental design yielded three main results. First, we found that, while not fully reaching the level of normally-sighted controls, Prakash patients were able to detect correlations with markedly above-chance accuracy rapidly after sight onset. Second, for both groups, performance levels in the inter-modal and intra-modal conditions were comparable. Finally, the extent of the time series that participants observed before making a decision was similar between the two groups but markedly longer than would be required when using a pre-defined statistical decision criterion.
These results help characterize a foundational process for detecting relationships between environmental entities, point to the resilience of this ability's acquisition to early-onset, prolonged visual deprivation, and suggest that it could potentially serve as a bootstrapping mechanism for learning to extract environmental cliques.
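Both the intra-modal and inter-modal judgments amount to comparing the temporal correlation of on/off sequences. A minimal sketch of that statistic follows; it is purely illustrative (the sequences, their length, and their binary coding are invented for the example, and the actual displays were continuous time series observed until the participant chose to respond).

```python
# Illustrative statistic for the task: Pearson correlation between a
# disk's on/off blink sequence and the reference (ring or audio track).
# The sequences below are invented for the example.

def blink_correlation(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

ring           = [1, 0, 1, 1, 0, 0, 1, 0]
disk_congruent = [1, 0, 1, 1, 0, 0, 1, 0]  # blinks in unison with the ring
disk_other     = [0, 1, 1, 0, 1, 0, 0, 1]  # weakly related to the ring

# The congruent disk correlates perfectly with the ring:
assert blink_correlation(ring, disk_congruent) == 1.0
```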
Acknowledgements: This work was funded by Grant R01EY020517 from NEI (NIH) to Pawan Sinha
Talk 3, 3:00 pm
Parsing Pulses: Testing the Limits of Temporal Phase Perception in Human Vision
Introduction: Humans can detect luminance flicker exceeding 60 Hz, but the threshold for perceiving the flicker’s phase is much lower (~7–10 Hz). As a precursor to future experiments investigating this temporal bottleneck and the broader dynamics of visual perception, we replicated Aghdaee and Cavanagh (2007) using stimuli devoid of spatial and temporal transients. Methods: Twelve subjects judged whether two monochromatic Gaussians, oscillating sinusoidally between black and white, were in-phase or 180° out-of-phase. A 1440 Hz PROPixx projector (VPixx Technologies) displayed stimulus pairs at 4° eccentricity, spaced 1.8° or 5° apart, either: 1) left and right of the vertical meridian in the opposite-hemifield condition, or 2) above and below the horizontal meridian within the same hemifield. Using the method of constant stimuli, we measured phase detection thresholds across 11 oscillation frequencies (1–31 Hz), conducting 25 repetitions for each of the randomly interleaved conditions. To prevent visual offset artifacts, stimuli oscillated continuously until subjects responded. Thresholds were determined by fitting a cumulative normal function with a lower asymptote parameter. Results: Inter-stimulus spacing showed a significant main effect: subjects discriminated phase at higher frequencies for closely spaced stimuli (11.13 Hz) than for farther-spaced stimuli (8.15 Hz). The main effect of hemifield was not significant, and no significant interaction with distance was observed. Notably, the asymptote parameter differed significantly from zero in the near condition, with subjects retaining a modest (~60% correct) but significant ability to determine phase even at the highest frequencies tested. Conclusion: The advantage of near stimuli suggests the involvement of a low-level primary sensory mechanism, such as local motion detection circuits.
In contrast, comparing two far-spaced stimuli requires higher-level (non-local) and slower mechanisms whose timing is consistent with conscious awareness. Future work should consider mechanisms such as discrete perception and onset artifacts similar to the Fröhlich illusion.
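The fitted function described above, a cumulative normal with a lower asymptote, can be sketched as follows. This is a hedged illustration, not the authors' code: the parameter names and the direction of the curve (accuracy falling from near 1 toward a non-chance asymptote as frequency increases) are assumptions based on the abstract.

```python
import math

# Illustrative psychometric function for the phase-judgment task: a
# cumulative normal over oscillation frequency with a lower asymptote.
# Parameter names and curve direction are assumptions, not the authors' fit.

def norm_cdf(x):
    """Standard normal CDF computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def psychometric(freq, mu, sigma, lower):
    """Proportion correct vs. frequency, decaying from ~1 at low
    frequencies toward a lower asymptote at high frequencies."""
    return lower + (1.0 - lower) * (1.0 - norm_cdf((freq - mu) / sigma))

# At freq == mu the curve sits halfway between 1 and the asymptote:
assert abs(psychometric(10.0, 10.0, 2.0, 0.6) - 0.8) < 1e-9
```

A nonzero `lower` parameter captures the reported residual (~60% correct) phase sensitivity at the highest frequencies, which a curve forced to chance would miss.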
Talk 4, 3:15 pm
The appearance of orientation repulsion changes with developing temporal expectation
Tomoya Nakamura1,2, Ikuya Murakami1; 1The University of Tokyo, 2Japan Society for the Promotion of Science
Anticipating when future events will happen improves our performance by facilitating visuomotor processing at various stages, from perception to action. We investigated whether such temporal expectation also influences the appearance of orientation repulsion, wherein a vertical target subjectively appears tilted away from the orientation of a surrounding inducer. As the inducer, eight circularly arranged Gabor patches were continuously presented on both sides of the fixation point. As the target, another Gabor patch was flashed at the center of one of the inducers. Participants reported whether the target appeared tilted clockwise or counterclockwise from vertical. In Experiment 1, prior to the target onset, auditory temporal cues were provided five times at constant intervals of 400 ms. Participants were instructed to attend to the moment of the fifth cue, as the target most often (with 69% probability) appeared at that moment. The target otherwise appeared 200 ms earlier or later than the anticipated moment. Repulsion significantly decreased when the target appeared earlier than anticipated. In Experiment 2, to isolate the effect of automatic entrainment to the cue rhythm, the cue was repeated every 450 ms throughout a session, and the target was presented either in phase or out of phase with the rhythm with equal probability. Anticipating the target onset was virtually impossible in this setup, and indeed, no change in repulsion was observed. In Experiment 3, to focus on the effect of hazard rates, a single cue was provided, and the target was presented after one of three intervals (200, 400, or 600 ms) with equal probability. Although the cue was uninformative about the target onset, repulsion significantly decreased as the cue-target interval increased.
These findings suggest that developing temporal expectation, especially expectation associated with hazard rates, promotes premature decisions about perceptual content that has not yet fully undergone contextual modulation in low-level visual processing.
Acknowledgements: Supported by KAKENHI 21J20400, 22KJ0555, 18H05523, and 23H01052
Talk 5, 3:30 pm
Saccadic time compression and brain dynamics: from regions to whole brain network
Saccades influence time perception, but the associated neural mechanisms remain elusive. We explored the cortical dynamics of perisaccadic time perception through a combination of psychophysics, EEG, source localization, and graph theory analysis (GTA). Twenty-one participants viewed a reference stimulus sequence followed by a test stimulus, presented either just before a saccade or during sustained fixation. Participants then judged the duration of the test relative to the reference. In previous studies we found that stimulus repetition and saccade events interacted at the level of sensorimotor brain dynamics (Ghaderi et al., Cerebral Cortex, 2023) and perceived stimulus duration (Ghaderi et al., Heliyon, 2022). Here, we combined these two approaches to investigate brain dynamics related to perceived stimulus duration. Source localization revealed dynamics of cortical activation, predominantly starting in early visual areas and concluding in higher-level ‘cognitive’ areas (frontal cortex and the anterior cingulate cortex (ACC)). The GTA highlighted the pivotal roles of three groups of brain regions: 1) visual, 2) temporal and parahippocampal, and 3) frontal and ACC. The involvement of these regions suggests that early visual areas, in concert with higher visual processing, may initially influence time perception on underestimated trials. Subsequently, a top-down mechanism could be engaged in processing these visual signals, leading to increased activity in frontal cortex and decreased activity in the ACC, likely associated with decision-making errors. This mechanism also involved heightened activity in memory-related regions during underestimation of time, indicating that increased activation of these areas may accompany errors related to time compression. The whole-network analysis revealed significant differences in network features between underestimated and correctly judged trials.
These results imply a potential link between time compression and processing in functional networks, suggesting that network characteristics (integration, segregation, synchronization stability, and complexity) play a role in shaping our perception of brief durations.
Acknowledgements: Supported by an NSERC Discovery Grant and a VISTA Fellowship, funded by CFREF.
Talk 6, 3:45 pm
Optogenetic Stimulation of Inferotemporal Cortex is Perceived Earlier than Stimulation
Drew Nguyen1, Elia Shahbazi1, Timothy Ma2, Arash Afraz1; 1National Institutes of Health, 2New York University
Local stimulation in high-level cortical visual areas perturbs the contents of visual perception. We have previously demonstrated that visual events evoked by optogenetic stimulation in IT cortex can be reconstructed using a method dubbed “perceptography”. While perceptography informs us about the contents of stimulation-evoked perceptual events, we do not know when they are perceived relative to external physical events. In this study, we used high-throughput behavioral optogenetics coupled with visual interference to measure when stimulation-evoked perceptual events are perceived relative to concurrent sensory input. An adult macaque monkey was trained to behaviorally detect and report a brief optogenetic excitatory impulse delivered to its central IT cortex. The animal started each 1.6 s trial by fixating on a randomly chosen computer-generated image (8 deg.). A ~1×1 mm area of the IT cortex was optogenetically stimulated in half of the trials at random, for 60 ms halfway through the image presentation, using an implanted LED array. We hypothesized that interrupting the image presentation at the proper time could mask the stimulation-evoked perceptual event. This was accomplished by presenting high-contrast visual noise (12 deg.) at one of 11 different time points during image presentation. After each trial, the animal reported whether it had been stimulated by looking at one of two presented targets, with liquid reward for correct reports. We find that the monkey’s performance varies with the onset time of the visual noise. Presentation of visual noise 200 ms prior to stimulation elicited a significantly larger miss rate compared to baseline, reflected in a significant decrease in the monkey’s d′ for the same noise onset time relative to baseline. Furthermore, we find that perceptography with image perturbations presented 200 ms prior to stimulation induced a higher false alarm rate than image perturbations presented simultaneously with stimulation.
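The sensitivity measure reported above is the standard signal-detection d′. A minimal sketch of its computation follows; the hit and false-alarm rates used are invented for illustration and are not the monkey's data.

```python
# Sketch of the sensitivity measure (d-prime) used in the analysis above:
# d' = z(hit rate) - z(false-alarm rate), with z the inverse standard-normal
# CDF. The rates below are illustrative, not the reported data.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity from hit and false-alarm rates."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Symmetric example: 84% hits, 16% false alarms gives d' of about 2.
assert 1.9 < d_prime(0.84, 0.16) < 2.1
```

A mask that raises the miss rate lowers the hit rate, so the drop in d′ at the 200 ms noise onset falls directly out of this formula.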
Talk 7, 4:00 pm
Relationship between V1 spiking patterns and scalp EEG is frequency-dependent
Despite decades of electroencephalography (EEG) research, the relationship between EEG and underlying spiking dynamics remains unclear. This limits our ability to infer intracranial signals from EEG, a critical step to bridge electrophysiological findings across species and to develop non-invasive brain-machine interfaces (BMIs). We recorded spiking activity from a 32-channel floating microarray permanently implanted in parafoveal V1, along with scalp EEG, in a male macaque monkey. While the animal fixated, the screen flickered at different temporal frequencies (0, 5, 10, 20, and 40 Hz) to induce steady-state visual evoked potentials (SSVEPs). The primary advantage of SSVEPs is that they generate high signal-to-noise ratios. We analyzed the relationship between the SSVEPs in multiunit spiking activity (MUA) and EEG. Both MUA and EEG showed robust SSVEPs, with the strongest EEG response for the 20 Hz stimulus. The MUA also showed strong responses at the harmonics of the stimulus frequencies, which were not evident in EEG. Time-series correlation between trial-averaged EEG and MUA showed the strongest relationship for the 5 and 10 Hz stimuli. Furthermore, correlating MUA with EEG power at different frequencies (1–200 Hz) showed prominent correlations for the 5 Hz stimulus, limited to specific EEG bands (5–10, 10–20, and 40–70 Hz). This correlation pattern was consistent across intracranial electrodes placed at different depths in V1, suggesting that the 5 Hz stimulus is optimal for estimating spiking activity from EEG. Single-trial EEG-MUA correlations lacked stimulus-specific relationships. However, a 10 ms delay in the EEG signal yielded consistent negative correlations with spiking activity across intracranial electrode depths. This suggests that delayed EEG signals may carry information about spiking activity and could be used to estimate MUA from EEG. Our study shows robust relationships between V1 spiking activity and EEG under frequency-specific stimulus conditions.
These results point toward better estimation of cortical spiking activity using non-invasive scalp EEG.
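Quantifying an SSVEP at the stimulation frequency can be sketched as follows. This is a minimal illustration under stated assumptions (synthetic data, a single continuous segment, no trial epoching or noise-floor estimation, which a real pipeline would include); it is not the authors' analysis code.

```python
import numpy as np

# Minimal SSVEP quantification sketch (not the authors' pipeline):
# amplitude of the frequency-tagged component of a signal, read off the
# FFT amplitude spectrum at the bin nearest the tag frequency.

def ssvep_amplitude(signal, fs, f_tag):
    """Amplitude of the component at f_tag Hz in a signal sampled at fs Hz."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / n  # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_tag))]

# Synthetic check: a 5 Hz oscillation (amplitude 3) buried in noise.
fs, dur = 1000, 2.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
sig = 3.0 * np.sin(2 * np.pi * 5 * t) + rng.normal(0.0, 0.5, t.size)
amp = ssvep_amplitude(sig, fs, 5.0)  # recovers roughly the true amplitude 3
```

The same amplitude readout applied per frequency band is one simple way to set up the EEG-power-versus-MUA correlations described above.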