Dynamic scaling of temporal normalization in human early visual cortices

Poster Presentation 53.419: Tuesday, May 19, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Temporal Processing: Neural mechanisms, models

Minsun Park1, Tomas Knapen2,3, Sam Ling1; 1Boston University, 2Spinoza Center for Neuroimaging, KNAW, 3Vrije Universiteit Amsterdam

To process dynamic inputs efficiently in the temporal domain, the visual system does not simply accumulate sensory inputs linearly. Instead, neural responses to stimuli have been shown to exhibit compressive temporal summation, typically explained by a delayed normalization (DN) model. This model assumes a single decay constant to account for temporal normalization, but prior work has largely restricted testing to sub-second timescales – regimes within which a single decay constant is likely to suffice. However, neural adaptation is known not to adhere to a single fixed timescale, but instead to exhibit power-law behavior. Here, we tested a modified DN model in which the decay constant scales as a power-law function of stimulus duration, across a wider range of durations. We used fMRI to measure BOLD responses in human early visual cortex (V1–V3) to arrays of gratings (45% contrast) presented for eight log-spaced durations (33 msec to 4.2 sec) in a fast event-related design. We observed strong sub-additivity in the BOLD response with increasing stimulus duration, across all visual areas. The standard DN model with a fixed decay constant successfully predicted responses at brief durations, replicating previous findings. Interestingly, however, it failed to account for response compression at longer timescales: the model significantly underestimated the magnitude of compression at longer durations (e.g., > 1 sec). In contrast, a modified DN model with a scaled decay constant captured visuocortical response amplitudes across the full range of durations, challenging the assumption of a unitary temporal normalization pool. This dynamic DN model operates over a broader continuum of timescales by flexibly adjusting to the temporal structure of visual inputs.
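The DN model described above can be sketched as a linear drive divided by a delayed (low-pass filtered) copy of itself. Below is a minimal, illustrative simulation of that idea, along with the modification in which the decay constant grows as a power law of stimulus duration. All parameter values (tau1, tau2, n, sigma, the power-law exponent alpha) are assumptions chosen for illustration, not the fitted values from this study; the code simply demonstrates that such a model produces sub-additive temporal summation.

```python
import numpy as np

def dn_response(stim, dt=0.001, tau1=0.05, tau2=0.1, n=2.0, sigma=0.1):
    """Simplified delayed normalization (DN) sketch.

    stim : 1-D array of stimulus contrast over time (sampled at dt seconds).
    The linear drive (stimulus convolved with a gamma-like impulse
    response, time constant tau1) is divided by a normalization pool:
    the same drive convolved with an exponential decay of time constant
    tau2. Parameter values are illustrative assumptions.
    """
    t = np.arange(0.0, 1.0, dt)
    h1 = (t / tau1) * np.exp(-t / tau1)        # linear impulse response
    h1 /= h1.sum()
    drive = np.convolve(stim, h1)[:len(stim)]
    h2 = np.exp(-t / tau2)                     # delayed normalization pool
    h2 /= h2.sum()
    pool = np.convolve(drive, h2)[:len(stim)]
    return drive**n / (sigma**n + pool**n)

def summed_response(duration, dt=0.001, scale_tau=False,
                    tau2_0=0.1, alpha=0.5):
    """Time-integrated response to a step stimulus of a given duration (s).

    If scale_tau is True, the decay constant scales as a power-law
    function of duration (the modified DN model); the exponent alpha
    is a hypothetical value for illustration only.
    """
    n_on = int(duration / dt)
    stim = np.zeros(n_on + int(1.0 / dt))      # stimulus plus 1 s tail
    stim[:n_on] = 1.0
    tau2 = tau2_0 * duration**alpha if scale_tau else tau2_0
    return dn_response(stim, dt=dt, tau2=tau2).sum() * dt
```

Because the normalization pool lags the drive, the onset transient contributes disproportionately: doubling the stimulus duration yields less than double the integrated response (e.g., `summed_response(1.0) < 2 * summed_response(0.5)`), which is the compressive temporal summation the abstract describes.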

Acknowledgements: This work was funded by the National Institutes of Health (NIH) Grant R01EY028163 to S. Ling.