Temporal Coherence Shapes Behavioral and Neural Representations of Action
Poster Presentation 16.341: Friday, May 15, 2026, 3:45 – 6:00 pm, Banyan Breezeway
Session: Temporal Processing: Duration and timing perception
Ali Feza Karahaliloglu1, Burcu Ayşen Ürgen1; 1Bilkent University
Temporal receptive windows (TRWs) describe the timescales over which sensory systems integrate dynamic information. Although TRWs are typically studied using long naturalistic stimuli, it remains unclear whether similarly graded perceptual signatures can be elicited with short, well-controlled stimuli, and how attention modulates these dynamics. We developed a new temporal-scrambling paradigm using 1.5-s action videos in which an actor dragged or rotated a spherical object. Each video was divided into equal-length clips whose order was permuted pseudorandomly to generate eight scrambling levels. In alternating blocks, participants attended either the action or a central fixation target. During action-attention blocks, participants judged whether the object moved toward or away from the actor. Twelve participants completed the task behaviorally. Accuracy increased and response times decreased as videos became more intact (Accuracy: F(7,77)=59.43, p<10⁻¹²; RT: F(7,77)=5.68, p<10⁻³). A psychometric fit revealed a robust positive slope (mean=0.507 log-odds/level), explaining 90% of group-level accuracy variance. A small “toward” response bias diminished with temporal coherence, consistent with reduced uncertainty on intact trials. These results demonstrate that short, controlled stimuli can elicit clear TRW-like behavioral signatures.

Three participants were scanned with 3T fMRI while performing the same task. Intact-versus-scrambled contrasts produced stronger responses in higher-level regions of the action observation network (AON), whereas the opposite contrast yielded clusters in early visual cortex. We assessed neural sensitivity to temporal coherence using permutation-based decoding of attend-to-action data. Decoding accuracy was reliably above chance across subjects, with the strongest effects in hMT (Cohen’s d ≈ 5.0) and PMd (d ≈ 4.2), and additional robust decoding in AIPS, IPL, and mid-level visual cortex (V2/V3).
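The temporal-scrambling manipulation can be sketched as follows. The frame count (45 frames, i.e. 1.5 s at an assumed 30 fps) and the clip count are illustrative, since the abstract specifies only the video duration and the number of scrambling levels; actual frame rates and clip counts per level may differ.

```python
import numpy as np

def scramble_video(frames: np.ndarray, n_clips: int, rng: np.random.Generator) -> np.ndarray:
    """Divide a frame sequence into equal-length clips and permute clip order.

    frames : array of shape (n_frames, ...), e.g. 45 frames for a 1.5-s video at 30 fps
    n_clips: number of equal-length segments; more clips = finer-grained scrambling
    """
    n_frames = frames.shape[0]
    assert n_frames % n_clips == 0, "frames must divide evenly into clips"
    clips = frames.reshape(n_clips, n_frames // n_clips, *frames.shape[1:])
    order = rng.permutation(n_clips)  # pseudorandom clip order
    return clips[order].reshape(frames.shape)

# Example: a dummy 45-frame "video" (one value per frame), scrambled into 9 clips
rng = np.random.default_rng(0)
video = np.arange(45)[:, None]  # stand-in for (frames, pixels)
scrambled = scramble_video(video, n_clips=9, rng=rng)
```

Scrambling at coarser levels (fewer, longer clips) preserves more local temporal structure, which is what makes the manipulation graded across the eight levels.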
Early visual cortex (V1) showed no decodable structure, indicating that decoding reflects action-relevant temporal integration rather than low-level visual features. These findings introduce a compact paradigm for probing perceptual TRWs and attentional influences on action processing.
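The permutation-based decoding analysis can be illustrated with a generic sketch: compare observed cross-validated decoding accuracy against a null distribution built by shuffling condition labels. The nearest-centroid classifier and leave-one-out scheme here are assumptions for illustration; the abstract does not specify the classifier or cross-validation procedure used.

```python
import numpy as np

def permutation_decoding(patterns, labels, n_perm=1000, seed=0):
    """Permutation test for above-chance decoding of condition labels
    from multivoxel patterns, using a nearest-centroid classifier with
    leave-one-out cross-validation (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)

    def loo_accuracy(y):
        correct = 0
        for i in range(len(y)):
            train = np.ones(len(y), dtype=bool)
            train[i] = False
            # Class centroids from the training folds
            cents = {c: patterns[train & (y == c)].mean(axis=0)
                     for c in np.unique(y[train])}
            pred = min(cents, key=lambda c: np.linalg.norm(patterns[i] - cents[c]))
            correct += pred == y[i]
        return correct / len(y)

    observed = loo_accuracy(labels)
    # Null distribution: decoding accuracy under shuffled labels
    null = np.array([loo_accuracy(rng.permutation(labels)) for _ in range(n_perm)])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

A region with no decodable structure (as reported for V1) would yield an observed accuracy that falls within the null distribution, giving a non-significant p-value.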
Acknowledgements: This project is funded by TÜBİTAK (The Scientific and Technological Research Council of Türkiye), ARDEB (Academic Research Funding Program Directorate), under the 1001 Scientific and Technological Research Projects Funding Program, Project No. 124K961.