Feedforward and Feedback Effects in Visually Perceived Duration: The Influence of Semantic and Mid-Level Visual Features

Poster Presentation 16.345: Friday, May 15, 2026, 3:45 – 6:00 pm, Banyan Breezeway
Session: Temporal Processing: Duration and timing perception

Martin Wiener1, Anisha Krishnan1; 1George Mason University

A plethora of experimental evidence demonstrates that the perception of duration can be influenced by stimulus features such as size, numerosity, brightness, motion, emotion, clutter, and memorability. Extant theories explain these effects by postulating a domain-general magnitude system, increased neuronal firing, attentional allocation, or processing efficiency. One way to adjudicate between these accounts is to determine whether the effects arise from feedforward, bottom-up processes across the visual hierarchy or from feedback, top-down processes projecting from higher-order cortical regions back to lower-level ones. To address this, we conducted an experiment (n=50) in which subjects rapidly classified the durations of visual stimuli consisting of real objects or texture-matched images that preserve mid-level visual features yet lack semantic identity, known as “texforms” (Long et al., 2016). The images depicted animate or inanimate objects that were either large or small in real-world size, offering a test of magnitude-based accounts. Notably, previous work has shown that, despite containing only mid-level features, texforms can still activate ventral stream responses, albeit to a lesser extent and intensity, and retain some identifiable information. Results demonstrated that, for regular images, larger real-world size induced time dilation, with no effect of animacy; no such effects were observed within texforms. Surprisingly, we also observed that regular images were dilated relative to texforms, and to a much larger degree. A follow-up experiment (n=50) in which texforms and regular images were intermixed replicated these effects, again with no effects within texform images. To probe this further, we compared texform and regular images in a recurrent convolutional neural network and observed that regular images were processed faster than texforms through the network, but only at higher layers. These findings support the idea that recurrent or feedback signals to visual cortex drive time dilation effects.
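
The abstract does not specify how "processing speed" in the recurrent network was quantified. One common operationalization is the number of recurrent steps required for the readout to reach a confidence threshold, compared between image sets. The sketch below (Python/PyTorch) illustrates that idea with a toy recurrent model; the architecture, threshold, and random input tensors are placeholders for illustration only, not the authors' model or analysis.

```python
# Hypothetical sketch (not the authors' code): quantify "processing speed" in a
# recurrent CNN as the number of recurrent steps needed for the readout's
# softmax confidence to exceed a threshold, compared between image sets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRecurrentCNN(nn.Module):
    """Toy recurrent CNN: a feedforward conv stack plus a lateral (recurrent)
    convolution at the top layer, unrolled for a fixed number of time steps."""
    def __init__(self, n_classes=2, n_steps=10):
        super().__init__()
        self.n_steps = n_steps
        self.feedforward = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.recurrent = nn.Conv2d(32, 32, 3, padding=1)  # lateral recurrence
        self.readout = nn.Linear(32, n_classes)

    def forward(self, x):
        h = self.feedforward(x)
        state = torch.zeros_like(h)
        logits_per_step = []
        for _ in range(self.n_steps):
            state = F.relu(h + self.recurrent(state))      # recurrent update
            pooled = state.mean(dim=(2, 3))                # global average pool
            logits_per_step.append(self.readout(pooled))
        return torch.stack(logits_per_step, dim=0)         # (steps, batch, classes)

def steps_to_threshold(logits_per_step, threshold=0.9):
    """Earliest recurrent step at which softmax confidence exceeds the
    threshold; returns n_steps if the threshold is never reached."""
    conf = F.softmax(logits_per_step, dim=-1).max(dim=-1).values  # (steps, batch)
    n_steps, batch = conf.shape
    reached = conf >= threshold
    first = torch.full((batch,), n_steps, dtype=torch.long)
    for t in range(n_steps):
        newly = reached[t] & (first == n_steps)
        first[newly] = t
    return first

# Usage with random tensors standing in for the two stimulus classes.
model = TinyRecurrentCNN()
regular_imgs = torch.randn(8, 3, 64, 64)   # stand-in for intact object images
texform_imgs = torch.randn(8, 3, 64, 64)   # stand-in for texform images
with torch.no_grad():
    rt_regular = steps_to_threshold(model(regular_imgs)).float().mean()
    rt_texform = steps_to_threshold(model(texform_imgs)).float().mean()
print(f"mean steps to threshold: regular={rt_regular:.2f}, texform={rt_texform:.2f}")
```

Under this kind of measure, "faster processing at higher layers" would correspond to the readout on late-stage activations reaching its decision in fewer recurrent steps for regular images than for texforms.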