Attentional templates for visual motion
Poster Presentation 26.409: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Attention: Features, objects
Xiaoli Zhang1, Taosheng Liu1; 1Michigan State University
Visual attention is guided by attentional templates that store task-relevant information. Maintained templates can capture attention during a secondary search task in which the template feature is task-irrelevant, a phenomenon we refer to as template-driven capture. This capture effect has been used to assess the characteristics of attentional templates for static features such as color and orientation. However, it remains unknown how attentional templates encode dynamic features such as visual motion, even though motion direction can also guide visual search effectively (e.g., Girelli & Luck, 1997). This study explored attentional templates for motion direction using the template-driven capture paradigm. Participants were cued to attend to a particular motion direction and performed a sustained feature-based selection task. The selection task was intermixed with a visual search task containing an irrelevant distractor that either matched or mismatched the cued direction. We used the performance difference between these conditions to index the guiding strength of the attentional template. We found template-driven capture for motion direction, indicated by higher inverse efficiency scores on matched than on mismatched search trials. A follow-up experiment examined whether motion direction is automatically recoded into orientation information during maintenance. Current results suggest that orientation features alone produce salience-driven but not template-driven capture. Our results demonstrate that motion direction is actively maintained in attentional templates, providing a potential mechanism for effectively utilizing dynamic information to guide attention. More broadly, our study extends the template-driven capture paradigm to dynamic information and opens new avenues for exploring the properties of attentional templates across diverse features.
Acknowledgements: This work was funded by NIH R01EY032071 to TL.