The time course of activating, maintaining, and switching between attentional templates in visual search

Poster Presentation 33.423: Sunday, May 19, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Attention: Features, objects 2


Martin Eimer1, Gordon Dodwell1, Rebecca Nako1; 1Birkbeck, University of London

Research on task switching often focuses on stimulus-guided response selection and execution. In contrast, the processes involved in changing the task settings that govern covert attentional control have been much less well studied. Here, we focus on the dynamics of activating and switching mental representations of target-defining features (attentional templates) during the preparation for visual search. We employed a new high-definition rapid serial probe presentation paradigm, in which lateral “clouds” of multiple densely spaced dots in different colours are presented in rapid succession throughout the interval between target displays that contain a colour-defined target. By measuring N2pc components triggered by cloud probes that match a currently task-relevant colour, feature-specific search template activation processes can be tracked with very high temporal resolution. Participants prepared for and responded to a specific “early” target colour that appeared on a subset of trials, or switched to a different “late” target colour on other trials. In one experiment, the absence of the early target was the cue for switching to the late target colour. In a second experiment, an explicit stay/switch cue appeared when the early target was absent, indicating that the previous template should either be maintained or changed to a different colour. Results showed that target templates were activated and switched with remarkable speed and temporal precision, in line with changes in task demands. They also provided new evidence for the simultaneous co-activation of multiple attentional templates.

Acknowledgements: This work was funded by a grant from the Economic and Social Research Council, UK (ES/V002708/1).