Visual search for known objects is controlled by visual target representations held in visual working memory. Can visual search also be guided by verbal target descriptions held in verbal working memory? And would such cross-modality guidance be as efficient as guidance from visual target representations? To answer these questions, we measured N2pc components of the event-related potential in two blocked search tasks in which participants were given either visual or verbal target descriptions. Search efficiency was manipulated between trials in terms of memory load (activation of one versus two colour templates). Search displays in the two tasks were physically identical. Each contained six differently coloured, vertically or horizontally oriented bars. Each search display was preceded by a cue display indicating the one or two target colour(s) relevant in the upcoming search display. In the visual task, cues were coloured squares; in the verbal task, they were the initial letters of the colour words (e.g., R for red). Participants' task was to find the bar that matched (one of) the cued target colour(s) and report its orientation. N2pc components measured in the visual task were slightly delayed and attenuated in high- versus low-load trials. Load costs of this magnitude have been attributed to mutual inhibition between two simultaneously activated colour templates and interpreted as reflecting an efficient search mode. The same pattern of N2pcs was observed in the verbal task. However, the relative load costs in the verbal task were substantially increased in terms of both N2pc amplitudes and latencies as compared to the visual task. These results suggest qualitative differences between visual search guided by visual and by verbal target representations: verbal guidance is less efficient, and possibly even serial rather than parallel.
Acknowledgements: This work was supported by a research grant from the Leverhulme Trust (RPG-2020-319) awarded to AG.