Visual Search: Attention, mechanisms, models
Talk Session: Monday, May 18, 2026, 10:45 am – 12:15 pm, Talk Room 1
Moderator: Jeremy Wolfe, Brigham and Women's Hospital / Harvard Medical School
Talk 1, 10:45 am, 42.11
Neural representations of spatial priority during visual search in feature-selective and parietal cortex
Daniel D Thayer1, Thomas C Sprague1; 1University of California, Santa Barbara
Attention is directed both to locations that are goal-relevant and to locations that are image-salient but task-irrelevant. To identify important locations, priority map theory posits that feature-selective retinotopic maps (e.g., for color or motion) index salient (Thayer & Sprague, 2023) and relevant (Thayer & Sprague, 2025) locations based on their preferred feature dimension, and that these maps are subsequently integrated into a feature-agnostic priority map in which the most important location guides attention. Although feature dimension maps are crucial for directing attention, it is unclear how neural responses within these maps reflect the priority signals associated with relevant and salient stimuli during visual search. Here, we used a covert search task to evaluate how concurrently present relevant and salient items are represented in neural feature dimension maps. On each trial, participants were cued to search for a target defined by a specific color or motion direction in a subsequent search array containing 8 colorful moving-dot stimuli. All array items had homogeneous features except for the target, which differed solely on the cued feature dimension, and an occasional salient distractor, which had a unique color or motion direction. We used an encoding-based multiple regression analysis on activation patterns in feature-selective retinotopic regions (motion [TO1/TO2] and color [hV4/VO1/VO2] maps) to estimate the response to each individual search array item. Targets and distractors were represented in neural feature dimension maps, and representations were stronger when they were defined by the preferred feature dimension of each region. Furthermore, the time course of feature-selective responses was consistent with behavioral response times. However, target responses in parietal cortex (IPS0/IPS1) correlated most strongly with behavior, suggesting that IPS0/IPS1 may act as an integrated priority map and ultimately determine what is important in the visual field. These results indicate that neural feature dimension maps are crucial for computing attentional priority and that activation profiles in parietal regions guide behavior.
Funding: Alfred P Sloan Research Fellowship, National Eye Institute R01-EY035300
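To make the analysis style concrete, here is a minimal sketch of an encoding-based multiple regression of the kind described, assuming Gaussian spatial receptive fields and a simulated single-trial activation pattern; the receptive-field model, array geometry, and all numbers are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: estimate per-item response amplitudes from a voxel
# activation pattern via least squares. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_items = 200, 8

# Hypothetical geometry: 8 array items on a ring, random voxel RF centers
item_angles = np.linspace(0, 2 * np.pi, n_items, endpoint=False)
item_xy = np.stack([np.cos(item_angles), np.sin(item_angles)], axis=1)
voxel_xy = rng.uniform(-1.5, 1.5, size=(n_voxels, 2))

# Design matrix: each voxel's predicted response to each item under a
# Gaussian spatial encoding model (sigma is an assumption)
sigma = 0.5
dists = np.linalg.norm(voxel_xy[:, None, :] - item_xy[None, :, :], axis=2)
X = np.exp(-dists**2 / (2 * sigma**2))          # (n_voxels, n_items)

# Simulated single-trial pattern: the target (item 0) responds more strongly
true_amps = np.ones(n_items)
true_amps[0] = 2.0
pattern = X @ true_amps + rng.normal(0, 0.1, n_voxels)

# Least-squares estimate of item-wise response amplitudes
amps, *_ = np.linalg.lstsq(X, pattern, rcond=None)
print(np.round(amps, 2))   # item 0 (the target) should stand out
```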
Talk 2, 11:00 am, 42.12
Distinguishing Serial and Parallel Search Using Neural Signatures
Güven Kandemir1,2, Docky Duncan1,2, Dirk van Moorselaar3, Jan Theeuwes1,2,4; 1Vrije Universiteit Amsterdam, 2Institute Brain and Behavior Amsterdam (iBBA), 3Universiteit Utrecht, 4William James Center for Research, ISPA-Instituto Universitario
How do neural responses differ when one searches for a red rose among red poppies versus among white tulips? When a target is highly salient, as a red rose is among white tulips, search can proceed on the basis of the salient feature: all displayed elements are processed in parallel and the target simply pops out, resulting in rapid detection. Conversely, when target–distractor similarity is high, as for a red rose among red poppies, each item must be processed individually or in small groups, so detection slows as the number of distractors increases. Despite the long-standing distinction between parallel and serial search, the neural correlates of these strategies have rarely been contrasted directly, because their interpretation is typically confounded by differences in the visual displays. In this study, we contrasted the neural correlates of the two search strategies for visually identical displays. We biased participants toward parallel or serial search in separate blocks by varying the similarity of the target and the distractors, prompting feature-based or conjunction-based search. Embedded among these inducer trials, test trials were identical across blocks and could be searched with either strategy. Behavioral analyses from 24 participants confirmed that distinct search modes were successfully induced. EEG decoding from 17 posterior electrodes revealed that attentional deployment on test trials differed across search conditions, with significant generalization between test and inducer trials that used the same strategy. The target location was represented differently across strategies, including differences in scalp topography. Comparing these location representations revealed that during parallel, but not serial, search, observers switched strategy when the target was not detected early. In addition, we found condition-specific differences in the neural representation of the target itself, consistent with more advanced stages of processing. These findings demonstrate that parallel and serial search are strongly shaped by history effects even under identical visual stimulation, resulting in distinct neural dynamics during visual search.
Funding: European Research Council (ERC) advanced grant [833029–LEARNATTEND] and Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) SSH Open Competition Behaviour and Education grant [406.21.GO.034]
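As an illustration of the cross-condition generalization analysis, here is a minimal decoding sketch that trains a classifier on inducer trials and tests it on test trials, assuming preprocessed single-trial EEG features (trials × channels) and target-location labels; the classifier choice (LDA), data shapes, and simulated signal are assumptions, not the authors' pipeline.

```python
# Minimal sketch: does a location code learned on inducer trials
# generalize to test trials? All data here are simulated.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_train, n_test, n_chan, n_loc = 200, 80, 17, 4   # assumed sizes

# Simulated data: each target location adds a distinct channel pattern
patterns = rng.normal(0, 1, (n_loc, n_chan))
y_train = rng.integers(0, n_loc, n_train)          # inducer-trial labels
y_test = rng.integers(0, n_loc, n_test)            # test-trial labels
X_train = patterns[y_train] + rng.normal(0, 2, (n_train, n_chan))
X_test = patterns[y_test] + rng.normal(0, 2, (n_test, n_chan))

# Train on inducer trials, evaluate on test trials: above-chance
# accuracy indicates the location representation generalizes
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"cross-condition accuracy: {clf.score(X_test, y_test):.2f} "
      f"(chance = {1 / n_loc:.2f})")
```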
Talk 3, 11:15 am, 42.13
What if you never searched for the same target twice?
Jeremy Wolfe1, Cailey Tennyson2; 1Brigham and Women's Hospital / Harvard Medical School, 2Brigham and Women's Hospital
In standard visual search experiments, observers typically look for the same target over many trials. Using these repeated searches, researchers have studied observers' search templates, intertrial priming, distractor suppression, and more. These are important research topics. However, in the real world, many searches are one-time events; after all, we do not search for the VSS registration desk 100 times in a row. Do the conclusions drawn from blocks of repeated search apply to one-time searches? We had observers search for 6-dimensional conjunction targets. Targets were whole items with an enclosed part, and whole and part each had a color, shape, and orientation. On each trial, the target shared a particular number of features (0-5) with all distractors. 24 observers were tested for 300 trials, 50% target present, with set sizes of 6, 9, and 12. In the unique target condition, a novel target was shown before each trial and never shown again. The comparison condition was a repeated target search in which the same target was used on every trial, again sharing 0-5 features with the distractors on each trial. Observers had unlimited time to encode each unique target. Results: target-present RTs and miss error rates were similar for unique and repeated targets. Target-absent RTs and false alarm rates were higher in the unique condition, especially when targets and distractors shared 4 or 5 features. Perhaps observers simply guessed that targets must be present when targets became very similar to distractors. A second experiment used only distractors that shared five features with each trial's target. RTs were significantly slower in the unique condition, but RT x set size slopes were not significantly steeper, and false alarms were again higher for unique targets. Conclusion: observers pay a cost when the target is new on every trial, but the fundamentals of search behavior remain similar to those in classic repeated target tasks.
Funding: NIH-NEI EY017001
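For readers unfamiliar with the slope analysis, here is a minimal sketch of computing RT x set size slopes by least squares; the condition means below are invented for illustration, not the reported data.

```python
# Minimal sketch: linear RT x set size slopes per condition.
# The set sizes match the abstract; the RTs are hypothetical.
import numpy as np

set_sizes = np.array([6, 9, 12])

def rt_slope(mean_rts_ms):
    """Least-squares slope (ms/item) and intercept of mean RT vs. set size."""
    slope, intercept = np.polyfit(set_sizes, mean_rts_ms, deg=1)
    return slope, intercept

# Hypothetical condition means in ms (not the reported data)
for label, rts in {"repeated": [820, 905, 1010],
                   "unique":   [900, 1000, 1115]}.items():
    s, b = rt_slope(np.array(rts))
    print(f"{label:8s} slope = {s:5.1f} ms/item, intercept = {b:6.1f} ms")
```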
Talk 4, 11:30 am, 42.14
Frequency Over Time Revisited: Learning Schedules and the Fate of Early Bias
Thiti Chainiyom1, Timothy F. Brady2, Chaipat Chunharas1,3,4; 1Cognitive Clinical and Computational Neuroscience Center of Excellence, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand, 2Department of Psychology, University of California San Diego, San Diego, CA, USA, 3Division of Neurology, Department of Medicine, Faculty of Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, Thailand, 4Chula Neuroscience Center, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, Thailand
In a world where patterns keep changing, do things that were common early on gain a lasting search advantage? For example, will an ex-partner always be easy to spot in a crowd, even long after you've stopped actively looking for them? Visual statistical learning shows that repeated exposure builds familiarity, yet it remains unclear whether early high-frequency items continue to benefit search once the pattern shifts. We asked whether an early high-frequency target (A) keeps an advantage over a later high-frequency target (B) when total exposure and low-level features are matched. Twenty-four participants completed two visual search tasks using colored circle/square targets (A and B) with controlled luminance contrast. Each task contained 640 trials: 160 baseline trials and 480 experimental trials, divided into three phases. In Experiment 1, Phase 1 introduced an early frequency bias toward A, Phase 2 raised the frequency of both targets, and Phase 3 tested later performance. In Experiment 2, as in previous work, A was frequent and B was rare in Phase 1; their roles were swapped in Phase 2, and both targets appeared on 50% of Phase 3 trials. We analyzed phase-wise d′ and learning slopes. In Experiment 1, early A–B d′ differences in Phase 1 showed only a weak, non-significant link to Phase 2 (r = .32, p = .12), but reliably predicted A–B differences in Phase 3 (r = .59, p = .003), suggesting that early frequency can leave a familiarity bias that benefits performance later. In Experiment 2, learning slopes were predicted by whether a target was currently frequent or rare; these effects were similar for both targets, and early A–B d′ differences did not reliably carry over once the frequencies were swapped. Together, the results suggest that temporal structure and learning schedule shape whether early frequency biases survive or are overwritten over time.
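As an illustration of the phase-wise d′ measure, here is a minimal sketch assuming per-phase hit, miss, false-alarm, and correct-rejection counts; the log-linear correction and the counts themselves are assumptions for illustration, not the study's data.

```python
# Minimal sketch: d' from response counts, with a log-linear
# correction so extreme rates do not produce infinite z-scores.
from scipy.stats import norm

def dprime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false alarm rate), log-linear corrected."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical Phase 1 counts for targets A and B (not the reported data)
print(f"A: d' = {dprime(70, 10, 8, 72):.2f}")
print(f"B: d' = {dprime(55, 25, 12, 68):.2f}")
```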
Talk 5, 11:45 am, 42.15
Semantic Guidance During Visual Search: Thematic Facilitation vs. Taxonomic Competition
Tim Mousseau1, Joseph Nah2, Sarah Shomstein1, Joy J. Geng2,3; 1Department of Psychological and Brain Sciences, The George Washington University, 2Center for Mind and Brain, University of California Davis, 3Department of Psychology, University of California Davis
Objects are related to one another at different levels of semantic meaning. Taxonomic relationships arise from shared features and categorical membership, whereas thematic relationships stem from real-world co-occurrence. While semantic relatedness is known to guide visual attention, it remains unclear how these two types of relationship differ in their effects. In three experiments (Ns = 331, 337, 20), participants searched for a cued target in an array of four objects. In the two semantic conditions, the target was paired with a taxonomically or thematically related distractor along with two unrelated distractor objects. In the neutral condition, all four objects were unrelated. In all experiments, participants identified the target significantly faster in the thematic (E1-3: M = 1206, 1203, 1120 ms) than in the taxonomic (M = 1329, 1329, 1178 ms) condition (all p < .003), and both semantic conditions were faster than the neutral condition (M = 1381, 1404, 1232 ms; all p < .003). Experiment 3 used eye-tracking to test when and how thematic and taxonomic relationships influence attention. A time-course analysis was conducted on fixation proportions toward targets, related distractors, and unrelated objects across 2 seconds. In the first 500 ms, participants fixated taxonomic distractors (M = 0.118) 3.5 times more often than unrelated distractors (M = 0.033), suggesting early attentional capture (p < .001). In contrast, fixations toward thematic distractors (M = 0.055) were comparable to unrelated objects (M = 0.042; p = .069), showing that thematically related distractors were rarely fixated despite facilitating target search. Critically, participants were twice as likely to fixate taxonomic as thematic distractors (p < .001). In summary, thematic distractors facilitate attention toward targets without being looked at directly, suggesting that naturally co-occurring objects form perceptual groups that can be detected without overt visual attention. Taxonomic distractors, however, competed with the target by spreading attention between objects, perhaps because of shared low-level features that could not be rapidly discriminated.
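To illustrate the time-course analysis, here is a minimal sketch that bins fixations over the 2-second window and computes per-bin fixation proportions; the 500 ms bin width matches the abstract, but the (onset, region-of-interest) data format and the toy trials are assumptions.

```python
# Minimal sketch: proportion of trials with at least one fixation
# on a region of interest (ROI) in each time bin. Data format assumed.
import numpy as np

def fixation_proportions(trials, roi, t_max=2000, bin_ms=500):
    """Per-bin proportion of trials with a fixation on `roi`.

    `trials` is a list of trials; each trial is a list of
    (onset_ms, roi_label) fixation records.
    """
    n_bins = t_max // bin_ms
    hit = np.zeros((len(trials), n_bins), dtype=bool)
    for i, fixations in enumerate(trials):
        for onset, fix_roi in fixations:
            if fix_roi == roi and 0 <= onset < t_max:
                hit[i, onset // bin_ms] = True
    return hit.mean(axis=0)

# Toy usage: the taxonomic distractor is fixated early on trial 1 only
trials = [[(120, "taxonomic"), (600, "target")],
          [(300, "target")]]
print(fixation_proportions(trials, "taxonomic"))  # [0.5 0.  0.  0. ]
```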
Talk 6, 12:00 pm, 42.16
Unlike color, shape is not an intuitively understood dimension for attentional guidance
Haiye Liu1, Jun-Ming Yu1, Alejandro Lleras1, Simona Buetti1; 1University of Illinois Urbana-Champaign
Visual search becomes harder as the target becomes more perceptually similar to the surrounding distractors. Previous work has quantified this relationship using logarithmic search slopes, which increase as target–distractor similarity increases. However, it remains unclear to what extent subjective similarity judgments reflect the objective search difficulty measured in search tasks. Here, search slopes were obtained for a wide range of color pairs (hue differences of ±15° to ±70°) and 10 distinct shape pairs (including house-down, triangles, diamonds, circles, squares, and stars). A new group of participants rated the perceptual similarity of these same color and shape pairs on a 0–100 scale, enabling a direct comparison between subjective similarity structure and previously measured search efficiency. Color similarity showed a strong and highly predictive relationship with search performance (R² = .87), indicating that subjective judgments of color similarity are well calibrated to the effectiveness of color signals in guiding search. In contrast, shape similarity showed only a weak, nonsignificant trend (R² = .31), suggesting that subjective shape similarity judgments do not track search difficulty well. Importantly, comparable search slopes were associated with very different levels of subjective similarity for color versus shape. For example, a distractor color relatively close to the target color yielded a slope of 104 ms/log-unit and was rated as quite similar (62/100) to the target color, yet a five-point star distractor with a nearly identical slope was rated as very dissimilar to the target (17/100). These findings demonstrate that color provides a more robust and perceptually aligned signal for guiding visual search than shape, revealing an important asymmetry in how feature dimensions contribute to attentional guidance. Subjective similarity, it turns out, is not always an accurate window onto the efficiency of perceptual processes; observers may sometimes have wrong intuitions about which feature dimension should be more helpful during search.
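As an illustration of the logarithmic slope measure and its comparison with similarity ratings, here is a minimal sketch that fits RT against the log of set size and correlates the resulting slopes with ratings; the set sizes, RTs, and ratings are invented for illustration, not the reported data.

```python
# Minimal sketch: logarithmic search slopes (ms per log-unit of set
# size) and their correlation with similarity ratings. Data invented.
import numpy as np
from scipy import stats

set_sizes = np.array([4, 8, 16, 32])   # assumed set sizes

def log_slope(mean_rts_ms):
    """Least-squares slope of mean RT against log set size."""
    slope, _ = np.polyfit(np.log(set_sizes), mean_rts_ms, deg=1)
    return slope

# Hypothetical per-pair mean RTs (ms) and similarity ratings (0-100)
slopes = np.array([log_slope(r) for r in
                   ([620, 660, 700, 745],
                    [610, 680, 750, 825],
                    [605, 640, 670, 700])])
ratings = np.array([35, 62, 20])

r, p = stats.pearsonr(ratings, slopes)
print(f"R^2 = {r**2:.2f} (p = {p:.2f})")
```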