Visual Search 2

Talk Session: Tuesday, May 21, 2024, 10:45 am – 12:15 pm, Talk Room 2
Moderator: Wilson Geisler, University of Texas at Austin

Talk 1, 10:45 am, 52.21

Temporal dynamics of multiple attentional template activation during preparation for search

Anna Grubert1, Ziyi Wang1, Mikel Jimenez1, Ella Williams2, Roger Remington3, Martin Eimer4; 1Durham University, 2Oxford University, 3University of Minnesota, 4Birkbeck, University of London

Visual search for known objects is guided by attentional templates (target representations held in working memory), which are activated prior to search. We used an RSVP paradigm to track the temporal dynamics of template activation when multiple colours are task relevant. Search displays containing a pre-defined colour target and five differently coloured distractors were shown every 1,600 ms. Every 200 ms between successive searches, a target- or distractor-colour probe was presented. N2pc components (electrophysiological markers indexing attentional capture) were measured at each probe’s temporal position prior to search to determine when in time attentional templates were activated. Target-colour probe N2pc amplitudes increased during the preparation period and were largest for probes directly preceding the next search display. This pattern of transient template activation was identical in single- and two-colour search, and probe N2pcs were comparable in size whether participants searched for one or two colours, whether the two possible target colours were equiprobable or differed in likelihood, and whether they changed randomly or predictably. Transient template activation was also observed in three-colour search, but only when target colours appeared randomly. When they alternated predictably between search episodes, only probes that matched the upcoming target colour triggered N2pcs. This suggests that two attentional templates can be activated in parallel without any apparent cost. However, with three templates, participants appear to exploit strategic opportunities to reduce working memory load. Notably, distractor-colour probes never triggered N2pcs, demonstrating perfect colour selectivity in one-, two-, and three-colour search.
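As an illustration of the dependent measure, here is a minimal sketch of how a probe-locked N2pc amplitude is conventionally quantified (contralateral minus ipsilateral voltage at lateral posterior electrodes). The array layout, channel indices, and 200-300 ms measurement window are assumptions for the sketch, not details taken from the authors' pipeline.

import numpy as np

def n2pc_amplitude(epochs, times, ch_contra, ch_ipsi, window=(0.2, 0.3)):
    """Mean contralateral-minus-ipsilateral amplitude in an N2pc time window.

    epochs : array of shape (n_trials, n_channels, n_samples), probe-locked EEG,
             restricted to trials with the probe in one hemifield
    times  : array of shape (n_samples,), epoch time points in seconds
    ch_contra, ch_ipsi : indices of the electrodes contralateral and ipsilateral
             to the probe on these trials (e.g., PO7/PO8)
    window : measurement window in seconds (200-300 ms post-probe assumed here)
    """
    mask = (times >= window[0]) & (times <= window[1])
    contra = epochs[:, ch_contra, :][:, mask].mean()
    ipsi = epochs[:, ch_ipsi, :][:, mask].mean()
    return contra - ipsi  # more negative values index stronger attentional capture

# Hypothetical usage: one amplitude per probe position within the 1,600 ms interval
# amplitudes = [n2pc_amplitude(epochs_by_position[p], times, po8_idx, po7_idx)
#               for p in range(n_probe_positions)]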

Acknowledgements: This work was funded by research grants from the Leverhulme Trust (RPG-2020-319), awarded to AG, and the Economic and Social Research Council (ES/V002708/1), awarded to ME.

Talk 2, 11:00 am, 52.22

The Characteristics of Distractor Templates Arising from Learned Suppression

Rory Ferguson1, Bo Yeong Won1; 1California State University, Chico

In visual search, individuals use cognitive representations known as distractor templates to filter out irrelevant distractors and focus on relevant targets. While previous research has predominantly focused on target templates, this study investigates the nature of distractor templates. Specifically, we examine the type of information—whether perceptual, semantic, or a combination of both—that is derived from the repeated suppression of distractors. During the training phase, participants sought a target object from a specific category (e.g., shoe) among distractor objects from a different category (e.g., broom). Following the training phase, without an explicit transition, four different types of distractors were introduced: 1) new exemplars of the trained distractor (e.g., a new broom), 2) semantically related distractors (e.g., bucket), 3) perceptually related distractors (e.g., spatula), and 4) unrelated distractors (e.g., light bulb). We hypothesized that if the distractor template included perceptual but not semantic information, perceptually similar distractors (e.g., spatula) would be suppressed more effectively than semantically related (e.g., bucket) or unrelated distractors (e.g., light bulb), resulting in faster search. Conversely, if the distractor template contained semantic information, semantically related distractors (e.g., bucket) would be suppressed more efficiently than perceptually related (e.g., spatula) or unrelated distractors (e.g., light bulb), leading to faster search. If the distractor template encompassed both semantic and perceptual information, both semantically related (e.g., bucket) and perceptually related (e.g., spatula) distractors would yield faster search than unrelated distractors (e.g., light bulb). We found that the distractor template formed through repeated exposure contains both semantic and perceptual information to some extent. Notably, distractor suppression extends beyond mere feature-based information, incorporating semantic details of distractors encountered repeatedly. These novel findings highlight how attentional guidance during visual search is influenced not only by feature-based but also by semantic-based processes.

Talk 3, 11:15 am, 52.23

Looking for Tampa Buccaneers: Familiar sport logos are found more efficiently in hybrid search

Dyllan Simpson1, Lauren Williams1, Viola Störmer2, Timothy Brady1; 1University of California, San Diego, 2Dartmouth College

In hybrid search tasks, observers search the environment for multiple target items they hold in memory (e.g., locating ingredients for a recipe in the supermarket). Previous research showed that search performance deteriorates with the number of targets memorized (Wolfe, 2012). The current studies tested how the familiarity and activation level of memory items affect hybrid search efficiency. In the first two experiments, we contrasted performance for a set of 16 search targets seen once with a set where memory strength was increased by repeating items eight times (Exp. 1) or by repeating items and asking questions about them (e.g., “What is the primary use for this object?”) to encourage deeper processing (Exp. 2). In a third experiment, we selected participants based on their self-reported expertise in sports to capitalize on their strong memories for certain sport logos (NFL vs. NHL fans). We then compared search for 16 targets in their domain of expertise with 12 targets from the other sport. In all experiments, after the memorization and memory test phases, participants performed a visual search at small or large set sizes (6 vs. 12 or 8 vs. 16). Search performance was measured using inverse efficiency scores (IES) to account for speed/accuracy tradeoffs. Across all experiments (overall N=143), IES was generally lower for the high-strength memory condition than for the low-strength memory condition. This indicates better search performance for target sets with stronger memory representations, even when the stronger memory list was larger. Broadly, these data contrast with the idea of a simple search through lists of items in memory. They show that differences in memory strength, and therefore differences in how accessible items are, can account for key aspects of memory set size effects in hybrid search.
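Inverse efficiency is conventionally computed as the mean correct response time divided by the proportion of correct responses in a condition. A minimal sketch under that conventional definition follows; the column names and data frame are illustrative assumptions, not the authors' data or code.

import pandas as pd

def inverse_efficiency(df: pd.DataFrame) -> pd.Series:
    """Inverse efficiency score per condition: mean correct RT / proportion correct.

    Expects hypothetical columns 'condition', 'rt' (seconds), and 'correct' (0/1);
    these names are assumptions for the sketch.
    """
    accuracy = df.groupby("condition")["correct"].mean()
    mean_correct_rt = df[df["correct"] == 1].groupby("condition")["rt"].mean()
    return mean_correct_rt / accuracy  # lower IES = more efficient search

# Hypothetical usage:
# ies = inverse_efficiency(trials_df)
# print(ies.sort_values())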

Talk 4, 11:30 am, 52.24

Understanding Covert Search in Noise Backgrounds Using Heuristic Decision Analysis

Anqi Zhang1,2, Wilson Geisler1,3; 1Center for Perceptual Systems, University of Texas at Austin, 2Department of Physics, University of Texas at Austin, 3Department of Psychology, University of Texas at Austin

A classic covert search paradigm is to measure search accuracy as a function of the number of potential target locations at a fixed retinal eccentricity, which minimizes the differences in sensitivity across the potential locations. For well-separated targets there are many cases where the effect of the number of locations (the set size) is predicted by parallel unlimited-capacity processing (a Bayes-optimal decision process). Here we measured search accuracy for 19 well-separated potential target locations that tiled the central 16 deg in a triangular array. The search display was presented for 250 ms (the duration of a typical fixation in overt search). Each location contained a 3.5 deg patch of white noise. On half the trials there was no target, and on half the trials a small wavelet target was added to the center of one of the 19 locations. The task was to indicate the location of the target or that it was absent. To precisely characterize eccentricity effects, we measured the detectability of the target at each location in a separate experiment. Under the assumption of statistical independence, we found that human search accuracy slightly exceeded that of the Bayes optimum, and that the observers suffered a modest loss of sensitivity in the fovea (foveal neglect). Furthermore, the observers achieved this even though the Bayes-optimal decision process uses precise knowledge of the sensitivity (d’) at each potential location, which varied substantially across the search locations. These seemingly impossible results may be explained by two plausible factors. First, we show that a simple heuristic decision rule that assumes a fixed sensitivity at all potential locations is very close to optimal. Second, we show that intrinsic temporal variations in overall sensitivity could explain how search performance can be slightly above the optimal performance predicted under statistical independence.
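To make the contrast between the optimal rule and the fixed-sensitivity heuristic concrete, here is a minimal simulation sketch under a standard independent-Gaussian signal-detection model of detection-plus-localization search. The model structure, parameter values, and function names are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def simulate_search(d_primes, rule="optimal", n_trials=50_000, p_present=0.5):
    """Proportion correct in a detection-plus-localization covert search task.

    Assumes each location i yields a response r_i ~ N(0, 1), with d'_i added at
    the target location on target-present trials (an illustrative assumption).
    rule="optimal" uses the true d' at every location; rule="heuristic" assumes
    a single fixed d' (the mean across locations) everywhere.
    """
    d = np.asarray(d_primes, dtype=float)
    n = d.size
    d_eff = d if rule == "optimal" else np.full(n, d.mean())

    n_correct = 0
    for _ in range(n_trials):
        present = rng.random() < p_present
        loc = rng.integers(n) if present else -1
        r = rng.standard_normal(n)
        if present:
            r[loc] += d[loc]
        # Log likelihood ratio for "target at location i" vs. "no target at i"
        llr = d_eff * r - 0.5 * d_eff ** 2
        # Posterior odds of "target present somewhere" vs. "absent",
        # with a uniform prior over the n locations
        odds = (p_present / (1.0 - p_present)) * np.exp(llr).mean()
        response = int(np.argmax(llr)) if odds > 1.0 else -1
        n_correct += int(response == loc)  # -1 == -1 counts correct rejections
    return n_correct / n_trials

# Hypothetical usage: sensitivities that fall off with eccentricity across 19 locations
d_primes = np.linspace(2.5, 1.5, 19)
print("optimal  :", simulate_search(d_primes, "optimal"))
print("heuristic:", simulate_search(d_primes, "heuristic"))

With sensitivities in this range, the heuristic rule typically comes within a percentage point or so of the optimal rule, which is the sense in which a fixed-sensitivity assumption can be "very close to optimal."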

Acknowledgements: National Institutes of Health (EY024662, EY11747).

Talk 5, 11:45 am, 52.25

Knowing what you missed in mixed hybrid visual search

Ava Mitra1, Jeremy Wolfe1,2; 1Brigham and Women's Hospital, 2Harvard Medical School

Mixed hybrid search is a model task for investigating errors in everyday visual search when simultaneously searching for multiple types of targets (e.g., finding a specific freeway exit while also searching for the category of “obstacles” such as barriers, workers, or raccoons). In mixed hybrid search, people are better at finding specific items but often miss the more categorical targets. Using methodologies from the Inattentional Blindness literature, a previous study found that when participants miss items during search, they can identify the correct item significantly above chance in a subsequent 2AFC identification task, even while reporting little or no awareness of missing any items. What type of information is retained about these missed items that later enables one to identify them correctly? Might participants have rough categorical representations of the missed item, even when guessing about the specific item within the category? Our participants searched for two specific items and two categories of items, with 0, 1, or 2 targets present. Stimuli were visible until participants responded. Following each trial, participants rated their confidence in their search response (0-100) and then performed a 2AFC or 6AFC task to identify potential missed targets. Finally, they reported their confidence in their forced-choice selections. In the 6AFC task for categorical missed targets, participants identified the exact missed item 46% of the time (chance is 16.6%; t(12)=5.1, p=0.003). Participants answered the 6AFC correctly when they were uncertain about their search performance (confidence of 34 on a 100-point scale). However, when they were confident that no target had been missed (confidence of 88), they guessed in the 6AFC task, both about the specific item and about its category (40% correct, chance = 40%). Overall, observers have some awareness of missed information in search, even when the search is self-terminated. However, when they are sure they missed nothing, there does not appear to be any subsequently recoverable information.
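For reference, chance in a six-alternative forced choice is 1/6 (about 16.7%), and the above-chance comparison is a one-sample t-test of per-participant accuracies against that value. A minimal sketch follows; the 13 accuracy values are hypothetical placeholders, not the authors' data.

import numpy as np
from scipy import stats

# Hypothetical per-participant 6AFC accuracies for missed categorical targets
acc = np.array([0.41, 0.52, 0.38, 0.49, 0.55, 0.44, 0.40,
                0.47, 0.50, 0.43, 0.48, 0.46, 0.45])

chance = 1 / 6  # six response alternatives
t, p = stats.ttest_1samp(acc, popmean=chance)
print(f"mean = {acc.mean():.2f}, chance = {chance:.3f}, "
      f"t({acc.size - 1}) = {t:.2f}, p = {p:.2g}")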

Acknowledgements: NEI EY017001

Talk 6, 12:00 pm, 52.26

How inflexible is the attentional bias towards recently selected locations?

Daniel Toledano1, Dominique Lamy1; 1Tel Aviv University

Attention is strongly biased towards the location where a previous target was recently found. This priming-of-location (PoL) effect is thought to reflect a primitive mechanism by which selecting an object automatically and proactively enhances the attentional priority at its location. This account predicts that PoL should be unaffected by changes in task context. However, in most previous PoL studies, the task context remained constant. Here, we tested this prediction using a probe paradigm. We manipulated task context by interleaving search trials, in which participants searched for a shape target among nontargets (2/3 of trials), search-probe trials, in which they reported letters briefly superimposed on the search display after a short delay (1/6), and probe trials, in which only the letters appeared (1/6). In Experiments 1 and 2, we found that a letter was more likely to be reported when it appeared at the previous target location than elsewhere. Crucially, this bias was similar when the task context repeated (search→search-probe sequences) and when it changed (search→probe sequences). However, in these experiments participants expected a search task on most trials. Therefore, even when the actual context changed, the expected context did not. In Experiment 3, we reversed the task probabilities (probe task on 2/3 of the trials), and in Experiment 4, we used an AABB design, such that the upcoming task was known with 100% certainty. The bias to report the letter from the previous target location was reduced as the task-change expectation increased. Interestingly, in probe→search sequences, RTs in the search task were faster when the target appeared at the location of a previously reported letter than elsewhere, in all experiments, but this effect was not modulated by task-change expectations. Overall, our findings indicate that selecting an object proactively enhances the attentional priority at its location, but expectations about the task context reduce this bias.