V-VSS, June 1-2

Visual Search, Attention

Talk Session: Thursday, June 2, 2022, 8:30 – 9:45 am EDT, Zoom Session

Talk 1, 8:30 am, 81.61

Target-specific image clutter metrics for visual search

Yelda Semizer1, Melchi Michel2; 1New Jersey Institute of Technology, 2Rutgers University

Researchers working across a variety of domains to investigate the effects of visual clutter on performance have developed a handful of measures for characterizing this clutter. Although these measures can provide coarse predictions of search performance, they do not account for the properties of search targets and how these interact with clutter. In particular, they do not consider target-background similarity in quantifying the effect of clutter on search performance. However, when search targets and backgrounds share similar features, performance declines (Semizer & Michel, 2017). Here, we propose two new clutter metrics based on different measures of target-background similarity (i.e., exemplar level and category level) to predict the effect of clutter on search performance. Our metrics compute the similarity between target and background features (i.e., orientation subbands) in images while also accounting for the size of the search target. The exemplar-level metric quantifies the overlap between features of a specific search target (present in the search image) and features of a search background, while the category-level metric quantifies the overlap between features of a search target category and features of a search background; the latter can be used to predict search performance when the target is absent. We tested the predictive power of these metrics, along with that of an existing target-agnostic clutter metric, using a set of search data where the task was to detect and locate categorical targets in a set of natural images. Our results demonstrate that both clutter metrics successfully contributed to explaining differences in search performance as a function of the search target. More importantly, these metrics can predict such differences even for scenes in which the target is absent, suggesting that a categorical representation of the target guides search.
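
The metrics described above lend themselves to a simple computational reading. Below is a minimal sketch, assuming plain gradient-orientation histograms as a stand-in for the orientation subbands mentioned in the abstract; the function names, the histogram-intersection overlap measure, and the size normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def orientation_histogram(image, n_bins=8):
    """Energy-weighted histogram of gradient orientations in [0, pi)."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(orientation, bins=n_bins, range=(0.0, np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist

def exemplar_clutter(background, target_patch, target_size_deg=1.0):
    """Exemplar-level metric: overlap between the orientation features of a
    specific target image and those of the search background, scaled by a
    hypothetical normalization for the target's angular size."""
    overlap = np.minimum(orientation_histogram(background),
                         orientation_histogram(target_patch)).sum()
    return overlap / target_size_deg

def category_clutter(background, target_exemplars, target_size_deg=1.0):
    """Category-level metric: overlap with the mean orientation histogram
    over many exemplars of the target category; applicable to target-absent
    scenes because no specific target image is required."""
    h_category = np.mean([orientation_histogram(t) for t in target_exemplars],
                         axis=0)
    overlap = np.minimum(orientation_histogram(background), h_category).sum()
    return overlap / target_size_deg
```

Because the histograms are normalized, the intersection ranges from 0 (no shared orientation energy) to 1 (identical feature distributions); under this reading, higher overlap predicts more target-specific clutter and slower search.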

Acknowledgements: NSF BCS1456822

Talk 2, 8:45 am, 81.62

Visual search performance for metric and ordinal depth information shows a pattern of dissociation

Ke Zhang1,2, Jiehui Qian1; 1Sun Yat-Sen University, Guangzhou, China, 2Shaoxing University, Shaoxing, China

Neural evidence has shown that the processing of ordinal (categorical) spatial relations and metric (coordinate) spatial information involves different brain areas. Recent behavioral evidence suggests a difference in processing between metric depth (absolute distance) and ordinal depth (spatial relations in depth), but the mechanism underlying the difference is unclear. Here, we investigated the processing of metric and ordinal depth using a visual search task. Items were presented at multiple depth planes defined by binocular disparity, with one item per depth plane. In the metric-search condition, participants searched for a target presented on a particular depth plane, which was previously shown as the target depth, among one to three distractor depth planes. In the ordinal-search condition, they searched for a target with a particular depth order, which was previously indicated by a number (‘1’ indicated the nearest depth plane, ‘2’ the second nearest, and so on), among the distractor depth planes. Performance showed a pattern of dissociation. When searching for a metric depth, the overall reaction time (RT) increased as the depth separation between the target and the distractors became smaller. However, depth separation had no effect on RTs when searching for an ordinal depth. When searching for an ordinal depth, RTs were faster when the target was presented at the nearest and farthest depth planes than when it was presented at the middle planes; no such effect was found when searching for a metric depth. Our findings indicate different underlying mechanisms for processing metric and ordinal depth information, which may involve distinct activation in the dorsal and ventral pathways.

Talk 3, 9:00 am, 81.63

What fixation durations reveal about the functional visual field in search through natural scenes

Daniel Ernst1,2, Jeremy Wolfe1; 1Brigham & Women’s Hospital / Harvard Medical School, 2Bielefeld University

When searching for targets in visual search tasks, an observer fixating at one location selectively processes stimuli in some surrounding region. This region is called the functional visual field (FVF). Recent FVF research has sought to explain why observers sometimes miss a target even though they fixated nearby, placing the target clearly within the FVF. Such miss errors become especially important when the targets are, for example, tumors on x-rays or threats in baggage at airport security checkpoints. Understanding such "look but fail to see" errors would benefit from a direct measure of the FVF during search. We use the duration of the “pre-target fixation” (PTF) as such a measure. The PTF is the fixation that immediately precedes a saccade to the target. In previous experiments in which participants searched for an O among Cs, PTF durations were shorter than other fixation durations when search was easy and when the target was near the current fixation. It is hypothesized that the presence of a strong target signal inside the FVF speeds saccade planning and the release of the resulting saccade. In the present study, we tested whether similar effects can also be found in search through natural scene images, where observers do not have precise target templates but only know the target category. To that end, we analyzed the large open-source COCO-Search18 gaze dataset (Chen, Yang, Ahn, Samaras, Hoai, & Zelinsky, 2021). As in search for artificial stimuli, results showed shorter PTF durations when the target was close. As would be expected, the FVF as estimated from PTF duration was smaller when search was more difficult. The PTF duration method can be used to estimate the size of the FVF in a manner more natural and more convenient than gaze-contingent moving-window paradigms.
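
As a concrete illustration of the PTF measure, here is a minimal sketch of extracting PTF durations from a scanpath, assuming fixations arrive as temporally ordered (x, y, duration) tuples; the record format and function name are illustrative, not the COCO-Search18 schema or the authors' pipeline.

```python
import math

def ptf_record(fixations, target_box):
    """Return (distance_to_target, ptf_duration_ms) for the fixation that
    immediately precedes the first fixation landing on the target.

    fixations: temporally ordered list of (x, y, duration_ms) tuples.
    target_box: (x_min, y_min, x_max, y_max) bounding box of the target.
    Returns None if the target is never fixated or is fixated first.
    """
    x0, y0, x1, y1 = target_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0  # target center
    for i, (x, y, _) in enumerate(fixations):
        if x0 <= x <= x1 and y0 <= y <= y1:
            if i == 0:
                return None  # target fixated immediately; no PTF exists
            px, py, ptf_duration = fixations[i - 1]
            return (math.hypot(px - cx, py - cy), ptf_duration)
    return None
```

Binning such records by pre-target distance and comparing PTF durations with the durations of all other fixations would yield the kind of FVF-size estimate described above: the range of distances over which the PTF is reliably shortened falls inside the FVF.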

Acknowledgements: This research is funded by Deutsche Forschungsgemeinschaft (DFG, grant ER 962/1-1)

Talk 4, 9:15 am, 81.64

Foreground Bias: Inconsistent Target Effects Reduced When Searching Across Depth

Karolina Krzys1, Louisa Man1, Jeffrey Wammes1, Monica Castelhano1; 1Department of Psychology, Queen's University

Attentional guidance in scenes is influenced by a multitude of factors, some of which operate jointly and some independently. The semantic context, which relates incoming visual properties to prior knowledge, is among the most influential factors. Recent research has demonstrated a strong foreground bias in scene processing, supported both by eye-tracking data (more fixations to the foreground) and by visual search (faster and more accurate target detection in the foreground). However, it is unclear whether this foreground prioritization is influenced by semantic context. Here, we examined how attention was deployed during search depending on whether the target was consistent with the foreground or background. For each scene, targets were selected to be semantically consistent with either the foreground or the background (e.g., toaster in kitchen, printer in office). Targets became inconsistent when swapped between foreground and background. To account for size differences across depth, the visual angle of large objects in the background was comparable to that of small objects in the foreground. Thus, we implemented a fully crossed factorial design with Depth (foreground vs. background), Consistency (semantically consistent vs. inconsistent), and Size (small vs. large) as within-subjects factors. Participants searched for these targets and response times (RTs) were collected. Results indicated significant main effects of depth and consistency, with faster RTs for foreground and for semantically consistent targets. However, there was also a two-way interaction of depth and size, and a three-way interaction. Further analyses of the three-way interaction revealed faster RTs for consistent targets only for small foreground and large background targets. To further control for size, only targets of comparable visual angle were included in a subsequent analysis. Here, the effect of semantic consistency was significantly smaller in the foreground than in the background region. We conclude that the Foreground Bias modulates the effects of semantics, decreasing their impact in the space near the viewer.

Talk 5, 9:30 am, 81.65

Internal Attention and Precision in Working Memory are Inseparable

Fatih Serin1, Eren Günseli1; 1Sabanci University

Among multiple items held in working memory (WM), the ability to guide attention is suggested to be superior for one item. While this ability has traditionally been attributed to internal attention (Olivers et al., 2011), a recent study proposed that the item with higher memory precision guides external attention (Williams et al., 2019). However, because internal attention also boosts precision, previous studies were unable to dissociate these two theories. Here, we aimed to independently manipulate internal attention and precision on the same trial. On each trial, two colors were shown. Then, on 70% of the trials, memory for each color was sequentially tested using a three-alternative forced-choice task. On 30% of the trials, a search task was given to quantify attentional guidance. Before either of these tasks, a retro-cue indicated which color would be tested first and thus should be attended. Critically, participants were also instructed that the test for the second color would require higher precision, as it would involve lure colors more similar to the memory color, making them harder to distinguish. Thus, we encouraged participants to direct their internal attention to the cued color while storing the non-cued color with higher precision, via instructions (Experiment 1), additional feedback (Experiment 2), and reward (Experiment 3). Experiment 4 reversed the test order on some trials to control for the effects of output interference. To unconfound task difficulty from the assessment of precision, a minority of trials in each experiment tested both items with equal lure-target similarity. In all experiments, the cued item was reported more accurately, implying that it was stored with higher precision. These results indicate that dissociating internal attention and precision in WM may not be possible, revealing the difficulty of isolating whether internal attention or precision is the determining factor of attentional guidance.