VSS, May 13-18

Visual Search

Talk Session: Saturday, May 14, 2022, 2:30 – 4:15 pm EDT, Talk Room 2
Moderator: Anna Kosovicheva, U. of Toronto Mississauga

Talk 1, 2:30 pm, 24.21

The spatial and temporal characteristics of the priming of location effect: Revisiting Maljkovic and Nakayama (1996)

Daniel Toledano1, Dominique Lamy1; 1Tel Aviv University

Stimulus saliency and search goals determine which location receives our attention first. However, our past search history also has a particularly striking impact on search performance: in their seminal work, Maljkovic and Nakayama (1996) discovered that search is reliably faster if the current target happens to appear at a previous target’s location, and slower if it appears at a previous distractor’s location, relative to empty space. Furthermore, these effects, referred to as location priming, were found to decay when the spatial distance between current and past targets, as well as the temporal interval between current and previous trials, increased. Although highly influential, these experiments relied on the data of only three participants and follow-up empirical research has been relatively scarce. In the current study, we re-analyzed a recently published large-scale dataset of over 210,000 trials from 8 experiments (Adam et al., 2021), where participants searched for a shape target among nontarget shapes. Several variables were manipulated, such as display size, search strategy, salient color-distractor presence, target-color repetition and inter-trial time. Beyond replicating the basic phenomenon, our analyses generated several novel findings. First, we found location priming to be far longer-lasting than previously thought (12 trials back instead of just 5-8). Second, we disentangled the influence of passing time from the influence of intervening trials to account for the effect’s decrease over time. Third, we found task demands to strongly modulate the temporal and spatial characteristics of location priming. Finally, we show that a hitherto overlooked confound accounts for the findings attributed to inhibition of previous distractors’ locations, suggesting that target location enhancement alone underlies location priming. Taken together, these findings advance our understanding of the complex spatial and temporal dynamics of location priming.

Talk 2, 2:45 pm, 24.22

Target-rate effects in continuous visual search

Louis K H Chan1, Winnie W L Chan2; 1Hong Kong Baptist University, 2Hong Kong Shue Yan University

Vigilance and visual search tasks are two important paradigms in visual attention research. Both involve detection of a potential target. While vigilance tasks usually involve continuous monitoring of a single object, visual search tasks usually involve multiple objects and discrete trials. In real life, we often search for potential targets among multiple objects continuously; infrared body-temperature surveillance and lifeguarding are examples. In this study, we asked whether previous findings on visual search generalize to a continuous variant. Specifically, we wanted to know whether the target prevalence effect – that targets are usually missed when they are rare – generalizes to continuous visual search. We designed a task that involves detection of a target feature among objects whose features change continuously. Targets could occur rarely or frequently within each several-minute-long session, which was not separated into trials. In Experiment 1, participants monitored for the occurrence of a specific color among changing colors. Rare targets were associated with slower detection RTs and a higher miss rate. In Experiment 2, participants monitored for both a color and an orientation, and their relative frequency was manipulated. For both features, miss rates were higher when targets were rarer. In Experiment 3, set-size effects were measured and showed a relative frequency effect: the more frequent targets were associated with higher search efficiency. Experiment 4 used a flash-detection dual task to rule out a vigilance account, suggesting that target-rate effects in continuous visual search are mostly decisional. Taken together, common behavioral effects in visual search, including target-rate effects and set-size effects, seem to be replicable in a continuous variant. Further research is needed to detail the similarities and differences in the attentional and decisional processes between standard and continuous visual search tasks.
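
As context for the set-size results, "search efficiency" is conventionally quantified as the slope of the response-time by set-size function (ms per item), with shallower slopes indicating more efficient search. The following is a minimal Python sketch of that computation; the RT values are hypothetical and are not taken from this study.

import numpy as np

# Hypothetical mean RTs (ms) at three set sizes for frequent vs. rare targets.
set_sizes = np.array([4, 8, 12])
rt_frequent = np.array([620, 680, 740])   # shallower slope: more efficient search
rt_rare = np.array([650, 790, 930])       # steeper slope: less efficient search

# Fit a line to each RT-by-set-size function; the slope is the efficiency measure.
slope_frequent, _ = np.polyfit(set_sizes, rt_frequent, 1)
slope_rare, _ = np.polyfit(set_sizes, rt_rare, 1)
print(f"frequent: {slope_frequent:.1f} ms/item; rare: {slope_rare:.1f} ms/item")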

Acknowledgements: This research was supported by a grant from the Hong Kong Research Grants Council (UGC/FDS23/H03/19)

Talk 3, 3:00 pm, 24.23

The Moose Came Out of Nowhere: Low Prevalence Effects in Road Hazard Detection

Anna Kosovicheva1, Jeremy M. Wolfe2,3, Benjamin Wolfe1; 1University of Toronto Mississauga, 2Brigham & Women's Hospital, 3Harvard Medical School

If all the drivers in your city were bad, would you be better at detecting dangerous events on the road? What if they were all good? In visual search, a low prevalence effect (LPE) has been found, in which observers frequently miss rare targets. These studies have used static stimuli, visible until response. In contrast, road hazard detection often affords only brief glimpses of complex, dynamic scenes before decisions must be made. We tested the LPE with a novel road hazard detection task. Observers viewed brief (333 ms) video clips of road scenes recorded from dashboard cameras. These preserve the visual complexity of natural driving while allowing control over event prevalence. In five online experiments (n=16 each), observers viewed our road scene clips and reported whether or not they saw a hazardous event on each trial. Using hazard prevalences of 50% and 4% in separate sessions, we replicated the LPE results from visual search: miss error rates were roughly twice as high in the low-prevalence condition as in the high-prevalence condition (40% vs. 18%, p < .001). This difference was attributable to a more conservative criterion in the low-prevalence condition, while sensitivity (d') was similar between conditions. Furthermore, miss rates increased as hazards became increasingly rare, down to 1% prevalence. Additional experiments showed that these results could not be explained by simple motor errors, since allowing observers to correct their responses did not affect the LPE. Finally, the effect persisted even when observers were explicitly pre-briefed about the LPE, indicating that simple cognitive interventions may not be effective at eliminating it. Together, our results demonstrate that the LPE generalizes to complex perceptual decisions in dynamic natural driving scenes, where observers must monitor and respond to rare hazards.
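
For readers less familiar with the signal detection measures referenced above, the following is a minimal Python sketch (not the authors' analysis code) of how sensitivity (d') and criterion (c) are conventionally computed from hit and false-alarm rates under the equal-variance Gaussian model; the rates below are hypothetical, chosen only to illustrate a criterion shift with roughly constant d'.

from scipy.stats import norm

def sdt_measures(hit_rate, false_alarm_rate):
    # Equal-variance SDT: d' = z(H) - z(FA); c = -(z(H) + z(FA)) / 2.
    z_h = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

# Hypothetical hit and false-alarm rates: similar d' but a more conservative
# criterion (larger c) when hazards are rare, the pattern described above.
print(sdt_measures(hit_rate=0.82, false_alarm_rate=0.10))  # 50% prevalence
print(sdt_measures(hit_rate=0.60, false_alarm_rate=0.03))  # 4% prevalence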

Acknowledgements: This work was supported by an NSERC Discovery Grant (RGPIN-2021-02730 to BW)

Talk 4, 3:15 pm, 24.24

Feature-temporal predictions can guide attention during visual search in dynamic scenes

Gwenllian C. Williams1, Sage E. P. Boettcher1, Nir Shalev1, Anna C. Nobre1; 1Department of Experimental Psychology, University of Oxford

Our world is dynamic, with various items coming in and out of view at different times. An efficient cognitive system should be able to allocate attention to relevant spatial locations, features, and moments in time. For example, when searching for a taxi you have ordered, you may hold expectations about the taxi's colour, likely location, and time of arrival. Recent work using a novel dynamic visual search task has shown that spatiotemporal regularities can be used to guide attention towards targets in space and time. However, it remains unclear whether feature-temporal regularities can also be used to improve visual search performance. That is, when we hold no spatial expectations (i.e., we cannot predict the direction the taxi will arrive from), can we guide our attention based on featural and temporal expectations? We investigated this using an online dynamic visual search task that required participants to find and click on multiple targets in a search display. Targets and distractors faded in and out of view at different times during trials. The task contained feature-temporal regularities in that half of the targets in each trial always appeared in the same colour and at the same time. The remaining half of the targets appeared in an unpredictable colour and at an unpredictable time during trials, making them feature-temporally unpredictable. Participants located targets significantly more often, and significantly faster, when they were feature-temporally predictable than when they were feature-temporally unpredictable. From this finding, we concluded that participants were able to use the feature-temporal regularities in the dynamic visual search task as a basis for attentional guidance. Further, no participants reported noticing the feature-temporal regularities during the task, suggesting this attentional guidance may be implicit.

Acknowledgements: NIHR Oxford Health Biomedical Research Centre; Wellcome Trust Senior Investigator Award to A.C.N. (104571/Z/14/Z); James S. McDonnell Foundation Understanding Human Cognition (number 220020448); The Wellcome Centre for Integrative Neuroimaging is supported by the Wellcome Trust (203139/Z/16/Z).

Talk 5, 3:30 pm, 24.25

Functionally Related Objects Capture Attention and Improve Search Guidance

Steven Ford1, Gregory Zelinsky2, Joseph Schmidt1; 1University of Central Florida, 2Stony Brook University

Consistency between objects and scene locations improves search performance (Draschkow & Vo, 2017). Functionally related objects (e.g., a hammer above a nail) represent a form of object consistency that may lead to perceptual grouping and attentional capture (Green & Hummel, 2004; 2006). We tested this hypothesis using a search task with eye-tracking and event-related potential (ERP) measures. Participants were cued with two objects, which were either functionally related or unrelated. After a brief retention interval in which the ERPs were assessed, participants searched for one of the two objects among three unrelated distractors. If related items capture attention more than unrelated items, we should see a larger cue-related N2pc, consistent with a stronger spatial attention shift (Luck, 2012). Additionally, if functionally related items are perceptually grouped, we should see an increased N2pc (Mazza & Caramazza, 2012; Marini & Marzi, 2016) and reduced contralateral delay activity (CDA), indicating a lower visual working memory (VWM) load, consistent with perceptual grouping (Diaz et al., 2021). Finally, we predicted improvements in search guidance, indexed by a greater percentage of initial search saccades directed at the target. Related objects produced a larger cue-related N2pc, consistent with perceptual grouping and indicating a stronger shift of spatial attention (i.e., greater attentional capture) relative to unrelated objects. However, there were no significant differences in CDA, suggesting that both items in related and unrelated pairs are similarly represented in VWM. Additionally, related objects produced stronger search guidance and improved search performance across several measures. Our findings are mostly consistent with prior reports suggesting that related objects are perceptually grouped and capture attention (Green & Hummel, 2004; 2006). These results suggest that while multiple-target search tends to be more difficult than single-target search (Cain et al., 2011; Menneer et al., 2007), the unique coding of functionally related objects improves performance.
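
For context, the N2pc and the CDA are both typically measured as contralateral-minus-ipsilateral difference waves at posterior electrodes, defined relative to the side of the cued or memorized item, with the N2pc read out from an early window (roughly 200-300 ms) and the CDA from a later sustained window. The Python sketch below illustrates only that generic computation on placeholder arrays; it is not the authors' pipeline, and the electrode labels and windows mentioned are assumptions.

import numpy as np

# Placeholder trial-averaged waveforms (microvolts), one value per time point,
# for a left and a right posterior electrode (e.g., PO7/PO8), separately for
# left-side and right-side cues. Real averaged ERP data would go here.
n_timepoints = 500
left_elec_cue_right = np.zeros(n_timepoints)   # contralateral to right-side cues
right_elec_cue_right = np.zeros(n_timepoints)  # ipsilateral to right-side cues
left_elec_cue_left = np.zeros(n_timepoints)    # ipsilateral to left-side cues
right_elec_cue_left = np.zeros(n_timepoints)   # contralateral to left-side cues

# Average contralateral and ipsilateral activity across cue sides, then subtract.
contra = (left_elec_cue_right + right_elec_cue_left) / 2
ipsi = (right_elec_cue_right + left_elec_cue_left) / 2
difference_wave = contra - ipsi  # N2pc: early window; CDA: later sustained window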

Talk 6, 3:45 pm, 24.26

Is There One “Beam” of Attention for Searching in Space and Time?

Raymond Klein1, Brett Feltmate2, Yoko Ishigami3, Nicholas Murray4; 1Dalhousie University, 2Department of Psychology and Neuroscience

In their pure forms, searching in space entails the allocation of attention to items distributed in space and presented at the same time, whereas searching in time entails the allocation of attention to items distributed in time and presented at the same location. In two quite independent projects, we explored whether the metaphorical "beams" operating in the domains of space and time might be independent or the same. In one project we used a differential approach. Early research using spatial (Snyder, 1972) and temporal search tasks (McLean, Broadbent & Broadbent, 1983) reported a substantial degree of sloppiness (binding errors). We had participants perform both of these tasks to see whether the frequency of these binding errors in the domains of space and time might be correlated. We replicated the early findings of binding errors in both space and time, but their frequency of occurrence in the two domains was not significantly correlated. In the other project, we used an experimental approach. Here we explored whether the principles described by Duncan & Humphreys (1989; hereafter D&H) for searching in space would apply similarly to searching in time. Not surprisingly, performance in spatial search conformed to the predictions of D&H's principles. Importantly, temporal search performance followed the same pattern, suggesting that D&H's principles are indeed generalizable to temporal search. We will speculate on why these two approaches seem to yield different answers to the question posed in our title.

Acknowledgements: Natural Sciences and Engineering Research Council of Canada Discovery Grant

Talk 7, 4:00 pm, 24.27

Goal-Directed Control of Visual Attention and the Minimization of Effort

Sangji Lee1, Brian Anderson2; 1Texas A&M University, 2Texas A&M University

People utilize goal-directed attentional control to selectively and strategically prioritize information in the service of accomplishing a task. Prior studies have focused on factors that modulate the control of attention in a prescribed environment (i.e., an instructed goal or strategy and a fixed target feature). However, in the real world, people have to decide what they will search for and what strategy they will use to find it. What are the principles that govern the control of attention in these sorts of situations? We hypothesized that goal-directed attention serves to minimize the exertion of effort in accomplishing task goals. To test this hypothesis, we conducted a pair of experiments in which participants performed a modified version of the Adaptive Choice Visual Search task. There were two targets on every trial, one red and the other blue, and participants only needed to find one. We manipulated the ratio of red to blue non-targets (attentional effort; three levels for each color) and added a physical effort requirement after every trial by requiring participants to apply force to a hand dynamometer. Reporting a target in one color required more force to progress to the next trial than reporting the other. The minimization of effort would be reflected in searching for the target in the less numerous color and in the color associated with less physical effort, and in balancing these two priorities when the easier-to-find target is also associated with greater physical demand. In Experiment 1, participants were provided no information about the relationship between color and effort, whereas in Experiment 2, participants were fully informed of these relationships. Across both experiments, we show that physical and attentional effort demands jointly determine how participants choose to conduct a visual search, consistent with the principle of effort minimization in the goal-directed control of attention.