Visual Search: Attention, memory
Talk Session: Sunday, May 21, 2023, 2:30 – 4:15 pm, Talk Room 2
Moderator: Jeff Schall, York University, Canada
Talk 1, 2:30 pm, 34.21
The involvement of the temporo-parietal junction in attentional reorienting and stimulus evaluation
Cheol Hwan Kim1, Jongmin Lee1, Suk Won Han1; 1Chungnam National University
It is well established that the temporo-parietal junction (TPJ), a part of the ventral attention network, plays a crucial role in attentional reorienting. Recently, however, researchers have shown that the TPJ is also associated with other cognitive processes, especially an evaluative process, which refers to inferring or computing the behavioral importance of an attended stimulus. Hence, we investigated whether a region involved in attentional reorienting is also engaged in evaluating the behavioral significance of attention-capturing stimuli. In an fMRI experiment, participants performed a modified Posner cueing task in which four different arrow cues indicating four distinct locations were presented, followed by a target stimulus. Each cue predicted the target location to a different extent: the cues predicted the target location with probabilities of 80% and 20% (high-certainty cues) or 60% and 40% (low-certainty cues). On each trial, participants responded to a target preceded by a cue stimulus. After four consecutive target responses, participants were required to infer how well each cue predicted the target location. We found that several fronto-parietal regions, including the frontal eye fields (FEF), the medial superior parietal lobule (mSPL), and the temporo-parietal junction of both hemispheres, showed increased activity when the cued location did not match the target location, evoking reorienting of attention. Notably, a dissociation across these orienting regions was found: the left and right TPJ activities were greater for high-certainty cues, whereas the FEF showed greater activity for low-certainty cues and the mSPL showed similar activity for both cue types. We suggest that TPJ activation increased under high-certainty cues because this region is associated with using sensory information to evaluate the behavioral significance of a stimulus; with the high-certainty cues, the evidence available for inferring cue predictability is abundant. By contrast, the other fronto-parietal regions appear to be sensitive to increased task demand.
Acknowledgements: This research was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (No. NRF-2022M3J6A1084843)
Talk 2, 2:45 pm, 34.22
Resolving stages of processing in visual search: Frontal eye field neurophysiology with two degrees of difficulty
Wanyi Lyu1, Thomas R. Reppert2, Jeffrey D. Schall1; 1Department of Biology, Centre for Vision Research, Vision Science to Application, York University, Toronto ON Canada, 2Department of Psychology, University of the South, Sewanee TN USA
Behavior is the outcome of covert perceptual, cognitive, and motor operations that can be described by mathematical models and are produced by brain systems composed of diverse neurons. Using the logic of selective influence, we are distinguishing stages of processing supporting visual search. Macaque monkeys searched for a color singleton among distractors. Two operations necessary for the task, search efficiency and stimulus-response mapping, were manipulated independently. Search efficiency was manipulated by varying the similarity of singleton and distractor colors. Stimulus-response mapping, or stimulus-response encoding, was manipulated by varying the elongation of stimuli that cued GO or NO-GO responses. The response times of both monkeys were modified selectively by the 2 x 2 (high vs. low efficiency) x (high vs. low encoding) manipulations. Single-unit spiking was sampled in the frontal eye field of two monkeys. Neurons representing stimulus salience were distinguished from neurons mediating saccade preparation. The times of modulation of both categories of neurons were measured across the 2 x 2 manipulations. The manipulation of search efficiency influenced both the time taken to resolve the singleton location and the delay of saccade preparation in most neurons. The manipulation of stimulus-response encoding did not influence the time taken to resolve the singleton location in most neurons but did delay saccade preparation. The convergence of performance and neural results provides evidence that distinct operations during visual search can be resolved through the experimental logic of selective influence.
Acknowledgements: Supported by the Ian Howard Family Foundation, York University Vision Science to Application, NSERC RGPIN-2022-04592, and NIH grants R01-EY08890, F32-EY028846, T32-EY007135, and P30-EY008126.
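To illustrate the logic of selective influence invoked above, here is a minimal sketch (not the authors' analysis code) that simulates response times as the sum of two serial stage durations, with the efficiency manipulation lengthening only the first stage and the encoding manipulation only the second. All distributions and parameter values are invented for illustration.

```python
# Minimal sketch (illustrative only): selective influence of two factorial
# manipulations on two serial processing stages. Stage 1 duration depends
# only on search efficiency; stage 2 duration depends only on S-R encoding.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2000

def simulate_rt(low_efficiency: bool, low_encoding: bool) -> np.ndarray:
    """Simulate RTs (ms) as the sum of two gamma-distributed stage durations."""
    stage1 = rng.gamma(shape=20, scale=8 + 4 * low_efficiency, size=n_trials)  # selection stage
    stage2 = rng.gamma(shape=15, scale=6 + 3 * low_encoding, size=n_trials)    # response-encoding stage
    return stage1 + stage2

for eff in (False, True):
    for enc in (False, True):
        rts = simulate_rt(eff, enc)
        print(f"efficiency {'low' if eff else 'high'}, encoding {'low' if enc else 'high'}: "
              f"mean RT = {rts.mean():6.1f} ms")
# Because each factor lengthens only one stage, the factor effects on mean RT
# are additive (no interaction): the behavioral signature of selective influence.
```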
Talk 3, 3:00 pm, 34.23
What do neurons in the superior colliculus encode during visual search?
Abe Leite1, Hossein Adeli1, Rakesh Nanjappa2, Robert M. McPeek2, Gregory J. Zelinsky1; 1Stony Brook University, 2SUNY College of Optometry
The selection of saccade targets is affected by both bottom-up (saliency) and top-down (e.g., target guidance) attention biases, as well as by the history of previously fixated locations (e.g., inhibitory tagging). The neural correlates of these factors have been extensively studied, but key to our work is that each factor is known to drive the firing of superior colliculus (SC) neurons, which themselves drive eye movements. We seek to characterize the contribution of these factors to the activity of SC neurons by making novel use of an information-theoretic method called partial information decomposition (PID). PID is particularly useful for characterizing whether a neuron's firing represents redundant or unique information about each factor, enabling the identification of neurons that specifically code certain attention biases. A rhesus monkey was trained to search for a specific target disk in a grid of disks that either had the same color as or a different color from the target disk. We computed each disk's saliency, its goal-relevance, and whether it was previously fixated. We recorded from 89 cells and found strong evidence that intermediate-layer SC neurons uniquely code both target guidance and inhibitory tagging, with less clear evidence for bottom-up salience. More neurons selectively coded inhibitory tagging (32) than target guidance (19) or saliency (3), and the inhibitory-tagging information was generally also coded faster. PID can also identify cells that carry multiple signals, and we identified 22 cells that encoded synergistic information from multiple factors. We conclude from this first application of PID to attention control that bottom-up salience plays a smaller role than task relevance (target guidance) and fixation-selection history in the representation of information by intermediate-layer SC neurons, with the importance of these latter two factors being so great as to warrant the dedication of neurons to specifically encode these feature and spatial attention biases.
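The abstract does not specify which PID measure was applied; as a rough, hypothetical sketch, the code below computes the classic Williams-Beer decomposition (redundant, unique, and synergistic information) for two discrete source variables and one target variable, with an invented toy distribution standing in for real spike and stimulus data.

```python
# Minimal, hypothetical sketch (not the authors' code): Williams-Beer partial
# information decomposition (PID) for two discrete sources, e.g. a "target
# guidance" bit and an "inhibitory tagging" bit, and one target variable,
# e.g. the binned spike count of an SC neuron.
import numpy as np

def mutual_info(p_xy):
    """I(X;Y) in bits for a joint distribution p_xy[x, y]."""
    px = p_xy.sum(axis=1, keepdims=True)
    py = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (px @ py)[nz])))

def specific_info(p_sa, s):
    """I(S=s; A): information source A carries about one target value s."""
    p_s = p_sa.sum(axis=1)[s]
    p_a = p_sa.sum(axis=0)
    p_a_given_s = p_sa[s] / p_s
    p_s_given_a = np.divide(p_sa[s], p_a, out=np.zeros_like(p_a), where=p_a > 0)
    nz = p_a_given_s > 0
    return float(np.sum(p_a_given_s[nz] * np.log2(p_s_given_a[nz] / p_s)))

def pid_williams_beer(p):
    """p[s, a1, a2]: joint distribution over target S and sources A1, A2."""
    p_s_a1 = p.sum(axis=2)                   # S x A1
    p_s_a2 = p.sum(axis=1)                   # S x A2
    p_s_a12 = p.reshape(p.shape[0], -1)      # S x (A1, A2) treated jointly
    p_s = p.sum(axis=(1, 2))
    redundancy = sum(p_s[s] * min(specific_info(p_s_a1, s), specific_info(p_s_a2, s))
                     for s in range(p.shape[0]))
    unique1 = mutual_info(p_s_a1) - redundancy
    unique2 = mutual_info(p_s_a2) - redundancy
    synergy = mutual_info(p_s_a12) - unique1 - unique2 - redundancy
    return {"redundant": redundancy, "unique_guidance": unique1,
            "unique_tagging": unique2, "synergy": synergy}

# Invented toy joint distribution: spike bin (0/1) x guidance (0/1) x tagging (0/1).
p = np.random.default_rng(1).random((2, 2, 2))
p /= p.sum()
print(pid_williams_beer(p))
```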
Talk 4, 3:15 pm, 34.24
Exploring the neural correlates of naturalistic hybrid search tasks
Matias Ison1, Joaquin Gonzalez1,2, Alessandra Barbosa1, Damian Care2, Anthony Ries3, Juan Kamienkowski2; 1University of Nottingham, 2University of Buenos Aires & National Scientific and Technical Research Council, Argentina, 3U.S. Army Research Laboratory
Since their introduction in the 1960s, we have learned a great deal about hybrid search (HS) tasks, which require observers to search for any item of a memorized set. One of the main signatures of HS is a robust linear relationship between RT and visual set size (as in other visual search, VS, tasks) together with a logarithmic relationship between RT and memory set size. This has been investigated in several scenarios, including word searches and categorical targets, and its behavioral correlates have been used to inform theories of VS. However, little is known about the underlying neural mechanisms of HS (and, in general, of overt-attention VS tasks). One reason is that eye movements produce artifacts in M/EEG signals that are much larger than the signals of interest. Here, we aim to begin uncovering the neurophysiological mechanisms underlying HS. We first ran an online behavioral experiment using a new-mapping naturalistic search task, in which the memory set changes on each trial. We found that the main signatures of HS (a linear increase in RT with visual set size and a logarithmic increase with memory set size) remained present in target-present trials even when contextual information was available. In a second experiment, we combined EEG and eye-tracking recordings while participants performed the same task. Using a deconvolution analysis approach, we found differences in the fixation-related potentials (brain potentials aligned to fixation onset) depending on the memory set size. In a third experiment, we extended our approach to combine MEG and eye-movement recordings. After identifying and characterizing robust markers of neural and saccadic spike artifacts in the signal, we found significant task effects in fixation-related fields and low-frequency oscillations. Altogether, our approach provides a way to examine the role of specific neurophysiological signals in eye movements and behavior.
Acknowledgements: This work was partly funded by ARO (under Cooperative Agreement Numbers W911NF2120237 and W911NF1920240) and Agencia Nacional de Promoción Científica y Tecnológica (PICT 2018-2699)
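As a worked illustration of the hybrid-search signature described above, the following sketch fits the two-term model RT = a + b * (visual set size) + c * log2(memory set size) to simulated data; the generative coefficients, set sizes, and noise level are invented, not values from the study.

```python
# Minimal sketch (simulated data, illustrative coefficients): the classic
# hybrid-search signature, RT ~ a + b * visual_set_size + c * log2(memory_set_size).
import numpy as np

rng = np.random.default_rng(2)
visual_set_sizes = [4, 8, 16]
memory_set_sizes = [1, 2, 4, 8, 16]

rows = []
for v in visual_set_sizes:
    for m in memory_set_sizes:
        # invented generative parameters: 500 ms intercept, 30 ms per visual item,
        # 120 ms per doubling of the memory set, plus Gaussian trial noise
        rt = 500 + 30 * v + 120 * np.log2(m) + rng.normal(0, 40, size=50)
        rows.append((v, m, rt.mean()))

vss, mss, mean_rt = map(np.array, zip(*rows))
design = np.column_stack([np.ones_like(mean_rt), vss, np.log2(mss)])
coef, *_ = np.linalg.lstsq(design, mean_rt, rcond=None)
print(f"intercept = {coef[0]:.0f} ms, "
      f"visual slope = {coef[1]:.1f} ms/item, "
      f"memory slope = {coef[2]:.1f} ms per log2(item)")
```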
Talk 5, 3:30 pm, 34.25
Explaining the guidance of search for real-world objects using quantitative similarity
Brett Bahle1, Steven J. Luck1; 1University of California, Davis
Visual attention is guided during search toward stimuli that match the features of the target. These features, often termed the “attentional set”, are thought to be represented in the brain as a working memory representation, particularly when searching for a frequently changing target. But what features are represented in the attentional set, especially when the target is a complex, real-world object? Here, we used both computational approaches (such as ConceptNet) and crowd-sourced data (Hebart et al., 2020) to quantitatively model multiple levels of representational abstraction for search targets (and, correspondingly, search distractors). Specifically, we propose that the objects of search (both the known target and all possible distractor objects) can each be defined as a vector of feature values at different levels of abstraction, from low-level, image-based features to high-level, semantic features. Moreover, the extent to which an item in a search display will attract attention scales directly with its quantitative similarity to the target's features. Across different search tasks, we found evidence that the level of abstraction of a given representational space selectively explained variance in search behavior. Specifically, both pre-saccadic mechanisms (as indexed by the probability of item fixation) and post-saccadic mechanisms (as indexed by item dwell times) were explained by the quantitative similarity between the search target and a given item in the search display. Our approach provides a new quantitative model for predicting attentional allocation during visual search.
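To make the proposed similarity account concrete, here is a minimal sketch that scores each display item by the cosine similarity of a hypothetical feature vector to the target's vector and converts those scores into predicted fixation probabilities with a softmax. The vectors, dimensionality, and temperature parameter are illustrative assumptions, not the authors' model.

```python
# Minimal sketch (hypothetical feature vectors, not the authors' model):
# predict which display item attracts attention from its quantitative
# similarity to the search target at one level of abstraction.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def fixation_probabilities(target_vec, item_vecs, temperature=0.1):
    """Softmax over target-item similarities -> predicted P(first fixation on item)."""
    sims = np.array([cosine(target_vec, item) for item in item_vecs])
    z = (sims - sims.max()) / temperature
    expz = np.exp(z)
    return expz / expz.sum()

rng = np.random.default_rng(3)
embedding_dim = 300                      # e.g. a semantic space such as ConceptNet embeddings
target = rng.normal(size=embedding_dim)
distractors = rng.normal(size=(5, embedding_dim))
lure = target + 0.3 * rng.normal(size=embedding_dim)   # target-similar distractor
items = np.vstack([distractors, lure])

print(np.round(fixation_probabilities(target, items), 3))
# The target-similar lure receives the highest predicted fixation probability,
# mirroring the claim that attentional capture scales with target similarity.
```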
Talk 6, 3:45 pm, 34.26
Attention to object categories: Selection history determines the breadth of attentional tuning during real-world object search
Douglas A. Addleman1, Reshma Rajasingh1, Viola S. Stoermer1; 1Dartmouth College
People are remarkably good at learning the statistics of their visual environments. For instance, people rapidly learn to pay attention to locations or simple features that are frequently associated with targets during visual search. Here, we tested how such statistical regularities influence attentional selection during real-world object search. Participants searched for a vertically oriented object among seven distractor objects tilted 45 degrees to the left or right. In a training phase, we induced attentional biases toward objects from one category (e.g., cars) by, unbeknownst to participants, making those objects more likely to be targets. In a subsequent testing phase, we examined whether the learned biases persisted when every object became equally likely to be a target. In Experiment 1 (N=44), participants acquired attentional biases for a single real-world object that was frequently their target in the training phase (p < .00001, dz = 1.10), and this bias persisted into the neutral testing phase (p < .001, dz = 0.86). In Experiment 2 (N=32), we introduced new exemplars from the learned category during the testing phase to examine whether participants would generalize learning from one object to its entire category. Results revealed no transfer to the new objects (p = .750, dz = 0.06, BF = 0.19 in favor of no effect), despite robust learning of the exemplar object. However, as soon as participants learned to prioritize at least two exemplars from one category (Experiment 3, N=72), we found clear transfer during testing to novel objects from the same category (p < .00001, dz = 0.57). These results indicate that people can adaptively tune their attention to specific objects or to entire object categories based on recent experience. Together, these studies reveal that the breadth of attentional tuning in real-world search can be flexibly adjusted to optimally support current task demands.
Acknowledgements: This research was supported in part by NSF Grant BCS-1850738 to VSS and by a Dartmouth Leave Term grant to RR.
Talk 7, 4:00 pm, 34.27
Searching for a target in a natural scene does not allow for robust recall of scene or target details that are irrelevant to response expectations
Ryan E O'Donnell1, Nicolás Cárdenas-Miller1, Joyce Tam1, Dheeraj Varghese2, Brad Wyble1; 1Pennsylvania State University, 2Vrije Universiteit Amsterdam
Humans can easily form memories of naturalistic scenes with rich visual detail and recreate them in drawings, but how robust are incidental memories of scenes when participants only expect to search for an object within them? Tests of incidental working memory for simple search stimuli (e.g., letters, colored shapes) show that even an attended target's identity cannot be accurately retrieved when participants only expect to report its location, a phenomenon termed attribute amnesia. We predicted that this form of expectation-based memory selectivity would also be present for both the whole scene and the search target when participants are asked to search for an object in a scene and to report only its location. In our experiments, participants located wall art or pillows within novel indoor scenes for 43 trials before being unexpectedly prompted to draw the entire scene (Experiment 1) or the wall art that had just been the search target (Experiment 2). Naïve raters matched participants' drawings to the actual scene or object. In both experiments, the failure to draw accurately on the surprise trial was dramatic: most drawings lacked sufficient visual detail to be recognizable and were not in the same basic category as the scene or object, and some participants even submitted empty canvases. On the next trial, after developing an expectation to draw detailed scenes or objects, the same participants produced highly recognizable scenes and objects. Experiment 3 further showed that these memory failures were not merely attributable to surprise-related interference. Thus, despite being capable of reconstructing visual details from memory, participants may not encode response-irrelevant details of scenes, or even of attended objects themselves, in the first place. Rather, participants primarily encode information relevant to current goals and expectations. We theorize that this is a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.
Acknowledgements: The authors would like to thank Wilma Bainbridge for creating her incredibly helpful tutorial on implementing drawing studies online and Taryn Green for her help in data analysis. This research was funded by NSF Grant 1734220.