Cutting across the top-down-bottom-up dichotomy in attentional capture research

Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): J. Eric T. Taylor, Brain and Mind Institute at Western University
Presenters: Nicholas Gaspelin, Matthew Hilchey, Dominique Lamy, Stefanie Becker, Andrew B. Leber


Research on attentional selection describes the various factors that determine what information is ignored and what information is processed. These factors are commonly described as either bottom-up or top-down, indicating whether stimulus properties or an observer’s goals determine the outcome of selection. Research on selection typically adheres strongly to one of these two perspectives; the field is divided. The aim of this symposium is to generate discussion and highlight new developments in the study of attentional selection that do not conform to the bifurcated approach that has characterized the field for some time (or trifurcated, with respect to recent models emphasizing the role of selection history). The research presented in this symposium does not presuppose that selection can be easily or meaningfully dichotomized. As such, the theme of the symposium is cutting across the top-down-bottom-up dichotomy in attentional selection research. To achieve this, presenters in this session either share data that cannot be easily explained within the top-down or bottom-up framework, or they propose alternative models of existing descriptions of sources of attentional control. In terms of theory, the symposium will begin with presentations that attempt to resolve the dichotomy with a new role for suppression (Gaspelin & Luck) or further complicate the dichotomy with typically bottom-up patterns of behaviour in response to unchanging stimuli (Hilchey, Taylor, & Pratt). The discussion then turns to demonstrations that the bottom-up, top-down, and selection-history sources of control variously operate on different perceptual and attentional processes (Lamy & Zivony; Becker & Martin), complicating our categorization of sources of control. Finally, the session will conclude with an argument for more thorough descriptions of sources of control (Leber & Irons). In summary, these researchers will present cutting-edge developments using converging methodologies (chronometry, EEG, and eye-tracking measures) that further our understanding of attentional selection and advance attentional capture research beyond its current dichotomy. Given the heated history of this debate and the importance of the theoretical question, we expect that this symposium will be of interest to a wide audience of researchers at VSS, especially those interested in visual attention and cognitive control.

Mechanisms Underlying Suppression of Attentional Capture by Salient Stimuli

Speaker: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis
Additional Authors: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis; Carly J. Leonard, Center for Mind and Brain at the University of California, Davis; Steven J. Luck, Center for Mind and Brain at the University of California, Davis

Researchers have long debated the nature of cognitive control in vision, with the field dominated by two theoretical camps. Stimulus-driven theories claim that visual attention is automatically captured by salient stimuli, whereas goal-driven theories argue that capture depends critically on the goals of the viewer. To resolve this debate, we have previously provided key evidence for a new hybrid model called the signal suppression hypothesis. According to this account, all salient stimuli generate an active salience signal that automatically attempts to guide visual attention. However, this signal can be actively suppressed. In the current talk, we review converging evidence for this active suppression of salient items from behavioral, eye-tracking, and electrophysiological methods. We will also discuss the cognitive mechanisms underlying suppression effects and directions for future research.

Beyond the new-event paradigm in visual attention research: Can completely static stimuli capture attention?

Speaker: Matthew Hilchey, University of Toronto
Additional Authors: Matthew D. Hilchey, University of Toronto, J. Eric T. Taylor, Brain and Mind Institute at Western University; Jay Pratt, University of Toronto

The last several decades of attention research have focused almost exclusively on paradigms that introduce new perceptual objects or salient sensory changes to the visual environment in order to determine how attention is captured to those locations. There are a handful of exceptions, and in the spirit of those studies, we asked whether a completely unchanging stimulus can attract attention, using variations of classic additional-singleton and cueing paradigms. In the additional-singleton tasks, we presented a preview array of six uniform circles. After a short delay, one circle changed in form and luminance – the target location – and all but one of the remaining circles changed in luminance, leaving the sixth location physically unchanged. The results indicated that attention was attracted toward the vicinity of the only unchanging stimulus, regardless of whether all the circles around it increased or decreased in luminance. In the cueing tasks, cueing was achieved by changing the luminance of five circles in the object preview array either 150 or 1000 ms before the onset of a target. Under certain conditions, we observed canonical patterns of facilitation and inhibition emerging from the location containing the physically unchanging cue stimulus. Taken together, the findings suggest that a completely unchanging stimulus, which bears no obvious resemblance to the target, can attract attention in certain situations.

Stimulus salience, current goals and selection history do not affect the same perceptual processes

Speaker: Dominique Lamy, Tel Aviv University
Additional Authors: Dominique Lamy, Tel Aviv University; Alon Zivony, Tel Aviv University

When exposed to a visual scene, our perceptual system performs several successive processes. During the preattentive stage, the attentional priority accruing to each location is computed. Then, attention is shifted towards the highest-priority location. Finally, the visual properties at that location are processed. Although most attention models posit that stimulus-driven and goal-directed processes combine to determine attentional priority, demonstrations of purely stimulus-driven capture are surprisingly rare. In addition, the consequences of stimulus-driven and goal-directed capture for perceptual processing have not been fully described. Specifically, it is unclear whether attention can be disengaged from a distractor before its properties have been processed. Finally, the strict dichotomy between bottom-up and top-down attentional control has been challenged based on the claim that selection history also biases attentional weights on the priority map. Our objective was to clarify which perceptual processes stimulus salience, current goals, and selection history affect. We used a feature-search spatial-cueing paradigm. We showed that (a) unlike stimulus salience and current goals, selection history does not modulate attentional priority, but only perceptual processes that follow attentional selection; (b) a salient distractor that does not match search goals may capture attention, but attention can be disengaged from this distractor’s location before its properties are fully processed; and (c) attentional capture by a distractor sharing the target feature entails that this distractor’s properties are mandatorily processed.

Which features guide visual attention, and how do they do it?

Speaker: Stefanie Becker, The University of Queensland
Additional Authors: Stefanie Becker, The University of Queensland; Aimee Martin, The University of Queensland

Previous studies purport to show that salient irrelevant items can attract attention involuntarily, against the intentions and goals of an observer. However, the corresponding evidence originates predominantly from RT and eye-movement studies, whereas EEG studies have largely failed to support salience-driven capture. In the present study, we examined the effects of salient colour distractors on search for a known colour target when the distractor was similar vs. dissimilar to the target. We used both eye tracking and EEG (in separate experiments), and also investigated participants’ awareness of the features of the irrelevant distractors. The results showed that capture by irrelevant distractors was strongly top-down modulated, with target-similar distractors attracting attention much more strongly, and being remembered better, than salient distractors. Awareness of the distractor correlated more strongly with initial capture than with attentional dwelling on the distractor after it was selected. The salient distractor enjoyed no noticeable advantage over non-salient control distractors on implicit measures, but was reported with higher overall accuracy than non-salient distractors. This raises the interesting possibility that salient items may primarily boost visual processes directly, by requiring less attention for accurate perception, rather than by summoning spatial attention.

Toward a profile of goal-directed attentional control

Speaker: Andrew B. Leber, The Ohio State University
Additional Authors: Andrew B. Leber, The Ohio State University; Jessica L. Irons, The Ohio State University

Recent criticism of the classic bottom-up/top-down dichotomy of attention has deservedly focused on the existence of experience-driven factors outside this dichotomy. However, as researchers seek a better framework characterizing all sources of control, a thorough re-evaluation of the top-down, or goal-directed, component is imperative. Studies of this component have richly documented the ways in which goals strategically modulate attentional control, but surprisingly little is known about how individuals arrive at their chosen strategies. Consider that manipulating goal-directed control commonly relies on experimenter instruction, which lacks ecological validity and may not always be complied with. To better characterize the factors governing goal-directed control, we recently created the adaptive choice visual search paradigm. Here, observers can freely choose between two targets on each trial, while we cyclically vary the relative efficacy of searching for each target; that is, on some trials it is faster to search for a red target than a blue target, while on other trials the opposite is true. Results using this paradigm have shown that choice behavior is far from optimal and appears largely determined by competing drives to maximize performance and minimize effort. Further, individual differences in performance are stable across sessions while also being malleable to experimental manipulations emphasizing one competing drive (e.g., reward, which motivates individuals to maximize performance). This research represents an initial step toward characterizing an individual profile of goal-directed control that extends beyond the classic understanding of “top-down” attention and promises to contribute to a more accurate framework of attentional control.
