VSS, May 13-18

Attention: Features, objects, endogenous

Talk Session: Tuesday, May 17, 2022, 10:45 am – 12:30 pm EDT, Talk Room 1
Moderator: Martin Rolfs, Humboldt-University



Talk 1, 10:45 am, 52.11

Decoding Visual Feature Versus Visual Spatial Attention Control with Deep Neural Networks

Yun Liang1, Sreenivasan Meyyappan2, Mingzhou Ding1; 1J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, 2Center for Mind and Brain, University of California, Davis, CA

Multivoxel pattern analysis (MVPA) examines differences in multivoxel patterns evoked by different cognitive conditions using machine learning methods such as logistic regression and support vector machines. These methods are linear and may fail to detect nonlinear relationships in the data. We applied deep neural networks to address this problem. fMRI data were recorded from humans (n=20) performing a cued visual spatial/feature attention task in which an auditory cue instructed the subject to attend either the left or right visual field (spatial trials) or either the red or green color (feature trials). Following a random delay, two rectangular stimuli appeared, one in each visual field, and subjects reported the orientation of the rectangle at the attended location (spatial trials) or with the attended color (feature trials). A deep neural network (DNN) was trained to take cue-evoked fMRI data as input features to predict trial labels. For feature (spatial) attention control, feature (spatial) trial data from 19 subjects were used to train a DNN model, which was then tested on the remaining subject. This process was repeated 20 times and the 20 decoding accuracies were averaged. Using the whole brain, the accuracy for decoding feature attention control (cue red vs. cue green) was 59% and for spatial attention control (cue left vs. cue right) was 61%, both significantly above the 50% chance level. Heatmaps derived from the DNN models revealed regions that contribute to both feature and spatial attention control, as well as regions that contribute mainly to one or the other. In sum, DNNs can yield insights into attention control that complement other methods and provide a new approach for uncovering more complex relations between cognitive conditions and neural activity.
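The leave-one-subject-out cross-validation scheme described in the abstract can be sketched as follows. This is an illustrative sketch only: it substitutes a simple nearest-centroid classifier for the authors' DNN, and the function and variable names are hypothetical.

```python
import numpy as np

def loso_decode(features, labels, subjects):
    """Leave-one-subject-out decoding accuracy.

    features: (n_trials, n_voxels) cue-evoked activity patterns
    labels:   (n_trials,) binary trial labels (e.g., cue left vs. cue right)
    subjects: (n_trials,) subject ID for each trial

    For each subject, a classifier is fit on all other subjects' trials
    and tested on that subject; accuracies are averaged across folds.
    A nearest-centroid rule stands in for the DNN here.
    """
    accuracies = []
    for s in np.unique(subjects):
        train, test = subjects != s, subjects == s
        # Class centroids estimated from the training subjects only
        c0 = features[train & (labels == 0)].mean(axis=0)
        c1 = features[train & (labels == 1)].mean(axis=0)
        # Assign each held-out trial to the nearer centroid
        d0 = np.linalg.norm(features[test] - c0, axis=1)
        d1 = np.linalg.norm(features[test] - c1, axis=1)
        predictions = (d1 < d0).astype(int)
        accuracies.append((predictions == labels[test]).mean())
    return float(np.mean(accuracies))
```

Averaging held-out accuracy across subjects, as here, is what allows the reported group-level accuracies (59% and 61%) to be compared against the 50% chance level.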

Talk 2, 11:00 am, 52.12

Effects of Spatial Attention on Spatial and Temporal Acuity Explained by Parvo-Magno Interactions: A Computational Account

Boris Penaloza1,2, Haluk Ogmen1; 1Department of Electrical & Computer Engineering, University of Denver, 2Universidad Tecnológica de Panamá, Panamá

The moment-to-moment amount of visual information received by our visual system is enormous. Nevertheless, not all visual information serves our cognitive, emotional, social, and ultimately survival goals. The brain therefore employs attention to select relevant information and optimize its limited resources. Specifically, covert spatial attention—attending to a particular location in the visual field without eye movements—improves spatial resolution and paradoxically deteriorates temporal resolution. Even though the role of spatial attention in perception is unquestioned, the neural correlates underlying these attentional effects still remain elusive. In this work, we tested the predictions of a mechanistic model that explains these phenomena through interactions between channels with different spatiotemporal sensitivities—viz., the magnocellular (transient) and parvocellular (sustained) channels. More specifically, our model postulates that spatial attention enhances activity in the parvocellular pathway, thereby improving performance in spatial resolution tasks. The attentional enhancement of parvocellular activity decreases magnocellular activity due to parvo-magno inhibitory interactions in the model; as a result, spatial attention hampers temporal resolution. We compared our model’s predictions to psychophysical data testing the effects of spatial attention on spatial acuity tasks (Yeshurun & Carrasco, 1999) and temporal acuity tasks (Yeshurun & Levy, 2003). The results show that our model accounts for both attentional effects, i.e., the improved performance in spatial resolution tasks (R2=0.98) and the impaired performance in temporal resolution tasks (R2=0.95). This study provides computational evidence in support of parvo-magno inhibitory interactions as potential neural mechanisms underlying the effects of spatial attention on spatiotemporal perception.
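The R2 values reported above are coefficients of determination comparing model predictions to psychophysical data. A minimal sketch of that goodness-of-fit computation (the function name and arrays are illustrative, not the authors' code):

```python
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination: the fraction of variance in the
    observed psychophysical data captured by the model's predictions.

    R^2 = 1 - SS_residual / SS_total
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

An R2 of 0.98 thus means the model's predicted accuracies account for 98% of the variance in the corresponding behavioral measurements.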

Talk 3, 11:15 am, 52.13

Post-inhibition deficits are shaped by task-irrelevant feature similarity

Samoni Nag1, Patrick Cox1, Dwight Kravitz1, Stephen Mitroff1; 1The George Washington University

Response inhibition—the suppression of prepotent motor responses—typically triggers a cascade of behavioral changes in decision making and motor planning that reduces the efficiency of subsequent performance. These effects have traditionally been attributed to high-level cognitive processes localized to the frontoparietal cortices. However, recent evidence of bidirectional interference between top-down processing and perception (Teng & Kravitz, 2019) suggests that post-inhibition interference might also arise from perceptual processing and extend even to task-irrelevant stimulus features, arguing against traditional views that such interference originates in the frontoparietal network. To test this prediction, a simple go/no-go paradigm with colored and oriented Gabor patches was presented to participants recruited via Amazon Mechanical Turk. On each trial, participants responded to either the color (Experiment 1) or the orientation (Experiment 2) of a Gabor patch while the other, task-irrelevant feature was orthogonal to task goals. Critically, the task-irrelevant feature (orientation or color) on the go trial (probe) immediately following the single no-go trial was manipulated between participants such that it was either 0º or 72º away from that of the preceding inhibition event. Accuracy was lower and response time slower on the probe trial when its task-irrelevant feature matched (0º) that of the preceding inhibition trial than when it differed (72º). Additional experiments replicated this finding and extended it to intermediate task-irrelevant differences (18º, 36º, 54º), allowing direct comparisons with the known tuning properties of the task-irrelevant feature (orientation or color) in early perceptual areas. Taken together, these findings suggest that task-irrelevant stimulus features shape post-inhibition performance deficits. Importantly, these results support an alternative theoretical model in which there is extensive interplay between response inhibition and perceptual processing.

Acknowledgements: This research was funded by US Army Research Office grant #W911NF-16-1-0274 and US Army Research Laboratory Cooperative Agreements #W911NF-19-2-0260 & #W911NF-21-2-0179.

Talk 4, 11:30 am, 52.14

Eye movement characteristics reflect object-based attention

Olga Shurygina1,2, Martin Rolfs1,2; 1Humboldt-Universität zu Berlin, 2Exzellenzcluster Science of Intelligence, Technische Universität Berlin

Neurophysiological and psychophysical studies established that objects are a unit of attentional selection at early stages of visual processing. Behavioral evidence for object-based selection comes from attentional cueing studies, in which cueing a specific object increases an observer’s ability to detect and rapidly report a probe presented on the same object as compared to a different object (i.e., a same-object advantage). Here, we tested whether object-based attention is reflected in the speed and accuracy of saccadic eye movements executed within or across objects. We presented two C-shaped objects, located on an imaginary circle at an eccentricity of 3, 5, or 7 degrees of visual angle from the initial fixation point. We made the two shapes perceptually distinct using different textures and colored outlines. The shapes also appeared at different time points and moved along pseudo-random trajectories from different edges of the screen to their final positions (randomly oriented but opposite each other). Four saccade targets were located equidistantly at the ends of the two objects. A central cue pointed to one of the targets, instructing observers to make a sequence of two eye movements: a first saccade to the cued target, and a second saccade to the next target in the clockwise or counterclockwise direction (the direction remained constant within, but was balanced across, individuals). We varied cued locations and object orientations such that observers made approximately the same number of second saccades within the same object and to the different object. Comparing second saccade characteristics in the same- vs. different-object conditions, we found that — across all object eccentricities — second saccades within the same object had shorter latencies and more accurate landing positions than second saccades to a target located on a different object. These findings suggest that object-based attention contributes to saccade preparation and execution.
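A comparison of second-saccade latencies between the same-object and different-object conditions could be tested non-parametrically as sketched below. This is an illustrative analysis sketch, not the authors' pipeline; the latency values and function name are assumptions.

```python
import numpy as np

def permutation_test(same_obj, diff_obj, n_perm=5000, seed=0):
    """Two-sided permutation test on the mean latency difference (ms)
    between different-object and same-object second saccades.

    Returns (observed difference, p-value). Under the null hypothesis,
    condition labels are exchangeable, so we repeatedly shuffle the
    pooled latencies and count how often a shuffled difference is at
    least as extreme as the observed one.
    """
    rng = np.random.default_rng(seed)
    same_obj = np.asarray(same_obj, dtype=float)
    diff_obj = np.asarray(diff_obj, dtype=float)
    observed = diff_obj.mean() - same_obj.mean()
    pooled = np.concatenate([same_obj, diff_obj])
    n = len(same_obj)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = pooled[n:].mean() - pooled[:n].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return observed, count / n_perm
```

A positive observed difference with a small p-value would correspond to the reported same-object latency advantage.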

Acknowledgements: This project has received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 ‘Science of Intelligence’ – project no. 390523135. M.R. was supported by the Heisenberg Programme of the DFG (grants RO 3579/8-1 and RO 3579/12-1).

Talk 5, 11:45 am, 52.15

Attending to future objects

Chenxiao Guan1, Chaz Firestone1; 1Johns Hopkins University

In addition to attending to continuous regions of space, we also attend to discrete visual objects. A foundational result in this domain is that attention "spreads" within an object: If we attend to one portion of an object, we can't help but attend to the rest of it, as revealed by facilitated probe detection for other within-object locations. But what can be the objects of object-based attention? In particular, is this process limited only to the here and now, or can we also attend to objects that don't yet exist but merely *could* exist at some future time? Here, we explore how attention spreads not only to other locations within a single, present object, but also to disconnected object-parts that could combine with a presently attended object to create a new object. We designed stimulus triplets consisting of one puzzle-piece-like central shape and two nearby puzzle-piece-like shapes, one of which could neatly combine with the central shape and one of which could not (as determined by the presence of certain protrusions and indentations). Shortly after stimulus onset, two letters appeared, one on the central shape and another on one of the two smaller parts: either the "combinable" piece or the "non-combinable" piece. Subjects simply decided whether the two letters were the same or different. We found that subjects were faster to evaluate letter-similarity when the two letters appeared on shapes that could combine into one, rather than on two shapes that could not — even without any differences in accuracy (ruling out a speed-accuracy tradeoff). Follow-up experiments ruled out mere similarity as a driver of this effect, isolating combinability per se. We suggest that attention can select not only actual objects that are present now, but also "possible" objects that may be present in the future.

Acknowledgements: This work was supported by NSF BCS 2021053 awarded to C.F.

Talk 6, 12:00 pm, 52.16

Exogenous attention effects persist into Visual Working Memory

Luke Huszar1, Tair Vizel2, Marisa Carrasco1; 1New York University, 2Tel Aviv University

Rationale. The sensory recruitment hypothesis posits that Visual Working Memory (VWM) maintenance depends on the same cortical machinery responsible for online perception, implying similarity between perceptual and VWM representations. Characterizing similarities and differences in these representations is critical for understanding how the brain reformats perceptions into durable working memories. Goal. Here, we investigated whether transient modulations of perception via exogenous attention are preserved after VWM encoding. A stimulus viewed with exogenous attention appears higher in contrast than it is, but this change in perceived contrast disappears with the decay of exogenous attention (~500ms after cue onset). If these transient dynamics continue after exogenously attended perceptual representations are encoded into VWM, then the boost to apparent contrast should fade over a delay period. Alternatively, if the encoding process freezes ongoing modulations, maintaining a snapshot of the percept at the time of encoding, then the boost to apparent contrast should persist. Method. Observers performed a delayed contrast comparison task. On each trial, a Gabor stimulus was briefly presented and, after a variable delay period (500 or 2000ms), a comparison Gabor was presented. Participants reported which Gabor was higher in contrast. Exogenous attention was manipulated through cues that appeared above the location of the first, second, or both stimuli 100ms before their onset. Results. When the first stimulus was viewed during exogenous attention, the attentional boost to perceived contrast persisted across both delay periods to the same extent. In other words, VWM consistently sustained the attentional effect on the representation present at the time of encoding. Conclusion. This finding reveals that VWM representations differ from percepts in terms of VWM’s robustness against transient changes: VWM maintains a snapshot of the percept as it was at encoding time rather than preserving the transient modulations characteristic of exogenous attention on perception.

Acknowledgements: NIH training grant 5T32EY007136-28 (Movshon); NIH R01-EY019693 (Carrasco)

Talk 7, 12:15 pm, 52.17

Roles of goal-directed performance optimization vs. stimulus-driven salience in determining attentional control strategy

Walden Y. Li1, Andrew B. Leber1; 1The Ohio State University

Attentional control strategy accounts for significant variation in individual visual search performance. Research has shown that an individual’s strategy optimality is stable within visual search and foraging tasks (Clarke et al., 2020) and generalizes across similar visual search tasks (Li et al., 2021). However, in some paradigms designed to investigate strategy, stimulus salience—rather than individuals’ drive to optimize performance—might explain behavior. Here, we pitted stimulus salience against strategy optimization via a modification of the Adaptive Choice Visual Search (ACVS; Irons & Leber, 2018) paradigm. In Experiment 1, Control Group participants could choose to search for either a red or a blue target containing a “5”—each of which was present on every trial. Participants moved the mouse to search, revealing digits by hovering over each object, one at a time. One color subset was always less numerous than the other; as a result, it was more optimal to search for the target in the smaller subset (although note that the smaller-subset items were also more salient). In the Manipulated Group, we presented targets sooner in the large subset than in the small subset, such that searching the large (and less salient) subset was now the optimal strategy. Experiment 2 contained a similar task with subsets defined by their spatial location instead of color. In both experiments, participants’ tendency to choose the small subset was significantly reduced in the Manipulated Group, in which the target appeared sooner in larger subsets. These results demonstrate that strategy optimization overrides stimulus salience in visual search, and that strategy depends more on internal than on external factors.

Acknowledgements: This work was supported by BCS-2021038 to A.B.L.