VSS, May 13-18

Attention: Prioritization, suppression, lapses

Talk Session: Wednesday, May 18, 2022, 8:15 – 10:00 am EDT, Talk Room 1
Moderator: Yaffa Yeshurun, University of Haifa



Talk 1, 8:15 am, 61.11

Pinging the brain to reveal a hidden attentional priority map

Docky Duncan1,2, Dirk van Moorselaar1,2, Jan Theeuwes1,2; 1Vrije Universiteit Amsterdam, 2Institute Brain and Behavior Amsterdam (iBBA)

Exciting work in the working memory literature has demonstrated that hidden, or so-called ‘activity-silent’, memory representations can be inferred through external perturbation. Here we explored whether the same technique can be used to visualise the landscape of spatial priority maps. It is generally assumed that statistical learning, for example about high-probability target locations in space, affects weights within a theoretical spatial priority map. We hypothesised that these maps may be hidden from techniques measuring active neural activity because they are mediated not by active neural firing but by changes in synaptic weights leading to biased re-activation potentials, akin to the reactivation of latent memory representations following external perturbation. We thus sought to observe whether perturbation of the visual system with visual noise (i.e., presenting a high-contrast visual ping) would reveal the learned attentional priority map. We tested this using the additional singleton paradigm, wherein participants were implicitly trained to expect search targets to appear in certain locations in space. This high-probability target location systematically shifted across the display in a blocked design, allowing for a multivariate decoding approach. Critically, in the inter-trial period we occasionally presented high-contrast visual ‘pings’ similar to those used to reveal activity-silent working memory contents. Using multivariate pattern analysis on raw and time-frequency filtered EEG data, we show robust anticipatory decoding of the high-probability target location, but critically only on trials containing a ‘ping’ prior to search display onset. Accompanying analyses of eye-tracking data rule out eye movements as an explanation of our results and indicate that our findings reflect a latent attentional priority map. Our findings thus highlight that dynamic coding offers a plausible mechanistic explanation for how statistical learning arises, as well as offering a new, striking method of revealing learned attentional priority.
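The abstract does not specify the decoding pipeline in code. As an illustration of the multivariate decoding step, the following minimal stdlib Python sketch recovers a "location" label from synthetic multichannel patterns using a nearest-centroid classifier with leave-one-out cross-validation; all dimensions, noise levels, and the pattern model are invented for illustration and are not the authors' method:

```python
import random

random.seed(0)

N_LOCATIONS = 4   # hypothetical high-probability target locations
N_CHANNELS = 8    # hypothetical EEG channels
N_TRIALS = 40     # trials per location

def synth_trial(loc):
    # Each location biases its own subset of channels, mimicking a
    # location-specific scalp pattern (purely illustrative).
    return [random.gauss(1.0 if ch % N_LOCATIONS == loc else 0.0, 0.5)
            for ch in range(N_CHANNELS)]

data = [(synth_trial(loc), loc)
        for loc in range(N_LOCATIONS) for _ in range(N_TRIALS)]

def centroids(trials):
    # Mean pattern per location over the training trials.
    cents = {}
    for loc in range(N_LOCATIONS):
        rows = [x for x, y in trials if y == loc]
        cents[loc] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def classify(x, cents):
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda loc: dist(x, cents[loc]))

# Leave-one-out cross-validated decoding accuracy.
correct = sum(
    classify(x, centroids(data[:i] + data[i + 1:])) == y
    for i, (x, y) in enumerate(data)
)
accuracy = correct / len(data)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / N_LOCATIONS:.2f})")
```

In the actual study the classifier would be trained on EEG topographies around the ping and tested on held-out trials; the sketch only shows why location-specific patterns support above-chance decoding of the learned location.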

Acknowledgements: European Research Council (ERC) advanced grant 833029

Talk 2, 8:30 am, 61.12

Distinguishing anticipatory visual cortical dynamics during temporal attention and expectation

Karen Tian1,2, David Heeger2, Marisa Carrasco2, Rachel Denison1,2; 1Boston University, 2New York University

In processing a stream of visual information, visual performance is improved by temporal expectation, the timing predictability of sensory events, and by voluntary temporal attention, the prioritization of sensory events at behaviorally relevant time points. Although temporal expectation and attention are often used interchangeably, they can be dissociated and may be supported by distinct neural mechanisms. Here, we manipulated temporal attention while holding expectation constant and used concurrent MEG to disentangle the effects of temporal attention and expectation on anticipatory visual cortical dynamics. Observers performed an orientation discrimination task. On each trial, two grating targets (T1, T2) appeared sequentially for 50 ms each at the fovea, separated by a 300-ms stimulus onset asynchrony. A precue tone (75% validity) instructed observers to attend to T1 or T2. A response cue tone after the targets instructed them to report the orientation (CW/CCW) of either T1 or T2. Thus, on each trial, one target was attended and the other unattended, whereas their expected timing was fixed. The targets were superimposed on 20-Hz flickering noise, which generated a 20-Hz steady-state visual evoked response (SSVER) in visual cortex. We calculated the intertrial phase coherence (ITPC) of the SSVER signal to continuously measure visual cortical sensitivity. Temporal expectation and attention both affected visual cortical sensitivity to visual stimulation in anticipation of the target stimuli. Temporal expectation was accompanied by a ramping increase in ITPC, starting from the precue up to the expected onset of T1, whether attended or not. Temporal attention modulated the slope of the ramp, such that the slope leading up to T1 was steeper when T1 was attended than unattended. These results suggest temporal expectation and attention jointly act on sensory processing, with temporal attention acting over and above expectation in modulating the anticipatory ramping of visual cortical sensitivity.
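ITPC at a given time-frequency point is the length of the across-trial average of unit phasors, so a ramping ITPC increase means the SSVER phase becomes more consistent across trials. A minimal stdlib sketch of the measure itself (trial counts and phase distributions are made up; this is not the authors' analysis code):

```python
import cmath
import math
import random

random.seed(1)

def itpc(phases):
    """Inter-trial phase coherence: the length of the across-trial mean of
    unit phasors. 1.0 = identical phase on every trial; near 0 = random."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

n_trials = 200
# Trials phase-locked to the 20-Hz flicker: phases cluster around one value.
locked = [random.gauss(0.0, 0.3) for _ in range(n_trials)]
# Trials with no phase locking: phases uniform on the circle.
unlocked = [random.uniform(-math.pi, math.pi) for _ in range(n_trials)]

print(f"ITPC, phase-locked trials: {itpc(locked):.2f}")
print(f"ITPC, random-phase trials: {itpc(unlocked):.2f}")
```

In practice the phases would come from a time-frequency decomposition of the MEG signal at 20 Hz, computed at each time point to trace the anticipatory ramp.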

Acknowledgements: NIH R01-EY019693 to MC

Talk 3, 8:45 am, 61.13

Two Target Templates for Attentional Guidance and Decision-Making: Relational and Optimal

Stefanie Becker1, Zachary Hamblin-Frohman1; 1The University of Queensland, Brisbane, Australia

The target template is often described as the mental representation that drives attentional selection, for instance, in visual search. However, this template is not necessarily a veridical representation of the sought-after target. According to Optimal Tuning, the attentional template shifts to an exaggerated target value to maximise the signal-to-noise ratio when the target is similar to the non-targets. By contrast, the Relational Account states that attention is tuned to the relative target feature that specifies how the target differs from the other items in the context (e.g., all redder items or the reddest item). Both theories are empirically supported, but the evidence comes from different paradigms (perceptual decision tasks vs. visual search) and different attentional measures (probe response accuracy vs. gaze capture). Here, we combined both paradigms to provide a critical test of these accounts. The results revealed Optimal Tuning shifts in probe trial accuracy (when participants had to indicate the location of the target), but these shifts did not drive early attention or gaze behaviour in visual search. Instead, attentional guidance followed the Relational Account, selecting all items with the relative target colour (e.g., redder). This suggests that the masked probe trials used in Optimal Tuning do not probe the attentional template that guides visual attention. Moreover, we found that optimal tuning shifts could be explained by simultaneous contrast effects: surrounding an odd-coloured target with similar-coloured nontargets can shift the appearance of the target (e.g., make the target appear slightly redder). This suggests that the optimal tuning shift in probe responses may in fact be a perceptual artefact rather than a strategic adaptation to optimise the signal-to-noise ratio. In sum, the results demonstrate that the attention-guiding target template contains relative features, while the template guiding decision-making contains a veridical representation of the target that can be shifted by simultaneous contrast effects.

Acknowledgements: This work was supported by Australian Research Council grant DP210103430 to SIB.

Talk 4, 9:00 am, 61.14

Evidence against the signal suppression hypothesis in the capture-probe paradigm

Matt Oxner1, Jasna Martinovic2, Norman Forschack1, Romy Lempe1, Christopher Gundlach1, Matthias Mueller1; 1Universität Leipzig, 2University of Edinburgh

The signal suppression hypothesis (Gaspelin et al., 2015, 2018) argues that attentional capture by a salient singleton distractor can be preempted through top-down inhibitory mechanisms, so long as observers have foreknowledge of the singleton’s features, i.e., its color and shape. Several recent studies have supported this view using the capture-probe visual search paradigm: by interleaving a probe task in which participants report letters that appear on each search item, inferences can be drawn about relative attentional deployment to these items. This paradigm has largely shown that probe recall associated with a singleton distractor is reduced relative to nonsingletons, suggesting distractor suppression. But given that these nonsingletons usually share the target’s color while singletons do not, we hypothesized that a global facilitation of the target color, rather than singleton suppression, could drive this effect. In several experiments, we show that enhancement of the baseline nonsingleton distractors explains “below-baseline suppression” as well as other effects previously seen in the capture-probe paradigm. First, when nonsingletons were presented in an irrelevant color different from both the target and the singleton, the difference in attention between distractor types was abolished. Second, after manipulating the color of nonsingletons, we found that attention to nonsingletons was affected by their color similarity to the target, with little or no such effect on singleton distractors. Finally, we tested the claim by Gaspelin & Luck (2018) that the putative “distractor suppression” requires fixed singleton features. In contrast to their findings, we found that this effect does occur when singleton features vary trial-by-trial, so long as nonsingleton features remain fixed and benefit from the target’s global feature enhancement. Rather than preattentive feature suppression, these and previous results can be parsimoniously explained by global target-feature enhancement, which affects the baseline used to measure distractor suppression.

Acknowledgements: This research was funded by the Deutsche Forschungsgemeinschaft, grant to MM (MU972/29-1).

Talk 5, 9:15 am, 61.15

Single unit recordings in the human brain track sustained attention dynamics

Nicole Hakim1, Megan deBettencourt1, Tao Xie2, Mahesh Padmanaban3, Edward Awh4, Edward Vogel5, Peter Warnke; 1Stanford University, 2University of Chicago

Attention inevitably fluctuates over time, encompassing highly attentive moments and potentially catastrophic lapses of attention. These attention dynamics have been characterized using sustained attention to response tasks (SARTs), in which behavioral responses predict upcoming attention lapses. Studies have implicated subcortical brain areas, such as the basal ganglia and thalamus, as important for successfully performing a SART. Here, we examined neural signatures of attention by directly recording from the human brain. We collected data from patients (n=10) who were awake and performing a SART while undergoing deep brain stimulator implantation surgery. The task required patients to click a button in response to frequently presented circles (80% of trials) and withhold the response to infrequently presented squares (20% of trials). Patients successfully performed this task during surgery and exhibited canonical behavioral signatures of sustained attention. For example, patients demonstrated attention lapses (29% lapse rate), operationalized as failures to inhibit the prepotent response on the infrequent square trials. When responding more quickly, patients were also more likely to lapse on an upcoming square trial that required deviating from the prepotent response (mean reaction time (RT) 50 ms faster prior to lapses). We examined the neural signatures of sustained attention states, operationalized by RT, using recordings from the subthalamic nucleus (STN, n=5) of the basal ganglia and the ventral intermediate (VIM, n=5) nucleus of the thalamus. First, cluster-based permutation tests revealed that worse attentional states were correlated with lower beta power (12-20 Hz). Furthermore, we identified putative single units and characterized their spiking activity. Using multivariate pattern analyses, we successfully decoded sustained attentional state from the spiking activity of single units and the population activity of multiple units. Overall, these results provide comprehensive insight into how moment-to-moment fluctuations of sustained attention are reflected in subcortical structures, specifically the subthalamic nucleus of the basal ganglia and the ventral intermediate nucleus of the thalamus.
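The cluster-based permutation tests in the abstract control for multiple comparisons across time points; the core logic at a single time-frequency point can be sketched with a label-shuffling permutation test on synthetic RT and beta-power values. The positive RT-beta coupling is built into the simulated data to mirror the reported direction (worse states, i.e., faster RTs, go with lower beta power); none of this is the authors' code or data:

```python
import random
import statistics

random.seed(2)

n_trials = 120
# Synthetic trial-wise data with the reported effect direction built in.
rts = [random.gauss(0.45, 0.08) for _ in range(n_trials)]    # seconds
betas = [2.0 * rt + random.gauss(0.0, 0.1) for rt in rts]    # arbitrary units

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

observed = pearson_r(rts, betas)

# Permutation null: shuffle the RT labels and recompute the correlation.
n_perm = 2000
null_rs = []
for _ in range(n_perm):
    shuffled = rts[:]
    random.shuffle(shuffled)
    null_rs.append(pearson_r(shuffled, betas))
p = sum(abs(r) >= abs(observed) for r in null_rs) / n_perm
print(f"r = {observed:.2f}, permutation p = {p:.4f}")
```

The cluster-based version used in the study applies this shuffling logic to whole clusters of adjacent significant time points rather than to a single point.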

Talk 6, 9:30 am, 61.16

Development of the attentional blink from early infancy to adulthood

Jean-Remy Hochmann1,2, Sid Kouider3; 1CNRS UMR5229 - Institut des Sciences Cognitives Marc Jeannerod, 67 Boulevard Pinel, 69675, Bron, France, 2Université Lyon 1 Claude Bernard, France, 3Laboratoire de Sciences Cognitives et Psycholinguistique, EHESS/CNRS/ENS-DEC, 75005 Paris, France

The attentional blink (AB) is a phenomenon in which the second of two target stimuli (T1 and T2) is not consciously perceived when it appears shortly after the first. This phenomenon is well explained by a two-stage model of perception, in which an early sensory stage precedes a capacity-limited late stage that recruits the attentional system. As long as T1 occupies the late stage, T2 is missed. The duration of the AB thus reflects the duration of the late stage of perception. We designed an AB task that requires no instruction whatsoever and can be employed with very young infants (N=24 per group). On each trial, sequences of images were presented in parallel at three locations on the screen: left, center, and right. Most images were masks (scrambled faces). Among those masks, two faces appeared. The first face (T1) appeared centrally. The second face (T2) appeared either left or right. We measured infants’ tendency to detect and look at T2. We estimated the duration of the AB by varying the delay between T1 and T2 (first, third, or seventh image following T1) and the rate of stimulus presentation (3.33 Hz, 5 Hz, 10 Hz). We found that 5-month-olds missed T2 at delays of 600 and 900 ms but saw T2 at 1400 and 2100 ms, suggesting an AB of about 1150 ms. As infants grow, the AB shrinks: 8-month-olds exhibited an AB that lasted less than 700 ms, and 3-year-olds an AB that lasted less than 300 ms. Finally, adults exhibited an AB equivalent to that of 3-year-olds. These findings show that the two-stage organisation of perception is in place early in life and that an acceleration of the late stage of perception is a fundamental aspect of cognitive development, particularly over the first year of life.
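The T1-T2 delays reported in the abstract (e.g., 600, 900, 1400, 2100 ms) follow from crossing the three lags with the three presentation rates, since each image lasts 1000/rate milliseconds. A small arithmetic sketch of the design grid (the abstract does not state which lag-rate pairings were used in which condition, so the full grid is shown):

```python
# Nominal image duration per presentation rate:
# 3.33 Hz -> 300 ms, 5 Hz -> 200 ms, 10 Hz -> 100 ms.
steps_ms = {3.33: 300, 5.0: 200, 10.0: 100}
lags = [1, 3, 7]  # T2 is the 1st, 3rd, or 7th image after T1

# T1-T2 delay (ms) = lag x image duration.
delays = {rate: [lag * step for lag in lags] for rate, step in steps_ms.items()}
for rate, ds in delays.items():
    print(f"{rate:>5} Hz: T1-T2 delays = {ds} ms")
```

For example, the 600- and 1400-ms conditions come from lags 3 and 7 at 5 Hz, and the 900- and 2100-ms conditions from lags 3 and 7 at 3.33 Hz.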

Acknowledgements: This work was supported by the LABEX CORTEX (ANR-11-LABX-0042) of Université de Lyon within the program "Investissements d'Avenir" (decision n° 2019-ANR-LABX-02) operated by the French National Research Agency (ANR).

Talk 7, 9:45 am, 61.17

The predictive power of internal noise when considering attentional effects

Felipe Luzardo1, Yaffa Yeshurun1; 1University of Haifa

Individuals differ considerably in the effects of attention allocation, including both benefits (i.e., better performance when the target appears at a cued than at a non-cued location) and costs (i.e., worse performance when the target appears at an invalidly cued than at a non-cued location), across different tasks. Thus far, such individual differences have been attributed to post-perceptual factors such as working-memory capacity. Here, we examined whether a perceptual factor – the level of internal noise – is related to inter-individual variability in attentional effects. To that end, we estimated observers’ internal noise using the double-pass procedure combined with an external noise paradigm and the perceptual template model. We also measured the effects of spatial attention in an acuity task: participants reported the side of a square on which a small aperture appeared. Central arrows were used to engage sustained attention and peripheral cues to engage transient attention. Additionally, we measured temporal attention using the attentional blink paradigm. We found reliable correlations between individual levels of internal noise and the effects of both types of spatial attention, albeit in opposite directions: a positive correlation with sustained attention and a negative correlation with transient attention. When participants were split into groups by internal noise level (low/high), participants with high internal noise displayed a significant cost of attending the wrong location when deploying sustained attention. Regarding the attentional blink, we found that internal noise predicted the lag-1 cost. Taken together, these findings demonstrate that internal noise – a fundamental characteristic of visual perception – can predict individual differences in the effects of various types of attention. We speculate that internal noise might be related to increased attention-related inhibitory processes, such that individuals with high levels of internal noise might demonstrate increased levels of attention-related inhibition.
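The double-pass logic can be made concrete with a simulation: because the signal and external noise are identical across the two passes, any disagreement between an observer's two responses to the same stimulus must come from internal noise, so response agreement falls as internal noise rises. A stdlib Python sketch with a binary decision rule (noise levels and the decision model are invented; the perceptual-template-model fitting used in the study is not reproduced here):

```python
import random

random.seed(3)

def double_pass_agreement(internal_sd, n_trials=4000):
    """Simulate a double-pass experiment: the same signal-plus-external-noise
    stimulus is shown on two passes; only the observer's internal noise is
    redrawn. Returns the proportion of trials judged the same way twice."""
    agree = 0
    for _ in range(n_trials):
        # Frozen part: signal plus external noise, identical on both passes.
        stimulus = random.choice([-1.0, 1.0]) + random.gauss(0.0, 1.0)
        # Internal noise: drawn independently on each pass.
        r1 = stimulus + random.gauss(0.0, internal_sd) > 0
        r2 = stimulus + random.gauss(0.0, internal_sd) > 0
        agree += (r1 == r2)
    return agree / n_trials

for sd in (0.5, 1.0, 2.0):
    agreement = double_pass_agreement(sd)
    print(f"internal noise sd = {sd}: response agreement = {agreement:.2f}")
```

In the actual procedure, the observed agreement (together with percent correct across external-noise levels) constrains the internal-to-external noise ratio, which the perceptual template model then turns into an internal-noise estimate per observer.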