Attention 3
Talk Session: Monday, May 18, 2026, 8:15 – 9:45 am, Talk Room 1
Moderator: Barry Giesbrecht, University of California, Santa Barbara
Talk 1, 8:15 am, 41.11
Deep mind-wandering amplifies attentional capture
Shivang Shelat1,2, Daniel D. Thayer1, Jonathan W. Schooler1, Barry Giesbrecht1,2; 1University of California, Santa Barbara, 2Institute for Collaborative Biotechnologies
Failures of human attention take many forms. One example is when our attention drifts away from a task toward irrelevant thoughts: mind-wandering. Another example is when our attention is pulled toward a salient visual distractor at the expense of a target: distractor capture. In this experiment, we test how mind-wandering and spatial attention are linked by interleaving thought probes into the additional singleton paradigm. Participants (n = 68) indicated a bar’s orientation in a target shape among nontarget shapes (e.g., the diamond among circles), and one nontarget was a color singleton distractor on half of the trials. Periodic probes asked participants to report the depth of their mind-wandering. The distractor did not affect accuracy but slowed response times (RTs) to the target (b = 22.77, SE = 2.78, p < .0001), while deeper mind-wandering reduced accuracy (b = -0.019, SE = 0.0066, p = .0033) but did not affect RTs. Critically, on correct trials, there was a significant interaction between distractor presence and mind-wandering on RTs (b = 0.87, SE = 0.40, p = .030). The RT distractor cost was larger when participants reported deep mind-wandering before trial onset. On the other hand, when participants were not mind-wandering (i.e., when they were strongly focused), the distractor cost shrank. This suggests that the classic behavioral cost of distractor capture in the additional singleton paradigm is modulated by mind-wandering.
This work is supported by a National Science Foundation Graduate Research Fellowship awarded to the first author under grant number 2139319.
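The key result above is an interaction term (distractor presence × mind-wandering depth) in a model of RTs. A minimal sketch of that logic, using simulated data and ordinary least squares instead of the authors' mixed-effects model (all variable names, effect sizes, and the simulation itself are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
distractor = rng.integers(0, 2, n)   # 0 = distractor absent, 1 = present
mw_depth = rng.integers(1, 7, n)     # self-reported mind-wandering depth (1-6)

# Simulate RTs with a distractor cost that grows with mind-wandering depth
rt = (600 + 20 * distractor + 2 * mw_depth
      + 0.9 * distractor * mw_depth + rng.normal(0, 30, n))

# Design matrix: intercept, main effects, and the interaction column
X = np.column_stack([np.ones(n), distractor, mw_depth, distractor * mw_depth])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(f"estimated interaction: {beta[3]:.2f}")  # recovers the simulated value
```

The interaction column (the elementwise product of the two predictors) is what lets the fitted distractor cost vary with reported mind-wandering depth, mirroring the reported b = 0.87 effect.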
Talk 2, 8:30 am, 41.12
Modulations of Perceptual Priority of Foveal and Extrafoveal Content During Fixation
Sanjana Kapisthalam1, Martina Poletti1; 1University of Rochester
Naturally, perception unfolds in a continuous fixation–saccade cycle. Saccades reposition the fovea several times per second, and fixations (typically 200–300 ms) constitute a stable and relatively long time window in which visual information is acquired. Yet, fixation is often treated as a single, undifferentiated temporal unit. Here we examine how visual discrimination changes over the course of fixation after saccade landing. Observers made a saccade to the center of the display and reported the direction of a brief (50 ms) orientation change of a continuously visible Gabor (2° diameter, either at 2 or 8 cpd). The orientation change could occur any time up to 450 ms after saccade landing. Gabors were either presented in isolation at a given eccentricity—either foveally (1°) or parafoveally (8°)—or were presented simultaneously at both eccentricities. In the latter case, only one randomly selected Gabor changed orientation. Stimulus contrast was adjusted to achieve threshold performance for each eccentricity and spatial frequency. When stimuli were presented in isolation, orientation-discrimination performance steadily declined over the 450 ms following saccade landing. This decline was comparable for foveal and extrafoveal stimuli. In contrast, when stimuli were presented simultaneously at both eccentricities, discrimination declined for foveal stimuli, but increased for parafoveal stimuli over the course of fixation. This pattern showed an early foveal advantage followed by a later parafoveal gain and occurred only for high-spatial-frequency stimuli (8 cpd). For low-spatial-frequency stimuli (2 cpd), performance remained stable for both foveal and parafoveal stimuli throughout fixation. These results show that even during brief fixations, visual discrimination is not constant but changes substantially depending on stimulus eccentricity and spatial frequency.
Immediately after saccade landing, the visual system prioritizes high-resolution foveal content, but over the course of fixation this prioritization shifts toward extrafoveal stimuli, even when no further saccade is planned.
Meta Reality Labs; NIH EY001319 to the Center for Visual Science at the University of Rochester
Talk 3, 8:45 am, 41.13
Temporal Dynamics of Reward Processing in Visual Decision-Making
Christopher J.H. Pirrung1, Dalia Abdo Kahin1, Chris Baker1; 1Laboratory of Brain and Cognition, National Institutes of Health
In visual decision making, visual information and reward processing interact to shape learning, with visual and frontal cortex implicated in each, respectively. The Reward Positivity (RewP) is a sensitive M/EEG marker of reward, typically measured at frontocentral sensors/electrodes. The RewP is likely generated by a distributed cortical network whose individual nodes contribute heterogeneous information. Given the broad set of contributors to this signal, using a univariate approach such as ERP analysis may not capture the full scope of this signal. Multivariate pattern analysis (MVPA) has shown increased sensitivity to neural signals and allowed for successful classification of perceptual stimuli based on evoked neural activity. Here we tested whether MVPA of MEG data can be used to temporally decode rewarding and non-rewarding feedback in individual participants. Participants completed a visual discrimination task followed by a visual cue (blue or yellow screen) that indicated monetary reward. Feedback-color mapping changed between runs. MVPA was first used to classify color at each timepoint, irrespective of feedback. This sanity-check model accurately predicted color from 70-350ms. Feedback was accurately predicted from 200-700ms in a model including both blue- and yellow-reward trials. A cross-decoding model, in which the training set included rewards of one color and the testing set included rewards of the other color, made systematic errors at early timepoints (70-250ms) in line with color processing, but successfully predicted feedback at later timepoints (350-700ms). This suggests that we are successfully decoding the response to the cognitive content of the feedback, not just the perceptual attributes. Interestingly, preliminary source estimation shows the ability to decode feedback not only in expected frontal areas, but also early visual areas.
This method could allow for differential decoding of perceptual and cognitive attributes of reward, enabling us to explore how perceptual elements of reward guide decision-making.
ZIA MH002893
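The cross-decoding logic above—train on one feedback-color mapping, test on the reversed one—can be illustrated with a toy simulation. Everything here (sensor dimensionality, signal amplitudes, the nearest-centroid classifier) is an assumption for illustration, not the authors' MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_times, n_sensors = 400, 10, 32
color_pat = rng.normal(0, 1, (2, n_sensors))    # sensor pattern per cue color
reward_pat = rng.normal(0, 1, (2, n_sensors))   # sensor pattern per feedback value
color_amp = np.where(np.arange(n_times) < 4, 2.0, 0.0)    # color signal early
reward_amp = np.where(np.arange(n_times) >= 5, 2.0, 0.0)  # reward signal late

def simulate(mapping):
    """Simulate trials under one feedback-color mapping."""
    colors = rng.integers(0, 2, n_trials)
    rewards = colors if mapping == "A" else 1 - colors
    X = (color_amp[None, :, None] * color_pat[colors][:, None, :]
         + reward_amp[None, :, None] * reward_pat[rewards][:, None, :]
         + rng.normal(0, 1, (n_trials, n_times, n_sensors)))
    return X, rewards

X_train, y_train = simulate("A")   # train on one mapping...
X_test, y_test = simulate("B")     # ...test on the reversed mapping
acc = np.zeros(n_times)
for t in range(n_times):
    # Nearest-centroid classifier on reward labels at each timepoint
    c0 = X_train[y_train == 0, t].mean(axis=0)
    c1 = X_train[y_train == 1, t].mean(axis=0)
    pred = (np.linalg.norm(X_test[:, t] - c1, axis=1)
            < np.linalg.norm(X_test[:, t] - c0, axis=1)).astype(int)
    acc[t] = (pred == y_test).mean()
print(acc.round(2))  # below chance early (color-driven errors), above chance late
```

Because early activity carries only the color pattern, a classifier trained on mapping A systematically mislabels mapping-B trials at early timepoints—the same signature of "systematic errors in line with color processing" the abstract reports—while late, reward-driven activity cross-decodes correctly.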
Talk 4, 9:00 am, 41.14
Two paths to guide attention: dissociating Statistical and Reward Learning
Carola Dolci1, Tom Verguts1, Elisa Santandrea2, Nico Boehler1; 1Department of Experimental Psychology, Ghent University, 2Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona
The human ability to anticipate and efficiently respond to future events relies on learning regularities that shape attentional priorities. Two major experience-dependent phenomena, statistical learning (SL) and reward-based learning (RL), are known to bias attention toward frequently occurring or highly valued stimuli. Yet, it remains unclear whether these processes operate through shared or distinct neural pathways. To address this question, we conducted a two-session EEG study in which participants performed a visual search task designed to independently manipulate SL and RL. In the SL session, targets appeared more frequently in a specific spatial location, whereas in the RL session, correct responses at certain locations yielded higher rewards. To probe latent attentional biases established through learning, each search display was preceded by a task-irrelevant “ping,” allowing us to measure early reactivation of spatial priorities. Behavioral results demonstrated robust facilitation in both learning contexts: participants responded faster and more accurately when targets appeared in high-frequency (SL) or high-reward (RL) locations. Multivariate EEG decoding revealed a clear dissociation between learning types. SL induced spatially specific neural modulation during the ping display, whereas RL did not. Rather, ERP analyses revealed that RL modulated a later lateralized component related to the discrimination of the target (the SPCN), reflecting greater cognitive effort and discrimination demands at low- (vs. high-) reward locations. Hierarchical drift diffusion modeling supported these distinctions by revealing complementary effects on decision dynamics and discriminability across the two learning types. Together, these findings show that SL and RL, despite producing similar behavioral enhancements, rely on (partially) dissociable neural mechanisms. 
SL shapes anticipatory visual processing even before target onset, reflecting the encoding of learned spatial regularities. In contrast, RL primarily impacts later value-driven selection and decision-related processes during the search itself.
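The drift diffusion modeling mentioned above treats a decision as noisy evidence accumulation toward a bound. A minimal single-trial simulator, assuming for illustration that learning context changes the drift rate (the abstract fits hierarchical DDMs; parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def ddm_trial(drift, bound=1.0, dt=0.002, noise_sd=1.0, max_t=5.0):
    """Accumulate noisy evidence until it crosses +bound (correct) or -bound (error)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= bound, t

# Higher drift (e.g., an advantaged location) yields faster, more accurate decisions
results = {}
for v in (0.5, 2.0):
    trials = [ddm_trial(v) for _ in range(500)]
    results[v] = (np.mean([c for c, _ in trials]),   # accuracy
                  np.mean([t for _, t in trials]))   # mean decision time (s)
for v, (p_correct, mean_dt) in results.items():
    print(f"drift={v}: accuracy={p_correct:.2f}, mean decision time={mean_dt:.2f} s")
```

Hierarchical variants estimate such parameters per participant and condition with shrinkage toward group-level distributions, which is how the study separates effects on decision dynamics from effects on discriminability.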
Talk 5, 9:15 am, 41.15
Unique contributions of selective and sustained attention to neural image representations
Anna Corriveau1,2, Dongfang Tian1, Matthieu Chidharom1,2, Edward K. Vogel1,2,3, Monica D. Rosenberg1,2,3; 1Department of Psychology, University of Chicago, 2Institute for Mind and Biology, University of Chicago, 3Neuroscience Institute, University of Chicago
Attention prioritizes relevant information from the environment but is not stable from moment to moment. While relevant information receives an overall processing benefit (selective attention), sustained attentional state fluctuates on top of attentional selection, leading to trial-to-trial variability in processing. To what extent do these separable components of attention affect the fidelity of visual information represented in neural EEG signals? Participants performed four 15-minute blocks (600 trials/block) of a continuous performance task in which stimuli were images overlaid on oriented Gabor patches. In two blocks, images were task-relevant. On each trial, participants responded with a button press when an image was from a frequent category (food and vehicles) and withheld a response when it was from the infrequent category (animals, 10%). During the other two blocks, participants responded based on Gabor orientation. To quantify differences in sustained attentional state, trials were split based on whether they occurred during in-the-zone (low response variance) or out-of-the-zone (high response variance) states within both image-relevant and image-irrelevant blocks. Participants made more commission errors when out-of-the-zone (p=0.013). We compared EEG logistic regression classification (food vs. vehicles) for relevant, irrelevant, in-the-zone, and out-of-the-zone trials. Classification performance was higher for relevant vs. irrelevant images, with evidence of a difference lasting from 250-600ms after trial onset. However, sustained attention influenced classification differently for image-relevant and image-irrelevant blocks, suggesting an interaction between selective and sustained attention. During image-relevant blocks, classifier performance was higher for in-the-zone vs. out-of-the-zone trials (300-450ms after onset).
However, when images were irrelevant, in-the-zone classification underperformed out-of-the-zone classification in the initial period following stimulus onset, but this pattern reversed later in the trial. Results reveal unique representational time courses for different selective and sustained attentional states, providing further support for a distinction between attentional subcomponents.
Office of Naval Research Multidisciplinary University Research Initiatives (MURI) N00014-23-1-2768 to E.K.V. and M.D.R.
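The in-the-zone / out-of-the-zone split above labels each trial by the local variability of response times, median-split within a block. A minimal sketch of that labeling step on simulated RTs (window size and RT parameters are assumptions, not the authors' values):

```python
import numpy as np

rng = np.random.default_rng(4)
rts = rng.normal(0.45, 0.08, 600)   # simulated RTs for one 600-trial block (s)
half = 4                            # +/-4 trials -> 9-trial sliding window

# Local RT variability around each trial
local_sd = np.array([rts[max(0, i - half): i + half + 1].std()
                     for i in range(len(rts))])

# Median split: low local variability = "in the zone"
in_the_zone = local_sd <= np.median(local_sd)
print(f"in-the-zone trials: {in_the_zone.sum()} / {len(rts)}")
```

Subsequent analyses (here, the EEG classifiers) are then run separately on the two trial sets, so any decoding difference tracks the participant's sustained attentional state rather than the stimulus.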
Talk 6, 9:30 am, 41.16
The “Attentional Blink” is not always Attentional: Different Mechanisms Underlie Task-switching and Single-task Designs
Matthew Junker1, Albert Kim1, David E. Huber1; 1University of Colorado Boulder
A well-documented limitation of visual processing is the transient deficit in responding to a second visual target (T2) when presented within the specific temporal interval of 100-400 ms after a first (T1) in Rapid Serial Visual Presentation (RSVP). The mechanisms of this “attentional blink” (AB; Raymond et al., 1992) have been investigated using various target-defining and response features, sometimes manipulating target-defining features within a single trial (e.g., identify the only white letter, then indicate whether a black X appeared) and sometimes keeping the target-defining feature constant (e.g., look for two target letters among numbers). We provide evidence that although the transient T2 deficit in the former “task-switching” paradigm may indeed be due to “attentional” limitations, deficits in the latter “single-task” paradigm may result from neural habituation of the target-defining feature (Rusconi & Huber, 2018). Specifically, observers were presented with two target words defined by their semantic category (e.g., colors) among nontargets (e.g., clothes), and a lag-dependent T2 deficit was observed even when participants were instructed to report only the second instance of a target. When T1 was defined by a different category from T2 (e.g., numbers for T1 and colors for T2), ignoring T1 did not produce a lag-dependent T2 deficit; therefore, different mechanisms may underlie performance deficits between these two tasks. A follow-up experiment was conducted in which participants categorized only the final word presented, which appeared at various intervals after another word of the same category. We observed lag-dependent changes in RT, demonstrating that encoding T1 to memory is not required for eliciting the transient T2 deficit for single-task designs. Finally, we demonstrate a strong reduction in an ERP component associated with late perceptual processing (the N170) during the AB in a single-task design.
Together, these results suggest that different mechanisms underlie single-task and task-switching AB paradigms.
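The habituation account can be caricatured in a few lines: processing T1 depletes a shared resource for the target-defining feature, and that resource recovers exponentially, so T2 suffers most at short lags. This is a toy monotone model, not the authors' habituation model; the time constant and depletion fraction are arbitrary assumptions:

```python
import numpy as np

tau = 300.0        # recovery time constant in ms (assumption)
depletion = 0.7    # fraction of the resource consumed by processing T1 (assumption)
lags = np.array([100.0, 200.0, 300.0, 400.0, 800.0])

# Available resource (and hence relative T2 signal) recovers with T1-T2 lag
t2_strength = 1.0 - depletion * np.exp(-lags / tau)
for lag, s in zip(lags, t2_strength):
    print(f"lag {lag:.0f} ms: relative T2 signal {s:.2f}")
```

The point of the sketch is only that a lag-dependent T2 deficit falls out of feature-level dynamics with no appeal to memory encoding of T1, consistent with the follow-up experiment in which T1 required no response.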