Attention 1
Talk Session: Friday, May 15, 2026, 4:15 – 5:45 pm, Talk Room 1
Moderator: Alex White, Barnard College
Talk 1, 4:15 pm, 15.11
Visual search is disrupted by perceived lighting changes in dynamic and static displays
Gabriel Conn1, Zephyr Markley1, Katie Jobson2, Kayla Sansevere3, Katherine Moore1; 1Arcadia University, 2University of Pennsylvania, 3Tufts University
Visual search depends on internal search templates that guide attention toward likely targets while filtering distractors. These templates, stored in activated long-term memory, can shift flexibly depending on task demands and display context. One factor influencing template formation is color constancy—the perceptual adjustment that maintains stable color appearance across changes in lighting. We investigated whether perceived lighting changes automatically reshape visual search templates, even when such adjustments hinder performance. Participants searched a rapid serial visual presentation for a target while background and distractor colors were systematically shifted in hue to simulate lighting changes. Each target had two critical distractors, generated by applying red and green filters to the target, which participants were instructed to reject. Across four experiments and six unique targets using a dynamic display, participants consistently showed greater difficulty rejecting distractors whose color shifts matched the background tint, i.e., distractors that appeared as the target would have appeared had there been a true lighting change. Target identification also suffered on tinted backgrounds relative to white. These effects were amplified when the background color persisted across trials rather than changing randomly, suggesting that establishing a lighting context strengthens template shifts. We found similar effects in static displays of realistic visual scenes. These findings demonstrate that color constancy principles influence visual search automatically, constraining strategic control over search templates. This reflexive adjustment, while sometimes counterproductive as in this study, reflects an adaptive mechanism for object recognition in natural environments.
Talk 2, 4:30 pm, 15.12
Short-term effect of eye-specific attention training in augmented reality
Yizhi Wang1,2, Jinyou Zou3, Peng Zhang1,2; 1State Key Laboratory of Cognitive Science and Mental Health, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China, 2University of Chinese Academy of Sciences, Beijing, China, 3Aier Academy of Ophthalmology, Central South University, Changsha, China
Introduction: Top-down attention can modulate visual processing in specific monocular channels. Here, we investigated whether short-term (30 min) training in altered reality (AR) can influence eye-specific attention ability. Methods: During the AR intervention, images were captured in real time through an HD high-speed camera built into a pair of LED goggles. Participants played an action video game with the non-deprived eye (NDE) presented with the original video while the deprived eye (DE) received phase-scrambled pink noise. A modified binocular rivalry paradigm was used to assess the eye-specific attention effect before and after training. In the central visual field, one eye was presented with a radial grating while the other eye was presented with dynamic Mondrian noise. Two monocular attentional cues, one in each eye, were presented to the left and right of the rivalry stimuli. Participants were required to pay attention to the monocular cue in one eye while monitoring the perceptual state of the rivalry stimuli. They pressed a button when the radial grating broke interocular suppression from the dynamic noise. Results: At baseline, suppression time was significantly longer when the attended eye was presented with the dynamic noise than with the radial grating, indicating an eye-specific attention effect. After training, the eye-specific attention effect disappeared for the NDE, while no significant change was observed for the DE. Conclusion: Depriving phase information in one eye for 30 min in altered reality significantly reduced the eye-specific attention ability of the non-deprived eye. This effect suggests an adaptation of eye-based attention to a monocular channel.
Talk 3, 4:45 pm, 15.13
Attentional eye selection modulates interocular suppression across the visual cortical hierarchy
Chuan Hou1, Junxian Rao1; 1Smith-Kettlewell Eye Research Institute
Previous work revealed that attentional modulation in V1 from the amblyopic eye is degraded, and that this degradation correlates with the magnitude of behaviorally measured visual suppression in amblyopia, suggesting a tight link between selective visual attention and interocular suppression (Hou et al., 2016). In this study, we investigated whether attentional eye selection can causally modulate interocular suppression in the visual cortex. We employed dichoptic multiple-object tracking stimuli, modulated at distinct temporal frequencies in each eye, in observers with normal vision. The different frequencies allowed us to quantify spectral response components associated with each eye’s inputs: 3.75 Hz (F1) for target dots in the tracking eye and 3 Hz (F2) for distractor dots in the other eye. Neural activity and behavioral performance were recorded using a 128-channel EEG system under three conditions: (1) tracking dots in the dominant eye, (2) tracking dots in the non-dominant eye, and (3) passive viewing with no tracking. Regions of interest (ROIs) were defined with structural and functional MRI, including V1, extrastriate cortex (V3A, hV4, hMT+, and lateral occipital cortex), and the intraparietal sulcus (IPS). Our results showed that during natural viewing (passive condition), there was little or no neural bias between the two eyes. However, selectively attending to one eye (i.e., tracking target dots with either the dominant or non-dominant eye) produced stronger responses to the attended eye’s signal compared to the distractor eye. This attentional response bias was most prominent in extrastriate cortex and posterior IPS. These findings demonstrate that attentional eye selection can modulate interocular suppression across the visual cortical hierarchy. This mechanism may have clinical implications for understanding and treating visual disorders characterized by interocular sensory imbalance, such as the pathological suppression observed in amblyopia.
This research was supported by NIH grant R01EY035346 to C. H.
Talk 4, 5:00 pm, 15.14
Pre-saccadic attention and the right visual field advantage for word recognition
Mariam Latif1, Devon Lack1, Alex L White1; 1Barnard College, Columbia University
Background: Written words are generally easier to recognize in the right visual field (RVF) than in the left visual field (LVF). This asymmetry is typically measured while participants maintain fixation and judge words at unpredictable locations, unlike natural reading, during which readers frequently shift attention and move their eyes. Here we manipulate spatial attention in order to elucidate its role in the hemifield asymmetry. The question is whether English words are still easier to recognize in the RVF even when attention is focused to the left during preparation of a leftward saccade. Methods: On each trial, we presented an English word on one side of fixation and a pseudoword on the other. The primary task was semantic categorization of the word. On neutral trials, participants maintained fixation and attended to both sides equally. On saccade trials, a central cue prompted participants to immediately saccade to the left or right. The word appeared just before the saccade, either at the cued side (2/3 of trials) or the opposite side (1/3 of trials). Results: On neutral trials, semantic categorization accuracy was higher when the word was in the RVF than the LVF. On saccade trials, as time approached the saccade onset, accuracy increased for words at the cued side and decreased at the opposite side. Thus, pre-saccadic attention modulated the magnitude of the hemifield asymmetry. However, the asymmetry never reversed: even just before a leftward saccade, accuracy was still better for words in the RVF. Conclusion: The hemifield asymmetry is not due to an attentional bias to the right, as it persists when attention is focused to the left. Rather, we propose that it is due to the left lateralization of reading-related brain areas. Future directions include testing readers of right-to-left scripts (e.g., Arabic) who have more experience attending leftwards while reading.
Talk 5, 5:15 pm, 15.15
Dual tasks in animated data visualization: evaluating global statistics under multifocal attention
Ouxun Jiang1, Steven Franconeri1; 1Northwestern University
When presented with an animated data visualization, viewers often complete multiple tasks simultaneously. For example, in an animated bubble chart, viewers might track a few bubbles representing data points of interest while also evaluating whether the average size of the bubbles is increasing or decreasing. Under such dual-task conditions, how do the visual tasks interfere with each other? If the tracking task impairs the evaluation of average features, it would suggest that the ‘multifocal’ attention used during tracking is insufficient for extracting global statistics; otherwise, it would suggest that global features remain accessible outside focal attention. To test this, we asked participants to perform a multiple-object tracking task while evaluating the average feature (size) change of either the tracked targets or the non-targets. Results showed that during tracking, participants could reliably evaluate the average size change for targets but not for non-targets. This suggests that the multifocal attention demanded by tracking relies largely on focal selection, consistent with the idea that feature information degrades outside the visual focus, even for global statistics. We also tested whether a practical design intervention could ease processing demands for animated data visualizations by highlighting target objects. Highlighting improved performance both in tracking and in evaluating the average size change of non-targets. These findings point to the limits of demanding multiple tasks within one animated data visualization and to practical design interventions that can mitigate such limits.
Talk 6, 5:30 pm, 15.16
A headwinds bias in visual attention: Selective detection of self-disadvantaging changes in a game of Pong
Loren Matelsky1, Colleen Macklin2, Benjamin van Buren1; 1The New School, 2Parsons School of Design
People are better able to recall circumstances that have disadvantaged them (headwinds) than those that have been advantageous (tailwinds). For example, people believe that their parents’ actions more frequently favored their sibling, and political party members believe that the political system is biased against their own party. Does the tendency to overweight headwind events first emerge during recall, or might it originate earlier, in visual attention and perception? If so, when playing a game of Pong against a computer-controlled paddle, people might be more likely to notice changes that impede their goals than changes that help them. In a preregistered experiment, 160 online players completed a 60-second game of Pong with an embedded measure of visual change detection/blindness. Every five seconds, the screen flickered gray for 150 ms, masking a 5% length change in one paddle. In Headwinds games, the changes hindered the player (either the player’s paddle shortened or the computer’s paddle lengthened); in Tailwinds games, the changes helped the player (either the player’s paddle lengthened or the computer’s paddle shortened). After the game, we measured players’ awareness of the visual changes and their impressions of whether the game was fair. Although paddle length changes were perfectly matched across conditions, players were substantially more likely to notice Headwinds changes than Tailwinds changes (65% vs. 35%) and to judge Headwinds games as unfair more often than Tailwinds games (74% vs. 46%). Players selectively noticed changes that hindered them and were blind to changes that helped them. We propose that the headwinds/tailwinds evaluative bias is rooted in a visual information processing bias, shaping not just what we can recall, but what we even see in the first place.