Attention in crisis: Clarifying flicker change blindness

Poster Presentation 36.454: Sunday, May 17, 2026, 2:45 – 6:45 pm, Pavilion
Session: Attention: Inattention, attentional blindness

Justin George1, Jonathan Flombaum1; 1Johns Hopkins University

Computational advances, combined with a better understanding of stimulus pooling in the visual system, suggest that processing limits attributed to attention may be structural rather than resource-limited. Across three behavioral experiments and simulations with a pooling model, we demonstrate that change detection limitations in a flicker paradigm can indeed be well characterized without appealing to a limited attentional capacity. We found that, in addition to obscuring changes outside fixation, pooling creates a threshold level of “change noise” across an image, which legitimate change signals must overcome. Participants searched for changes in flickering photographic images. Initially, we found that image-wise reaction times were correlated for scenes presented upright versus inverted, implicating image structure rather than attention. We then derived a model-based change magnitude by subtracting the model’s pooled feature vectors for each base and changed image across multiple fixations; this isolates the altered region because unchanged areas cancel. This metric did not predict detection latencies, even after several reweightings, indicating that low-level pooling corruption alone cannot explain performance. The model pools features over progressively larger, receptive field–like regions with distance from fixation, enabling synthesis of metamers that match its pooled statistics. Using metamerized versions of each base and changed image, we computed pixel-wise differences to obtain a change magnitude that includes pooling “change noise” in unchanged regions; this measure correlated strongly and significantly with detection latencies. We conclude that change blindness in this context arises from a combination of undetectable differences when a change region is outside fixation and, crucially, how much other regions of the image may appear to change.
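The metamer-based metric described above can be sketched in a few lines of numpy. The pooling model, metamer synthesis, and images are replaced here by random-array stand-ins, so the function name and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def change_magnitude(metamer_base, metamer_changed):
    """Summed pixel-wise absolute difference between two metamers.

    Because each metamer is synthesized only to match pooled statistics,
    nominally unchanged regions also differ between the two images, so
    this magnitude folds pooling "change noise" in with the true change
    signal rather than letting unchanged areas cancel.
    """
    diff = np.abs(metamer_base.astype(float) - metamer_changed.astype(float))
    return diff.sum()

# Toy stand-ins for metamerized base and changed images.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
changed = base + rng.normal(0.0, 0.02, base.shape)  # diffuse pooling noise
changed[20:30, 20:30] += 0.5                        # the actual change region
```

On this toy pair, the diffuse noise term contributes alongside the localized change, which is the point: a genuine change must exceed that image-wide baseline to be detectable.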
We further demonstrate through simulation how attention can be conceptualized as capacity-free spatial selection that reduces the impact of general change noise, producing novel predictions for cueing experiments.
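One way to make the capacity-free selection idea concrete is a toy signal-to-noise computation over a pixel-difference map: a spatial cue simply down-weights differences outside the cued region, raising the effective signal-to-noise of the change without storing or processing anything extra. Everything below (the weighting scheme, the 0.1 suppression factor, the SNR definition) is an illustrative assumption, not the simulation reported in the abstract:

```python
import numpy as np

def detection_snr(diff_map, cue_center=None, cue_radius=10):
    """Signal-to-noise of a change signal in a pixel-difference map.

    Without a cue, change noise from the whole image contributes to the
    baseline; with a spatial cue, differences outside the cued disk are
    down-weighted, so the same signal stands out against less noise.
    No capacity limit is involved, only selective weighting.
    """
    h, w = diff_map.shape
    if cue_center is not None:
        yy, xx = np.mgrid[0:h, 0:w]
        inside = (yy - cue_center[0]) ** 2 + (xx - cue_center[1]) ** 2 <= cue_radius ** 2
        weights = np.where(inside, 1.0, 0.1)  # suppress uncued locations
    else:
        weights = np.ones((h, w))
    weighted = diff_map * weights
    return weighted.max() / weighted.mean()

# Toy difference map: diffuse change noise plus one genuine change pixel.
rng = np.random.default_rng(1)
noise_map = rng.random((64, 64)) * 0.1
noise_map[25, 25] = 1.0
```

Under these assumptions, cueing the change location yields a higher SNR than no cue, matching the qualitative prediction that valid cues speed detection by filtering change noise rather than by allocating a limited resource.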