The post-target dip: Detecting targets in a continuous stream boosts memory for target-paired images but impairs memory for the next image

Poster Presentation 56.456: Tuesday, May 19, 2026, 2:45 – 6:45 pm, Pavilion
Session: Attention: Temporal

Teresa P. Pham1, Vanessa G. Lee1; 1University of Minnesota

Visual attention is rarely constant over time. In the attentional boost effect (ABE), participants encode background images while monitoring a concurrent stream of colors for occasional target colors. Memory is better for target-paired than for distractor-paired images, reflecting a transient temporal orienting response. But is this response confined to the target moment, or does it extend to the next image, either facilitating it (akin to the "lag-1 sparing" effect) or impairing it relative to later temporal positions? In four experiments, participants encoded a continuous stream of objects (800 ms/item) for a later memory test while pressing the spacebar for a pre-specified target tone (Experiments 1 and 2) or a pre-specified target colored square (Experiments 3 and 4) in a separate stream. We examined the five nontarget temporal positions after each target, followed by 1-4 additional trials to disrupt target regularity, and varied the nature of the nontarget stimuli to create either a continuous dual-task context (nontargets were distractor tones or colors; "distractors" task) or an occasional dual-task context (nontarget trials were blank; "no distractors" task). We found an attentional boost effect: better memory for target-paired than for nontarget-paired images. The ABE was attenuated when nontargets were blank rather than distractors. Importantly, memory varied systematically across the five nontarget positions immediately following the target: memory was worst at the T+1 position and gradually improved at later positions, revealing a post-target dip followed by gradual recovery. These findings suggest that detecting and responding to behaviorally relevant stimuli produces a temporal ripple, facilitating target-paired images while pulling resources away from the next image. The gradual recovery at later positions may reflect a shift of resources back to the background images.
These results uncover a new temporal dynamic in continuous tasks and have implications for everyday multitasking, such as driving or receiving occasional text messages.

Acknowledgements: McKnight Foundation