Retrospective linking in visual statistical learning
Poster Presentation 43.471: Monday, May 18, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Perceptual Training, Learning and Plasticity: Statistical learning
Sophie D. Allen1, Nicholas B. Turk-Browne1,2; 1Yale University, 2Wu Tsai Institute
Combining visual information across experiences enables novel inferences that support flexible behavior. Much of our understanding of how information is combined across experiences comes from studies of integrative encoding, in which related information is retrieved during a new experience and linked to that experience in memory. However, this mechanism cannot account for situations in which seemingly unrelated experiences are linked retrospectively in light of new information. Here, we tested whether visual statistical learning supports such retrospective linking to enable multi-step inferences. Participants viewed a continuous stream of objects containing embedded pairs while performing a size-judgment cover task. Unbeknownst to them, the sequence was structured into tetrads presented across two blocks: initially unrelated pairs in Block 1 (AB, CD) were followed by a bridging pair in Block 2 (BC). This structure allowed us to test not only whether the directly experienced pairs (AB, CD, BC) were learned but also whether the bridging pair retrospectively linked the initially unrelated pairs to enable a two-step inference (AD). In a two-alternative forced-choice familiarity test, participants showed high accuracy for the directly experienced pairs, indicating successful visual statistical learning. However, accuracy for the two-step inference was substantially lower and not significantly greater than chance. This result stands in striking contrast with a second experiment, in which retrospective linking occurred robustly in an explicit associative inference task where participants were instructed to learn the associations. Thus, when associations are learned more implicitly during visual statistical learning, overlapping experiences do not automatically integrate earlier object associations. This dissociation between explicit and implicit learning suggests that multiple mechanisms can support the combination of experiences over time.