Representational momentum in 2D visual feature space follows a Feature-Selection strategy
Poster Presentation 56.436: Tuesday, May 19, 2026, 2:45 – 6:45 pm, Pavilion
Session: Perceptual Organization: Neural mechanisms, models
Sedthapong Chunamchai1,2,3,4, Nithit Singtokum2,5, Pakapon Suesatchapong3, Anthipa Chokesuwattanaskul2,3,4, Chaipat Chunharas2,3,4; 1Medical Sciences Program, Division of Neuroscience, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand, 2Cognitive Clinical and Computational Neuroscience Center of Excellence, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand, 3Division of Neurology, Department of Medicine, Faculty of Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, Thailand, 4Chula Neuroscience Center, King Chulalongkorn Memorial Hospital, Bangkok, Thailand, 5Department of Physiology, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
Representational momentum (RM) describes a robust perceptual bias in which memory for a changing stimulus is displaced forward along its trajectory. Although well established for single-feature representations, less is known about how RM operates in multidimensional feature spaces and whether the effect reflects integration of information across features or reliance on a single dominant one. We examined RM in tasks where visual stimuli changed either along one dimension—size (log scale) or orientation (absolute degrees)—or along both simultaneously. Participants completed 288 one-dimensional (1D) trials and 384 two-dimensional (2D) trials. Stimulus trajectories were created as linear paths in latent 1D or 2D space. On each trial, participants viewed three sequential stimuli sampled along the trajectory, with step length manipulated across low, medium, and high levels, and then predicted a fourth stimulus in a two-alternative forced-choice (2AFC) response. Foils followed a 2×2 design crossing deviation direction (Overestimation vs. Underestimation) with target distance (Near vs. Far). Twenty-three adults participated. In 1D trials, accuracy was higher for orientation than for size changes (t = 9.0, p < 0.001), and accuracy on 2D trials exceeded mean 1D accuracy (t = 6.3, p < 0.001). Across conditions, near-target foils elicited more errors than far foils (F = 158.2, p < 0.001). RM was robust in both 1D and 2D conditions (F = 30.7, p < 0.001). Notably, RM magnitude decreased as step length increased (F = 22.8, p < 0.001), suggesting that forward displacement reflects an absolute representational mechanism rather than proportional scaling with trajectory size. To test whether 2D decisions reflect feature integration, we modeled 2D accuracy from 1D performance. A Feature-Selection model—predicting performance from the more reliable single feature—significantly outperformed a Probability-Summation model (t = 28.1, p < 0.001).
This indicates that participants predominantly relied on the most diagnostic feature rather than combining information across dimensions.
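The abstract does not spell out the model equations, but the two candidate models have standard textbook forms, which the sketch below illustrates. The specific function names, accuracy values, and the guessing-correction step are assumptions for illustration, not the authors' implementation.

```python
def feature_selection(p_size: float, p_orientation: float) -> float:
    """Observer relies on whichever single feature is more reliable:
    predicted 2D accuracy equals the better 1D accuracy."""
    return max(p_size, p_orientation)

def probability_summation(p_size: float, p_orientation: float) -> float:
    """Independent detection on each feature; the observer is correct
    if either channel succeeds. For 2AFC data, accuracies are first
    corrected for the 0.5 guessing rate (an assumed convention here)."""
    d_size = max(0.0, 2 * p_size - 1)         # 2AFC guessing correction
    d_ori = max(0.0, 2 * p_orientation - 1)
    d_both = 1 - (1 - d_size) * (1 - d_ori)   # success on either channel
    return 0.5 + 0.5 * d_both                 # back to 2AFC accuracy

# Hypothetical 1D accuracies: orientation more reliable than size
p_size, p_ori = 0.70, 0.85
print(feature_selection(p_size, p_ori))       # 0.85
print(probability_summation(p_size, p_ori))   # 0.91
```

Note that probability summation always predicts accuracy at or above the better single feature, so finding that Feature-Selection fits better implies observers gained little or nothing from the second dimension.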