Dynamic Perception: Synergy between Grouping, Retinotopic Masking, and Non-retinotopic Feature Attribution
63.330, Wednesday, 21-May, 8:30 am - 12:30 pm, Jacaranda Hall
Haluk Ogmen1,2, Michael Herzog3, Babak Noory1,2; 1Dept. of ECE, University of Houston, 2Center for Neuro-Engineering & Cognitive Science, University of Houston, 3Laboratory of Psychophysics, Brain Mind Institute, EPFL
Purpose: Due to visible persistence, moving objects should appear highly blurred, with their features blending with those of other objects or of the background. This does not occur under normal viewing conditions. We proposed that clarity of vision is achieved through a synergy between grouping, retinotopic masking, and non-retinotopic feature attribution. Here, we investigated the retinotopy of visual masking, non-retinotopic feature attribution, and their relationship to perceptual grouping.
Methods: We used a radial Ternus-Pikler display (TPD) in which the target and mask were positioned either according to retinotopic coordinates (retinotopic mask) or according to non-retinotopic grouping (non-retinotopic mask). Two ISIs were used to generate element and group motion percepts in the TPD. In Experiment 1, we used a metacontrast mask that produced non-monotonic (type-B) masking. In Experiment 2, we used a structure mask that produced monotonic (type-A) masking. To study feature attribution, in Experiment 3 we made the direction of the TPD predictable. In all experiments, observers kept steady fixation at the center of the display, and eye movements were monitored in control experiments.
Results: The retinotopic-masking hypothesis predicts masking effects only for retinotopic masks, for both element and group motion percepts in the TPD. In contrast, the non-retinotopic-masking hypothesis predicts masking effects for retinotopic masks only in element motion percepts and for non-retinotopic masks only in group motion percepts. Our results are consistent with retinotopic masking, for both metacontrast and structure masks and for both type-A and type-B masking functions. In Experiment 3, the retinotopic mask maintained its masking effect in the element motion percept but not in the group motion percept, indicating effective non-retinotopic feature attribution in the latter case.
Conclusions: Our results suggest that retinotopic masking controls motion blur while non-retinotopic feature attribution allows the computation of form across space and time.
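As an illustration of the ISI manipulation described in the Methods, the two-frame timing of a Ternus-Pikler display can be sketched as below. The frame duration, ISI threshold, and element positions are illustrative assumptions, not the parameters used in the study; the only grounded facts are that the TPD alternates two frames of shifted elements and that short ISIs typically yield element motion while longer ISIs yield group motion.

```python
# Hypothetical sketch of Ternus-Pikler display (TPD) frame timing.
# Frame durations, positions, and the ISI threshold are illustrative
# assumptions, not the parameters of the reported experiments.

def tpd_schedule(isi_ms, frame_ms=100, n_cycles=2):
    """Return a list of (onset_ms, element_positions) events.

    Frame A shows three elements at positions 0, 1, 2; frame B shows
    them shifted by one position, at 1, 2, 3. The two frames alternate,
    separated by the given inter-stimulus interval (ISI).
    """
    events, t = [], 0
    for _ in range(n_cycles):
        for elements in ([0, 1, 2], [1, 2, 3]):
            events.append((t, elements))
            t += frame_ms + isi_ms
    return events

def likely_percept(isi_ms, threshold_ms=50):
    """Short ISIs typically yield element motion (only the outer element
    appears to jump); longer ISIs yield group motion (all three elements
    appear to move together). The crossover value is an assumption."""
    return "element motion" if isi_ms < threshold_ms else "group motion"
```

For example, `likely_percept(0)` returns `"element motion"` and `likely_percept(100)` returns `"group motion"`, mirroring the two percepts the study used to dissociate retinotopic from non-retinotopic masking.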