Cross-modal feature-based attention facilitates spatial transfer of perceptual learning in motion-domain figure-ground segregation

Poster Presentation 43.465: Monday, May 22, 2023, 8:30 am – 12:30 pm, Pavilion
Session: Multisensory Processing: Audio-visual, visuo-vestibular

Catherine A. Fromm1, Krystel R. Huxlin2,3, Gabriel J. Diaz1,3; 1Rochester Institute of Technology Center for Imaging Science, 2Flaum Eye Institute, University of Rochester Medical Center, 3University of Rochester Center for Visual Science

This study tested the role of a cross-modal, feature-based attention (FBA) cue in perceptual learning and its spatial transfer. The trained task was figure-ground segregation in the motion domain. The experiment comprised a pre-test, ten days of training, and a post-test. Twelve visually intact participants were immersed in a virtual environment delivered via an HTC Vive Pro Eye headset. Participants identified the location and motion direction (MD) of a peripheral 10° aperture of semi-coherently moving dots embedded at randomized locations within whole-field random-dot motion. The aperture contained both randomly moving dots and signal dots with global leftward or rightward motion. To manipulate motion coherence, a 3-up-1-down staircase adjusted the direction range of the signal dots in response to segregation judgments. The dot stimulus was preceded by a 1-s, spatialized white-noise auditory cue emitted either from the fixation point (neutral group) or from an emitter moving in the direction of the signal dots at 80°/s along a horizontal arc centered on the fixation point (FBA-cue group). Visual feedback indicated the selected and true aperture locations and the correctness of the MD judgment. Analysis measured both MD discrimination within the aperture and segregation ability, each expressed as a direction range threshold (DRT). At trained locations, MD DRT improved similarly in the FBA and neutral groups; learning was retained when the pre-cue was removed (ΔDRT from pre-test to post-test: 61±10° (SD) FBA, 74±10° neutral) and transferred to untrained locations (41±10° FBA, 45±10° neutral). DRT for localization also improved in both groups when pre-cues were removed (49±10° FBA, 44±10° neutral), but only the FBA group showed full transfer of learning to untrained locations in the segregation task (32±10° FBA, 23±10° neutral). In summary, transfer occurred for both the MD and segregation tasks, but segregation transfer required the presence of the cross-modal FBA cue during training.
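For readers unfamiliar with adaptive psychophysics, the 3-up-1-down staircase mentioned above can be sketched as follows. This is a minimal illustration, not the authors' actual code: the starting direction range, step size, and reversal bookkeeping are hypothetical, and the rule shown is the generic one (three consecutive correct responses make the task harder by widening the signal dots' direction range; a single error makes it easier).

```python
class Staircase:
    """Generic 3-up-1-down staircase over direction range (degrees).

    Larger direction range = less coherent signal = harder trial.
    Start value and step size below are illustrative assumptions,
    not parameters reported in the abstract.
    """

    def __init__(self, start=90.0, step=10.0, max_range=360.0):
        self.value = start            # current direction range (deg)
        self.step = step              # hypothetical fixed step (deg)
        self.max_range = max_range
        self.correct_streak = 0
        self.reversals = []           # values where direction flipped
        self._last_dir = 0            # +1 = last move harder, -1 = easier

    def update(self, correct: bool):
        """Feed one trial's outcome; adjust difficulty per 3-up-1-down."""
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 3:   # 3 correct in a row -> harder
                self.correct_streak = 0
                self._move(+1)
        else:
            self.correct_streak = 0        # any error -> easier
            self._move(-1)

    def _move(self, direction):
        # A change of direction is a "reversal"; thresholds are often
        # estimated by averaging the values at the last few reversals.
        if self._last_dir and direction != self._last_dir:
            self.reversals.append(self.value)
        self._last_dir = direction
        self.value = min(self.max_range,
                         max(0.0, self.value + direction * self.step))
```

This rule converges near 79% correct performance; the direction range values at the reversal points provide an estimate of the direction range threshold (DRT) reported in the abstract.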

Acknowledgements: NIH 1R15EY031090