Integrating Impaired Vision and Hearing to Improve Spatial Localization

Poster Presentation 23.360: Saturday, May 18, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Multisensory Processing: Development, clinical

Yingzi Xiong1,2, Quan Lei3, Shirin Hassan4, Daniel Kersten2, Gordon Legge2; 1Johns Hopkins University, 2University of Minnesota, 3Wichita State University, 4Indiana University

Introduction. Spatial localization, which is critical for safe mobility and social interactions, relies heavily on vision and hearing. When vision and/or hearing impairment occurs, integrating the two senses may maximize the use of the residual senses. However, such impairment is often accompanied by degraded sensory input and unstable sensory status, which may affect the integration process. Here we investigated the integration of vision and hearing in a spatial localization task in individuals with heterogeneous vision and hearing impairment.

Methods. Eighty-five participants completed a spatial localization task: 36 younger and 13 older controls with normal vision and hearing, 10 with hearing impairment only, 13 with vision impairment only, and 13 with dual vision and hearing impairment. Participants verbally reported the direction of visual (200 ms, 3 deg diameter, 90% contrast), auditory (200 ms pink noise, 200-8000 Hz, 60 dB Hearing Level), or audiovisual targets (visual and auditory stimuli presented simultaneously from the same location) at 17 locations spanning 180 degrees in the horizontal plane. Spatial biases (offsets) and uncertainties (variability) were obtained for each location in each condition.

Results. Vision and hearing impairments were each associated with increased biases and uncertainties in unimodal localization, producing large variations across locations and individuals. To reconcile these variations, we identified individualized integration zones and segregation zones based on whether the audiovisual discrepancy at each location supported a common-cause inference. Across all locations, participants with sensory impairment, especially those with dual sensory impairment, showed fewer integration zones than controls. However, the benefit of integration (reduced uncertainty in the bimodal condition) within integration zones, and its absence in segregation zones, was consistent across all groups.

Conclusion. Impairments in vision and hearing reduce the likelihood of inferring a common cause when localizing a bimodal target. However, the advantage of integration persists when the criteria for a common cause are satisfied.
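The abstract does not specify the model behind the integration and segregation zones. As a rough illustration only, the Python sketch below combines the standard maximum-likelihood (reliability-weighted) cue-integration prediction for the bimodal uncertainty reduction with a toy common-cause criterion based on the audiovisual discrepancy; all function names, the threshold k, and the numeric values are hypothetical, and the authors' actual analysis may differ.

```python
import math

def bimodal_sigma(sigma_v: float, sigma_a: float) -> float:
    """Predicted bimodal uncertainty under maximum-likelihood
    (reliability-weighted) integration:
        sigma_av^2 = sigma_v^2 * sigma_a^2 / (sigma_v^2 + sigma_a^2)
    This is always <= min(sigma_v, sigma_a): the integration benefit."""
    return math.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))

def supports_common_cause(bias_v: float, bias_a: float,
                          sigma_v: float, sigma_a: float,
                          k: float = 2.0) -> bool:
    """Toy criterion (hypothetical, not the authors' exact rule):
    call a location an 'integration zone' when the audiovisual
    discrepancy is small relative to the combined unimodal
    uncertainty: |bias_v - bias_a| < k * sqrt(sigma_v^2 + sigma_a^2)."""
    return abs(bias_v - bias_a) < k * math.hypot(sigma_v, sigma_a)

# Hypothetical values (degrees) for one location where vision is noisier than hearing.
bias_v, sigma_v = 4.0, 6.0   # visual offset and variability
bias_a, sigma_a = 1.0, 3.0   # auditory offset and variability

if supports_common_cause(bias_v, bias_a, sigma_v, sigma_a):
    print(f"integration zone: predicted bimodal sigma = "
          f"{bimodal_sigma(sigma_v, sigma_a):.2f} deg "
          f"(vs. best unimodal {min(sigma_v, sigma_a):.2f} deg)")
else:
    print("segregation zone: no integration benefit expected")
```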

Acknowledgements: NIH R00 EY030145