Representation of auditory motion in hMT+ of early blind individuals

Poster Presentation 53.432: Tuesday, May 19, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Motion: Mechanisms, models

Yang YANG1, Kelly Chang2, Ione Fine2, Woon Ju Park1; 1Georgia Institute of Technology, 2University of Washington

INTRODUCTION: hMT+ responds to auditory motion in early blind people. However, it remains unknown whether these responses engage spatiotemporally non-separable motion-selective mechanisms analogous to those that mediate visual motion processing in the sighted. Recent evidence suggests that both sighted and early blind listeners rely on separable perceptual filters that emphasize stimulus onsets and offsets to hear auditory motion (Park and Fine, 2023). Here, we tested whether early blind hMT+ fMRI responses are primarily determined by onsets and offsets, or instead increase monotonically with motion coherence, as predicted by non-separable motion mechanisms.

METHODS: Seven early blind and eight sighted adults discriminated the direction of auditory motion stimuli that varied in motion coherence. Stimuli consisted of nine broadband noise bursts (500-14000 Hz, 90 ms each). At 100% coherence, the bursts changed location coherently over time, producing continuous motion. At 50% coherence, the first two and last two bursts matched the coherent sequence, and the middle bursts were shuffled. At 0% coherence, all bursts were shuffled. BOLD responses were measured in functionally defined Planum Temporale (PT) and hMT+.

RESULTS: Within PT, a region associated with auditory motion processing, fMRI responses were strongest at 0% coherence and weakest at 100% coherence in both groups. In hMT+, responses to auditory motion were weak in sighted individuals, whereas in early blind individuals the pattern of responses resembled that in PT. An encoding model analysis revealed that these results were best explained by a model based on spatially localized filters tuned to stimulus onsets and offsets.

CONCLUSIONS: The recruitment of hMT+ for auditory motion in early blind individuals does not rely on mechanisms analogous to those used for visual motion in the sighted. Instead, hMT+ adopts novel computations, similar to those found in PT, that are optimized for extracting auditory motion.
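The coherence manipulation described in METHODS can be sketched as follows. This is a minimal illustration, not the actual stimulus code: the burst locations are placeholder indices along an assumed left-to-right trajectory, and the shuffling scheme simply follows the verbal description (keep the first two and last two bursts at 50% coherence, shuffle everything at 0%).

```python
import random

def burst_locations(coherence, n_bursts=9, seed=0):
    """Illustrative sketch of the nine-burst coherence manipulation.

    Locations are abstract position indices (0..n_bursts-1) standing in
    for azimuthal burst positions; actual positions are not specified here.
    """
    rng = random.Random(seed)
    coherent = list(range(n_bursts))  # fully coherent trajectory (assumed monotonic)
    if coherence == 1.0:
        # 100% coherence: bursts change location coherently over time
        return coherent
    if coherence == 0.5:
        # 50% coherence: first two and last two bursts match the coherent
        # sequence; the middle bursts are shuffled
        middle = coherent[2:-2]
        rng.shuffle(middle)
        return coherent[:2] + middle + coherent[-2:]
    if coherence == 0.0:
        # 0% coherence: all bursts are shuffled
        shuffled = coherent[:]
        rng.shuffle(shuffled)
        return shuffled
    raise ValueError("only 0%, 50%, and 100% coherence were used in the study")
```

Note that every sequence is a permutation of the same nine locations, so the conditions differ only in spatiotemporal ordering, not in the set of positions stimulated.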

Acknowledgements: NEI R00 EY034546 and Georgia Tech Smithgall-Watts Early Career Award to WP; NEI R01EY014645 to IF