Comparing auditory and visual category learning

Poster Presentation: Tuesday, May 21, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Plasticity and Learning: Properties

Casey L. Roark1; 1University of New Hampshire

Introduction: Categorization is a fundamental skill that spans the senses. Categories enable quick identification of visual objects in our surroundings and of phonemes and words in spoken speech. While categories are ubiquitous across modalities, the amodal and modality-specific mechanisms of perceptual category learning are not well understood. I investigated learning of artificial auditory and visual categories that shared a higher-level unidimensional rule structure. If learners build amodal category representations, they should benefit from simultaneously learning categories from different modalities that share a higher-level structure. If learners build representations separately across modalities, their learning should be either unaffected or impaired by simultaneously learning categories from different modalities.

Methods: Learners were randomly assigned to learn two auditory and two visual categories either simultaneously (interleaved) or separately (blocked). The higher-level category structure was the same across modalities – learning required selective attention to one dimension (temporal modulation, spatial frequency) while ignoring a category-irrelevant dimension (spectral modulation, orientation). After 400 training trials (interleaved: auditory and visual together; blocked: auditory then visual or vice versa), participants completed two separate generalization test blocks, one per modality (counterbalanced order).

Results: When learning categories separately, accuracies did not differ across modalities, indicating that the categories were well matched for difficulty. When learning categories simultaneously, learners were significantly more accurate for visual than for auditory categories. Importantly, there were no significant differences in test performance between the blocked and interleaved training conditions in either modality.
Conclusion: These results indicate that learners build separate, modality-specific representations even when learning auditory and visual categories simultaneously. Further, learners do not exploit the shared amodal structure of categories across modalities to facilitate learning. These results have important implications for understanding the learning of real-world categories, which are often multimodal, and highlight the importance of considering the role of modality in models of category learning.