The role of motor and auditory predictive cues in modulating neural processing of predicted visual stimuli

Poster Presentation 43.467: Monday, May 22, 2023, 8:30 am – 12:30 pm, Pavilion
Session: Multisensory Processing: Audio-visual, visuo-vestibular

Batel Buaron1, Roy Mukamel1; 1Tel Aviv University

Performance of goal-directed actions requires integration of motor commands with their expected sensory outcomes. A prominent theory suggests that predictions of an action's sensory outcome ('efference copies') are sent to the relevant sensory regions and modulate their neural state, resulting in differential processing of the reafferent sensory signal. However, predictive signals are not unique to actions and have also been found for non-motor sources. It remains an open question whether motor and non-motor predictive signals share common neural mechanisms. Our previous study showed that neural activity in visual regions depends on the hand (Right/Left) used to trigger identical visual stimuli. This phenomenon provides a handle for comparing the mechanisms underlying motor and sensory predictions, by testing whether sensory predictive cues also modulate processing in visual cortex in a laterality-dependent manner. To this end, we used multi-voxel pattern analysis (MVPA) to classify fMRI activity patterns evoked by identical visual stimuli according to the laterality of the preceding cue: either Right/Left button-presses or tones delivered to the Right/Left ear. Preliminary results (n=5) suggest that activity in visual cortex evoked by identical visual stimuli was modulated in a hand-dependent manner, similar to our previous findings. In addition, activity in visual cortex was also modulated in an ear-dependent manner (cue presented to the Right/Left ear). Interestingly, we found little overlap between the cortical patches separating Right/Left motor and auditory cues. This pattern of results suggests that lateralized representation of cues in visual cortex is common to both motor and auditory prediction mechanisms, though the anatomical distribution of these representations differs. We will further examine this by performing a cross-decoding analysis between the two modalities (e.g., training a classifier to separate hands and testing it on separating ears based on signals in visual cortex).
These results will help shape models of predictive mechanisms in visual processing.
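The cross-decoding logic described above can be illustrated with a minimal sketch: train a classifier to separate Right/Left labels on one cue modality and test it on the other. This is not the authors' analysis pipeline; the data are synthetic, the nearest-class-mean classifier is a stand-in for whatever decoder the study uses, and all names (e.g., `make_patterns`, `laterality_signal`) are illustrative assumptions.

```python
# Illustrative cross-decoding sketch with synthetic "voxel" patterns.
# Assumption: a laterality signal shared across motor and auditory cues.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# A hypothetical shared laterality pattern embedded in both modalities.
laterality_signal = rng.standard_normal(n_voxels)

def make_patterns(labels, snr):
    """Synthetic trial-by-voxel patterns: noise plus a signed laterality signal."""
    noise = rng.standard_normal((len(labels), n_voxels))
    return noise + snr * np.outer(labels, laterality_signal)

labels = np.repeat([1, -1], n_trials // 2)   # +1 = Right cue, -1 = Left cue
motor = make_patterns(labels, snr=1.0)       # motor-cue (button-press) trials
auditory = make_patterns(labels, snr=1.0)    # auditory-cue (tone) trials

# Train on the motor modality: nearest-class-mean decoder.
mean_right = motor[labels == 1].mean(axis=0)
mean_left = motor[labels == -1].mean(axis=0)

def predict(pattern):
    d_right = np.linalg.norm(pattern - mean_right)
    d_left = np.linalg.norm(pattern - mean_left)
    return 1 if d_right < d_left else -1

# Cross-decode: test the motor-trained decoder on auditory-cue trials.
preds = np.array([predict(x) for x in auditory])
accuracy = (preds == labels).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

Above-chance cross-decoding accuracy in such an analysis would indicate a representation shared across modalities; chance-level accuracy alongside successful within-modality decoding would indicate distinct representations, consistent with the low spatial overlap reported here.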

Acknowledgements: This research was supported by the Israel Science Foundation (grant No. 2392/19 to R.M.)