An Efficient Multimodal fMRI Localizer for High-Level Visual, Auditory, and Cognitive Regions in Humans

Poster Presentation 36.412: Sunday, May 19, 2024, 2:45 – 6:45 pm, Pavilion
Session: Face and Body Perception: Neural mechanisms 2

Samuel Hutchinson1, Ammar Marvi1, Freddy Kamps1, Emily M. Chen1, Rebecca Saxe1, Ev Fedorenko1, Nancy Kanwisher1; 1Massachusetts Institute of Technology

Although localizers for functional identification of category-selective regions in individual participants are widely used in fMRI research, most have not been optimized for the reliability and number of functionally distinctive regions they can identify, or for the amount of scan time needed to identify these regions. Further, functional localizers for regions in high-level visual cortex do not enable localization of cortical regions specialized for other domains of cognition. Here we attempt to solve these problems by developing a single localizer that enables, in just 23 minutes of fMRI scan time, reliable localization of cortical regions selectively engaged in processing faces, places, bodies, words, and objects, as well as cortical regions selectively engaged in processing speech sounds, language, and theory of mind. To this end, we use a blocked design in which participants watch videos from five visual categories (scenes, faces, objects, words, and bodies) while simultaneously listening to, and performing tasks on, five kinds of auditory stimuli (false belief sentences, false photo sentences, arithmetic problems, nonword strings, and texturized speech). We counterbalance these conditions across five runs of ten blocks each, with each block consisting of one 21-second auditory stimulus and seven three-second videos from a single visual category. Each visual category occurs equally often with each auditory condition, so that contrasts in each modality are not confounded with conditions in the other. Data from ten participants show that this Efficient Multimodal Localizer robustly identifies, within individual participants, cortical regions selectively engaged in processing faces, places, bodies, words, and objects, as well as speech sounds, language, and theory of mind, as tested against established standard localizers for these functions. The stimuli and presentation code for this new localizer will be made publicly available online, enabling future studies to identify functional regions of interest with the same procedure across multiple labs.
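One way to realize the stated counterbalancing (50 blocks in which each of the 25 visual-auditory pairings occurs exactly twice, and each condition appears twice per run) is with two shifted Latin squares. The sketch below is illustrative only, not the authors' released presentation code: the condition labels come from the abstract, while the make_schedule function and the deterministic within-run ordering are assumptions.

```python
from itertools import product

# Condition labels taken from the abstract; everything else here is an
# illustrative assumption, not the authors' released presentation code.
VISUAL = ["scenes", "faces", "objects", "words", "bodies"]
AUDIO = ["false belief", "false photo", "arithmetic",
         "nonwords", "texturized speech"]

N_RUNS = 5  # 5 runs x 10 blocks = 50 blocks; 25 pairings x 2 each


def make_schedule():
    """Pair conditions via two shifted Latin squares so that every
    (visual, auditory) pairing occurs exactly twice per session and
    every condition occurs exactly twice per run."""
    schedule = []
    for run in range(N_RUNS):
        blocks = []
        for extra_shift in (0, 2):  # two squares, offset so pairings differ within a run
            for v in range(len(VISUAL)):
                a = (v + run + extra_shift) % len(AUDIO)
                blocks.append((VISUAL[v], AUDIO[a]))
        # In practice the 10 blocks would presumably be ordered
        # pseudo-randomly within each run; kept deterministic here.
        schedule.append(blocks)
    return schedule


# Sanity check: each of the 25 pairings occurs exactly twice across 50 blocks.
pairs = [p for run in make_schedule() for p in run]
assert len(pairs) == 50
assert all(pairs.count(p) == 2 for p in product(VISUAL, AUDIO))
```

Each scheduled block would then present its 21-second auditory stimulus concurrently with seven 3-second videos drawn from the block's visual category, matching the timing described above.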

Acknowledgements: This work was supported by NIH grant 1R01HD103847-01A1 (awarded to RS) and NIH grant 5UM1MH130981.