Multisensory Processing
Talk Session: Sunday, May 21, 2023, 8:15 – 9:45 am, Talk Room 2
Moderator: Patrizia Fattori, University of Bologna
Talk 1, 8:15 am, 31.21
Visual cortical regions carry information about auditory attention
Abigail Noyce1, Weizhe Guo1, Wenkang An2, Barbara Shinn-Cunningham1; 1Carnegie Mellon University, 2Boston Children's Hospital
So-called visual regions of the human brain often participate in tasks that have no visual elements. Visual-biased frontal and some parietal regions are active during spatial auditory tasks (Michalka 2015, 2016; Deng 2019); visual-biased frontal regions also support non-spatial auditory cognition (Noyce 2017, 2021). Here, we used functional magnetic resonance imaging (fMRI) and representational similarity analysis (RSA) to compare spatial with non-spatial attention in the visual cortical network. On each trial, subjects were first cued to use spatial attention, non-spatial attention, or passive listening, then cued to the exact target feature (a location or pitch). Four temporally overlapping syllables, spoken by different talkers and spatialized to different locations, were presented, and subjects reported the target’s identity (/ba/, /da/, or /ga/). After preprocessing, fMRI data were fitted with a separate general linear model for each trial, including a regressor for that trial and nuisance regressors for each attention condition (Turner 2012), yielding trial-wise activation maps. For each subject, we defined anatomical regions of interest (ROIs), then trained support vector machines (SVMs) for pairwise classification of all conditions. SVM classifier accuracy measures the dissimilarity between conditions within that ROI. The resulting representational dissimilarity matrices summarize the information about task state encoded in each ROI. Active attention could be classified from passive listening across a broad network of brain areas, including visual-biased superior and inferior precentral sulcus (sPCS, iPCS) and superior parietal lobule (SPL), as well as auditory-biased superior frontal gyrus, superior temporal gyrus, and planum temporale. All of these regions also encoded spatial versus non-spatial attention. Bilateral SPL, right sPCS, and bilateral calcarine sulcus encoded the direction of spatial attention (left vs. right). These results further demonstrate that brain regions within the well-established visual processing network can be more generally recruited, especially for spatial processing.
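A minimal sketch of the pairwise-SVM dissimilarity analysis described above, assuming a scikit-learn pipeline; the function name, array shapes, and cross-validation settings are illustrative assumptions, not the authors' actual code:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def pairwise_svm_rdm(trial_betas, labels, n_folds=5):
    """Build a representational dissimilarity matrix (RDM) for one ROI.

    trial_betas : (n_trials, n_voxels) trial-wise GLM activation patterns
    labels      : (n_trials,) attention-condition label for each trial
    Returns an (n_conditions, n_conditions) matrix of cross-validated
    pairwise classification accuracies (chance = 0.5; higher = more dissimilar).
    """
    conditions = np.unique(labels)
    rdm = np.zeros((len(conditions), len(conditions)))
    for i, j in combinations(range(len(conditions)), 2):
        # Select only the trials belonging to this pair of conditions.
        mask = np.isin(labels, [conditions[i], conditions[j]])
        accuracy = cross_val_score(LinearSVC(), trial_betas[mask], labels[mask],
                                   cv=n_folds).mean()
        rdm[i, j] = rdm[j, i] = accuracy
    return rdm
```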
Acknowledgements: Supported by the Office of Naval Research, Grant/Award Number: N00014-20-1-2709.
Talk 2, 8:30 am, 31.22
Visual vs. Auditory Landmark for Vestibular Self-motion Perception
Silvia Zanchi1,2,3,5, Luigi Felice Cuturi1,4, Giulio Sandini3, Monica Gori1, Elisa Raffaella Ferrè5; 1Unit of Visually Impaired People, Italian Institute of Technology, Genoa, Italy, 2DIBRIS Department, University of Genoa, Italy, 3Robotics Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy, 4Department of Cognitive, Psychological, Pedagogical Sciences and of Cultural Studies, University of Messina, Messina, Italy, 5Department of Psychological Sciences, Birkbeck, University of London, London, UK
Spatial navigation requires us to precisely perceive our position and the spatial relationships between our own location and those of objects in the environment. As we move through the environment, multiple cues convey congruent spatial information: we rely both on inertial vestibular self-motion signals and on visual and auditory landmarks. Here we directly investigated the perceptual interaction between inertial cues and environmental landmarks. Twenty-six healthy participants sat on a chair in a darkened room, leaning on a chin rest. On each trial of a self-motion detection task, we delivered a Galvanic Vestibular Stimulation (GVS) pulse or a sham pulse (0.7 mA amplitude, 250 ms duration). Critically, GVS activates the peripheral vestibular organs, i.e., the otolith and semicircular canal afferents, eliciting a sensation of self-motion (roll tilt). However, the chosen stimulation parameters induce a relatively weak virtual sensation of roll rotation. To test whether self-motion sensitivity could be aided by an environmental cue, participants performed the detection task, in different blocks of trials, with or without an external visual landmark (a red LED light) or an auditory landmark (pink noise emitted by a loudspeaker), each placed in front of them. Participants’ ability to detect the virtual vestibular-induced self-motion sensation with and without a landmark was measured using a signal detection approach. We computed d-prime as a measure of participants’ sensitivity and the criterion as an index of their response bias. Results showed that sensitivity to detect self-motion was higher in the presence of the visual landmark, but not in the presence of the auditory one. The response bias remained unaffected. This finding shows that visual signals from the environment provide relevant information that enhances our ability to perceive inertial self-motion cues, suggesting a specific interaction between the visual and vestibular systems in self-motion perception.
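A minimal sketch of the signal-detection analysis described above (d-prime and criterion from hit and false-alarm rates on GVS versus sham trials); the correction for extreme rates and the trial counts in the usage example are illustrative assumptions:

```python
from scipy.stats import norm

def dprime_criterion(n_hits, n_signal, n_false_alarms, n_noise):
    """Sensitivity (d') and response bias (criterion) from trial counts."""
    # Keep rates away from 0 and 1 so the z-transform stays finite.
    hit_rate = min(max(n_hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    fa_rate = min(max(n_false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # sensitivity
    criterion = -0.5 * (z_hit + z_fa)    # response bias
    return d_prime, criterion

# Hypothetical example: 40 GVS trials with 28 hits, 40 sham trials with 8 false alarms.
print(dprime_criterion(28, 40, 8, 40))
```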
Acknowledgements: This work was supported by a Bial Foundation grant (041/2020) to E.R.F. and by the MYSpace project (principal investigator: M.G.) from the European Research Council (Grant 948349). S.Z. was also supported by a UK Experimental Psychology Grant.
Talk 3, 8:45 am, 31.23
Effect of subjective visual awareness on multisensory integration: evidence from behavioural data and computational modelling
Sanni Ahonen1, Thomas Otto2, Arash Sahraie1; 1University of Aberdeen, 2University of St Andrews
Multisensory stimuli are processed with higher speed and accuracy than would be expected from responses to the unisensory components alone. This redundant signal effect (RSE) holds both for healthy populations and for individuals in whom stimulus detection and awareness are dissociated due to neurological complications. Pairing visual targets with concurrent stimulation in another modality may improve subjective visual awareness, but the mechanisms for such interactions are unclear. In a behavioural experiment we investigated the role of subjective visual awareness in the multisensory RSE. Continuous Flash Suppression (CFS) was used to reduce awareness of visual targets. Participants were asked to press a response button if they saw or heard audio-visual stimuli. A visual target and an auditory tone were presented either alone or simultaneously. Speed and accuracy of stimulus detection were measured alongside subjective visual awareness. A benefit was seen in multisensory trials across all behavioural measures, but only when participants were consciously aware of the visual target. The incidence rate of subjective awareness of the visual target was not affected by the simultaneous presentation of an auditory target. To further investigate the reaction time data, a Shifted Wald (SW) model was applied to the pooled RT distributions. Computational modelling revealed lower gain and threshold parameters in the reaction time distributions of unaware multisensory trials compared to aware multisensory trials. These findings indicate that, for a given audio-visual stimulus, the state of conscious visual awareness can lead to differences in reaction times, and that computational modelling is a useful tool for exploring differences in the characteristics of the underlying cognitive functions engaged by multisensory stimuli.
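A minimal sketch of fitting a Shifted Wald model to a pooled RT distribution by maximum likelihood, using scipy's inverse-Gaussian distribution and converting its parameters to the Wald gain, threshold, and shift; the simulated data, parameter values, and variable names are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.stats import invgauss

def fit_shifted_wald(rts):
    """rts: 1-D array of reaction times (seconds).
    Returns (gain, threshold, shift) of the best-fitting Shifted Wald."""
    mu, loc, scale = invgauss.fit(rts)       # maximum-likelihood fit; loc is the shift
    # scipy parameterisation: mean = mu * scale, lambda = scale.
    # Wald(gain g, threshold a): mean = a / g, lambda = a**2.
    threshold = np.sqrt(scale)
    gain = threshold / (mu * scale)
    return gain, threshold, loc

# Example with simulated RTs (shift 0.2 s, gain 3, threshold 1).
rng = np.random.default_rng(0)
simulated_rts = 0.2 + invgauss.rvs(mu=1/3, scale=1.0, size=500, random_state=rng)
print(fit_shifted_wald(simulated_rts))
```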
Acknowledgements: This work was supported by the Biotechnology and Biological Sciences Research Council (BBSRC) [grant number BB/M010996/1]
Talk 4, 9:00 am, 31.24
Interactions of body representations in rubber hand illusion and tool-use paradigms
Inci Ayhan1, Alp Erkent1, Emre Ugur1, Erhan Oztop2,3; 1Bogazici University, Istanbul, Turkey, 2Ozyegin University, Istanbul, Turkey, 3Osaka University, Japan
In the neuropsychological literature, numerous case studies suggest two separate body representations in the brain: one for perception, called the body image, and one for action, called the body schema. Rubber hand illusion and tool-use paradigms have been used frequently over the last twenty-five years to investigate these two body representations, respectively, with minimal overlap between the fields. However, interactions between these paradigms are probable, considering the common sensory modalities targeted by the techniques used for measuring their effects. Here, we combined rubber hand illusion and tool-use paradigms in a novel behavioral experimental setup (N=72) and comparatively examined the resulting changes in body representations through measures of forearm bisection (for body schema), proprioceptive drift, and a subjective experience questionnaire (for body image). Specifically, after a tool-use task in which subjects actively used a grabber tool with their right hand to move cubes closer to or away from their body, we observed a change in the metric representation of the right forearm's length that depended on the length of the tool used (using the long tool increased perceived forearm length). Following the tool-use task, the “tool-holding” rubber hand illusion, in which the experimenter stroked the tip of the tool held by the rubber hand either synchronously or asynchronously with the tool held by the subject, also resulted in perceived forearm elongation when the subject observed a longer tool held by the rubber hand. Follow-up experiments showed that the forearm elongation effect occurring during the rubber hand illusion depended on prior active use of the tool, on embodiment of the observed hand and tool, and on a length disparity between the held and observed tools. Together, these results reveal that the representation of forearm length, a component of the body schema, can be modified through changes in body image.
Acknowledgements: Bogazici University Research Grant No: 19341
Talk 5, 9:15 am, 31.25
Sensorimotor reorganization in visual cortex in brain-damaged individuals with primary somatosensory damage
Jared Medina1, Yuqi Liu1,2, Elizabeth J. Halfen3, Jeffrey M. Yau3, Simon Fischer-Baum4, Peter Kohler5, Olufunsho Faseyitan6, H. Branch Coslett6; 1University of Delaware, 2Chinese Academy of Neuroscience, 3Baylor College of Medicine, 4Rice University, 5York University, 6University of Pennsylvania
Cortical reorganization after lesions to primary somatosensory cortex (S1) has been studied extensively in animal models, yet little is known about post-stroke sensorimotor plasticity in humans. We examined two brain-damaged individuals, LS and RF, who suffered lesions in the hand area of right S1 and posterior parietal cortex with spared motor cortex. Behavioral investigations revealed the expected deficits in tactile detection and tactile localization on the contralesional limb in both patients, while the ability to perform simple hand movements was preserved. We then conducted functional neuroimaging experiments in which they received tactile stimulation or performed hand movements (opening and closing the fist). For LS, whose damage encompassed the entire hand area of S1, tactile stimulation of the contralesional right hand activated the secondary somatosensory area (S2) and middle temporal gyrus. RF’s lesion was slightly different, with a spared strip of S1 along the posterior bank of the central sulcus. When presented with tactile stimuli, she showed activation in spared S1. When moving the contralesional hand, RF showed bilateral activity in sensorimotor cortex, whereas LS showed stronger activation in bilateral putamen and deactivation in ipsilateral cerebellum, indicating reweighting in the motor system. Surprisingly, when moving the contralesional hand, both patients showed significantly greater activation in lateral occipital cortex compared to eight age-matched controls. These results suggest the recruitment of body representations in visual areas after damage to somatosensory cortex, either from increased visual imagery of the limbs given reduced sensory feedback, or from some form of post-stroke plasticity and/or reweighting.
Acknowledgements: This material is based upon work supported by the National Science Foundation under Grant No. 1632849.
Talk 6, 9:30 am, 31.26
Narrative, not low-level vision, synchronises audiences during television viewing
Hugo Hammond1, Michael Armstrong2, Graham Thomas2, Edwin Dalmaijer1, Iain Gilchrist1; 1University of Bristol, 2BBC Research and Development, UK
Cinematic media (e.g., film, television) possess a remarkable ability to synchronise audiences' neural, behavioural, and physiological responses. This synchrony, sometimes termed the 'tyranny of film', is largely considered to arise from low-level visual features and editing conventions. Recently, evidence has suggested that synchrony may also emerge from shared interpretation of narrative. However, no study to date has assessed the relative contributions of narrative and low-level features to synchrony. We designed a study in which participants (n=60) were presented with a 55-minute episode of the BBC television drama The Tourist. Content was presented in one of two modalities: audio-only with audio description, or visual-only with subtitles. In this way, the presentations shared no low-level features, but participants experienced the same narrative. During the sessions, we recorded participants' heart rate and computed synchrony from this physiological measure using intersubject correlation analysis. We found evidence that synchrony was higher in the audio than in the visual condition; however, synchrony computed between groups (i.e., driven by the shared narrative alone) did not differ significantly from synchrony within either the visual or the audio condition. Further, when modelling heart rate, 22% of variance could be explained by narrative, compared to 1.7% by low-level saliency. Saliency was derived from Itti, Koch, and Niebur's (1998) saliency model for the visual track and from the root-mean-square energy of the audio track. Our results provide strong support for the idea that processing of a narrative can lead to markedly similar physiological responses across an audience. This effect is likely high-level and cannot be explained by visual or auditory salience alone.
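A minimal sketch of the intersubject correlation analysis described above, computed here as the mean pairwise Pearson correlation between participants' heart-rate time courses; the array shape, sampling rate, and simulated data are illustrative assumptions:

```python
import numpy as np
from itertools import combinations

def intersubject_correlation(heart_rate):
    """heart_rate: (n_subjects, n_timepoints) array of time-aligned heart-rate signals.
    Returns the mean pairwise Pearson correlation across subjects."""
    pairs = [np.corrcoef(heart_rate[i], heart_rate[j])[0, 1]
             for i, j in combinations(range(heart_rate.shape[0]), 2)]
    return float(np.mean(pairs))

# Hypothetical example: 60 subjects, 55 minutes of heart rate sampled at 1 Hz,
# built from a shared (narrative-driven) component plus subject-specific noise.
rng = np.random.default_rng(1)
shared = rng.standard_normal(55 * 60)
data = shared + rng.standard_normal((60, 55 * 60))
print(intersubject_correlation(data))
```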