Multisensory Embodiment Modulates Body-Centered Spatial Attention Beyond Visual Cues
Poster Presentation 43.339: Monday, May 18, 2026, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Face and Body Perception: Bodies
Hiroaki Shigemasu1, Yuichi Takao1, Harin Hapuarachchi1; 1Kochi University of Technology
Visual attention is captured by body-part representations (Reed et al., 2006). However, it remains unclear whether this attentional bias reflects purely visual factors or also depends critically on multisensory body ownership. We investigated how embodiment, the multisensory integration of visual, proprioceptive, and motor signals, modulates spatial attention using virtual extended body parts. Twenty-four right-handed participants performed a visual detection task in VR under three conditions: (1) self-body with embodied extended bodies (virtual hands controlled by foot trackers), (2) self-body with non-embodied extended bodies (visually identical hands without motor control), and (3) self-body only. Following an embodiment induction task (tracking moving targets for 60 s), participants detected target stimuli appearing near self-body or extended-body locations while maintaining central fixation, and reaction times (RTs) were measured. Because the hands and feet were occupied for avatar control, responses were recorded by orienting the head-mounted display toward the target location. Contrary to our hypothesis that the self-body would capture attention and facilitate detection, RTs were significantly faster for locations away from the self-body than for self-body locations across all conditions (p = .046). No significant RT differences emerged among non-self-body locations across the three conditions, suggesting that attention was not preferentially allocated on the basis of extended-body presence. Critically, comparing the self-body (strong embodiment with robust proprioceptive and motor feedback) to the extended bodies (weak or no embodiment, with visual input only) revealed significant RT differences despite identical visual features, indicating that embodiment strength, driven by non-visual multisensory signals, contributes to attentional prioritization beyond visual body representations alone.
The faster RTs away from the self-body may reflect inhibition of return from initially attended self-body positions. These findings demonstrate that while visual body representations influence spatial attention, multisensory embodiment through proprioception and motor feedback provides additional attentional effects that cannot be explained by vision alone, revealing the multisensory foundations of body-centered attention.
Acknowledgements: MEXT/JSPS KAKENHI Grant Number 23K03004