Neural processing of scene-relative object movement during self-movement

Poster Presentation 26.303: Saturday, May 18, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Motion: Optic flow

Xuechun Shen1,2, ZhouKuiDong Shan3,2, Simon Rushton4, ShuGuang Kuai1,2, Li Li3,2; 1East China Normal University, Shanghai, China, 2NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China, 3New York University Shanghai, Shanghai, China, 4Cardiff University, Cardiff, United Kingdom

Much research has examined how the visual system identifies scene-relative object movement during self-movement. Here we examined the underlying neural processing by identifying the brain regions involved in this task. In a Siemens Magnetom Prisma Fit 3T MRI scanner, participants viewed through prism glasses a stereo display (9.5° H × 19° V) that simulated lateral self-movement (speed: 0.032 m/s) through a 3D volume composed of 63 randomly positioned red wireframe objects (depth: 0.55-1.05 m), with counter-rotation of gaze. In the non-moving target condition, a yellow target object was positioned at 1/4 (near) or 3/4 (far) of the scene's depth range. In the moving target condition, the target at the near position was assigned the retinal speed it would have at the far position, and vice versa, causing the target to appear to move within the scene. The target movement was thus not defined by higher or lower speed than the rest of the scene objects, and the moving and non-moving target conditions were equated for all retinal information. A control condition without simulated self-movement, in which the scene remained static on the screen, was also tested. During scanning, on each 2-s trial, participants reported a luminance-contrast change in the scene objects, an attention-control task irrelevant to identifying object movement. We identified known visual and optic flow areas as regions of interest (ROIs) using standard localizers and performed multivoxel pattern analysis (MVPA) on the 300 most active voxels in each ROI. Across 20 participants, decoding accuracy for scene-relative object movement versus no object movement was significantly above chance in the higher-level dorsal visual areas V7 and MT+. Furthermore, these areas successfully differentiated scene-relative object movement with versus without simulated self-movement. Using well-designed visual stimuli, the current study reveals that areas V7 and MT+ play a crucial role in processing scene-relative object movement during self-movement.

Acknowledgements: Supported by research grants from the National Natural Science Foundation of China (32071041, 32161133009, 32022031), the China Ministry of Education (ECNU 111 Project, Base B1601), the major grant seed fund and the boost fund from NYU Shanghai, and the UK Economic and Social Research Council (ES/S015272/1).