Motion-corrected eye-tracking (MoCET) improves gaze accuracy during visual fMRI experiments

Poster Presentation 63.341: Wednesday, May 22, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Eye Movements: Accuracy, pursuit and eccentricity

Jiwoong Park1,2,3, JaeYoung Jeon1,3, Royoung Kim1,3, Kendrick Kay4, Won Mok Shim1,2,3; 1Center for Neuroscience Imaging Research, Institute of Basic Science (IBS), Republic of Korea, 2Department of Biomedical Engineering, Sungkyunkwan University (SKKU), Republic of Korea, 3Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University (SKKU), Republic of Korea, 4Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota

Human eye movements are deeply connected to perception, attention, and memory (Hayhoe and Ballard, 2005), and are worthy of study from both neural and behavioral perspectives. In visual fMRI experiments, acquiring eye-tracking data can enable ecologically valid designs in which eye movements are allowed. However, traditional camera-based eye-tracking is notoriously difficult to perform with high accuracy inside the scanner, so most visual fMRI experiments are conducted under central fixation. In this study, we aim to improve eye-tracking methodology: specifically, we tackle the significant drop in gaze accuracy that occurs when the participant's head deviates from the initial calibration position (Morimoto and Mimica, 2005). First, we performed simulations using a computational geometry-based eyeball model to confirm that head shifts on the order of what is typically observed in fMRI can lead to substantial inaccuracies in eye-tracking results (a 0.5-mm head shift can lead to 2–3° of gaze error). Next, we quantified the effects of subtle head motion on gaze accuracy during actual fMRI scans, and propose a novel method that leverages head motion parameters derived from standard neuroimaging preprocessing to compensate for head shifts. This approach, termed Motion-Corrected Eye-Tracking (MoCET), requires no additional hardware and can even be applied retrospectively to existing data. Our results, based on 3T and 7T fMRI datasets encompassing a diverse range of structured and naturalistic tasks (e.g., interactive 3D video gameplay and retinotopic mapping experiments), show that MoCET effectively compensates for head motion-driven drift, yielding a significant improvement in gaze accuracy: MoCET reduces the error to 1.29° of visual angle, compared with 3.24° for traditional polynomial detrending and 4.4° for uncorrected data.
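The scale of the simulated effect can be sanity-checked with back-of-the-envelope pupil-tracking geometry (the ~12-mm eyeball radius and the pupil-position-only tracking assumption here are illustrative simplifications, not the authors' full geometric model): a small lateral head shift moves the pupil's image in the camera roughly the same way an eye rotation does, so the tracker misreads the shift as a rotation.

```python
import math

def apparent_gaze_error_deg(head_shift_mm, eyeball_radius_mm=12.0):
    """Approximate gaze error from a lateral head shift.

    A lateral head shift d is indistinguishable (to a pupil-position-based
    tracker) from an eye rotation theta satisfying d = r * sin(theta),
    where r is the distance from eyeball center to pupil (~12 mm here;
    an illustrative assumption, not the authors' full simulation model).
    """
    return math.degrees(math.asin(head_shift_mm / eyeball_radius_mm))

# A 0.5-mm head shift yields roughly 2.4 degrees of apparent gaze error,
# consistent in magnitude with the 2-3 degrees reported above.
print(apparent_gaze_error_deg(0.5))
```

This simplification ignores corneal-reflection compensation and camera perspective, so it should be read only as an order-of-magnitude check.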
Our findings provide a feasible and efficient approach to a major challenge in integrating eye-tracking with fMRI, and contribute substantially to cognitive neuroscience research.
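The core of the proposed correction, regressing head-motion-driven drift out of the raw gaze traces using the six rigid-body motion parameters produced by standard fMRI preprocessing, can be sketched as follows. The function name, the simple per-axis linear-regression form, and the assumption that the motion parameters have been resampled to the eye-tracker's timebase are ours, not necessarily the authors' exact implementation:

```python
import numpy as np

def mocet_correct(gaze, motion_params):
    """Sketch of motion-corrected eye-tracking (MoCET).

    gaze: (T, 2) raw gaze positions (x, y), e.g. in degrees of visual angle.
    motion_params: (T, 6) rigid-body head-motion parameters (3 translations,
        3 rotations) from fMRI preprocessing, resampled to the eye-tracker's
        timebase.

    Returns gaze with the motion-predicted drift regressed out.
    NOTE: a minimal linear-regression sketch, not the authors' exact model.
    """
    n_samples = motion_params.shape[0]
    # Design matrix: the six motion parameters plus an intercept column.
    design = np.column_stack([motion_params, np.ones(n_samples)])
    corrected = np.empty_like(gaze, dtype=float)
    for axis in range(gaze.shape[1]):
        # Least-squares fit of this gaze axis onto the motion parameters.
        beta, *_ = np.linalg.lstsq(design, gaze[:, axis], rcond=None)
        # Subtract only the motion-driven component, keeping the intercept.
        drift = design[:, :-1] @ beta[:-1]
        corrected[:, axis] = gaze[:, axis] - drift
    return corrected
```

Because the correction needs only the motion parameters that preprocessing pipelines already estimate, it can be applied retrospectively to any dataset where both raw gaze traces and motion estimates were saved.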

Acknowledgements: This work was supported by IBS-R015-D1, NRF-2019M3E5D2A01060299, NRF-2019R1A2C1085566, and the Fourth Stage of the Brain Korea 21 Project in the Department of Intelligent Precision Healthcare, Sungkyunkwan University (SKKU).