Exploring eye-hand coordination with central field loss in virtual reality

Poster Presentation 43.419: Monday, May 18, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Eye Movements: Clinical

Jade Guénot1, Preeti Verghese1; 1Smith-Kettlewell Eye Research Institute

Eye-hand coordination relies on both accurate depth perception and gaze control, which are impaired in central field loss (CFL). To better understand how CFL affects eye and hand control during dynamic 3D tasks, we developed a realistic VR experiment using the HTC Vive Pro Eye headset and the PTVR toolbox. Participants (4 with CFL, 9 controls) tracked a butterfly moving unpredictably at an average speed of 0.5 m/s, with random changes in speed and direction to prevent anticipatory strategies. Three conditions were tested: eye-tracking only (following the butterfly with gaze), net-tracking (keeping the butterfly centered in a virtual net rendered at the hand controller), and catching (capturing it with the net as quickly as possible). Gaze (eye + head) and hand movements were recorded continuously. Participants completed these conditions monocularly and binocularly, and controls additionally performed them with simulated scotomas. Controls caught the butterfly significantly faster with binocular than with monocular viewing, and simulated scotomas produced additional slowing. Patients were slower overall, with disproportionately longer catching times under monocular viewing. When misses occurred, they often resulted from underestimating the butterfly’s distance. Gaze precision analyses revealed distinct eye-hand coordination patterns. Controls consistently showed slightly better angular precision in the eye-tracking condition than in the net-tracking condition (a 1-2° difference). In contrast, patients showed improved gaze precision during binocular net-tracking, reducing angular error by 2° on average. Pursuit latency showed the clearest difference between the two groups: patients exhibited significantly shorter latencies during net-tracking than during eye-tracking (85 ms vs. 121 ms), whereas controls showed comparable latencies (78 ms vs. 70 ms). These preliminary findings suggest that in CFL, hand movements may help guide the eyes, enhancing gaze precision and reducing pursuit delays. VR offers a powerful framework for quantifying these compensatory eye-hand strategies in realistic environments, with potential implications for rehabilitation approaches.
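
Supplementary note: the sketch below illustrates one way the per-sample angular error between gaze and the moving butterfly could be computed from the recorded positions; the array names, data layout, and summary statistics are assumptions for illustration only and are not taken from the abstract or from the PTVR toolbox.

# Minimal sketch (assumed data layout, not the authors' analysis code):
# head_pos, target_pos: (N, 3) world-space positions per sample
# gaze_dir: (N, 3) combined eye + head gaze direction per sample
import numpy as np

def angular_error_deg(head_pos, gaze_dir, target_pos):
    """Angle in degrees between the gaze ray and the head-to-target direction."""
    to_target = target_pos - head_pos
    to_target = to_target / np.linalg.norm(to_target, axis=1, keepdims=True)
    gaze = gaze_dir / np.linalg.norm(gaze_dir, axis=1, keepdims=True)
    cos_ang = np.clip(np.sum(gaze * to_target, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_ang))

# Example use for one trial: mean error as accuracy, SD as angular precision.
# err = angular_error_deg(head_pos, gaze_dir, butterfly_pos)
# accuracy, precision = err.mean(), err.std()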

Acknowledgements: NIH grant R01 EY27390