Introducing EYE-LEAN (Locomotion, Exploration, Action and Navigation with Eye Tracking): a Behavioral Research Toolkit for Data-Rich Virtual Reality Experiments
Poster Presentation 23.470: Saturday, May 16, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Action: Navigation, locomotion
Jakub Suchojad1, Kavindya Dalawella1, Serena DeStefani2, Karin Stromswold1, Jacob Feldman1; 1Rutgers University, 2Ohio State University
The advent of affordable Virtual Reality (VR) Head-Mounted Displays (HMDs) has opened new avenues for designing ecologically valid behavioral studies and collecting rich datasets from freely locomoting human subjects. Here, we present a comprehensive toolkit for scientists looking to use VR in their experiments, which includes both ready-to-use assets and a list of resources and best practices for designing VR experiments. We address the entire pipeline of such a project, from building the experimental environment in the Unity game engine, through obtaining avatar models, to animating them with off-the-shelf motion capture data or recordings of your own. As part of our setup, we provide a robust, C#-based eye tracking data collection and visualization system. Our approach uses the open-source OpenXR standard and can be deployed on HMDs equipped with an eye tracker that supports the standard. The eye tracking system allows for dynamic, vergence-based estimation and visualization of the current gaze point (based on Duchowski et al.), enabling proper calibration of the eye tracker. The OpenXR API allows for collection of detailed raw gaze data, which in turn permits robust secondary analyses, including pupillometry. We provide code to replay the recorded gaze data inside Unity, allowing for informative visualization of gaze patterns. To illustrate the toolkit, we present selected results from several recent studies, from our own lab and others, that demonstrate practical applications of our system and show how one base paradigm can be adapted to support behavioral studies across disciplines. These examples, which span navigation, decision making, and linguistics, highlight the advantages of our paradigm over existing solutions by allowing simultaneous locomotion and display of complex stimuli. We see our toolkit as useful both for young researchers beginning behavioral work and for seasoned groups looking for novel research solutions.
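As a concrete sketch of the vergence-based estimation step, the snippet below shows one way to compute a 3D gaze point as the midpoint of the shortest segment between the two per-eye gaze rays, in the spirit of the Duchowski et al. approach cited above. It uses Unity's UnityEngine.XR eyes-data API; the class name VergenceGazeEstimator, the near-parallel fallback distance, and the drawing in tracking space (a real setup would transform into world space via the XR origin) are illustrative assumptions, not the toolkit's actual code, and per-eye gaze data is only available where the OpenXR runtime exposes it.

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR;

    // Hypothetical sketch: vergence-based gaze point estimation from
    // per-eye gaze rays. Names are illustrative, not the toolkit's API.
    public class VergenceGazeEstimator : MonoBehaviour
    {
        InputDevice eyeDevice;

        void Start()
        {
            // Find a device that reports eye tracking data.
            var devices = new List<InputDevice>();
            InputDevices.GetDevicesWithCharacteristics(
                InputDeviceCharacteristics.EyeTracking, devices);
            if (devices.Count > 0) eyeDevice = devices[0];
        }

        void Update()
        {
            if (!eyeDevice.isValid) return;
            if (!eyeDevice.TryGetFeatureValue(CommonUsages.eyesData, out Eyes eyes)) return;

            // Per-eye gaze rays in tracking space.
            if (eyes.TryGetLeftEyePosition(out Vector3 pL) &&
                eyes.TryGetLeftEyeRotation(out Quaternion qL) &&
                eyes.TryGetRightEyePosition(out Vector3 pR) &&
                eyes.TryGetRightEyeRotation(out Quaternion qR))
            {
                Vector3 dL = qL * Vector3.forward;
                Vector3 dR = qR * Vector3.forward;
                Vector3 gaze = ClosestPointBetweenRays(pL, dL, pR, dR);

                // Visualize the two rays converging on the estimated point.
                Debug.DrawLine(pL, gaze, Color.green);
                Debug.DrawLine(pR, gaze, Color.green);
            }
        }

        // Midpoint of the shortest segment between two (generally skew)
        // gaze rays p1 + t1*d1 and p2 + t2*d2.
        static Vector3 ClosestPointBetweenRays(
            Vector3 p1, Vector3 d1, Vector3 p2, Vector3 d2)
        {
            Vector3 r = p1 - p2;
            float a = Vector3.Dot(d1, d1), b = Vector3.Dot(d1, d2),
                  c = Vector3.Dot(d2, d2), d = Vector3.Dot(d1, r),
                  e = Vector3.Dot(d2, r);
            float denom = a * c - b * b;

            // Near-parallel rays (gaze at infinity): fall back to a point
            // a fixed distance along the cyclopean viewing direction.
            if (Mathf.Abs(denom) < 1e-6f)
                return (p1 + p2) * 0.5f + d1.normalized * 10f;

            float t1 = (b * e - c * d) / denom;
            float t2 = (a * e - b * d) / denom;
            return (p1 + t1 * d1 + p2 + t2 * d2) * 0.5f;
        }
    }

Attached to any GameObject in the scene, this component draws both gaze rays toward the estimated vergence point each frame, which is the kind of live visualization that makes eye tracker calibration errors immediately visible.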
Acknowledgements: NSF training grant NRT-FW-HTF 202162811; NSF BCS Grant 2324598; NSF BCS Grant 2122119; Rutgers Research Council Grant