SPHEER: a rich dataset of time-resolved gaze and head movements in virtual reality

Poster Presentation 53.407: Tuesday, May 21, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Eye Movements: Natural world and VR

Erwan David1, Melissa L.-H. Võ1; 1Goethe University Frankfurt, Scene Grammar Lab, Germany

The opportunities offered by extended reality (XR) devices with embedded eye tracking have opened the door to many new studies and applications, whether scientific, gaming-related, or for training. We now share a rich dataset spanning 11 experiments in which eye tracking in virtual reality devices allowed us to record both eye and head rotations as well as positions: the Scene Perception, Head and Eye in Extended Reality (SPHEER) dataset. The experiments cover several types of stimuli (360° images, 360° videos, 3D environments) and tasks (free viewing, object search). The dataset totals more than 380 participants and accumulates over 6 days of continuous trial time, sampled at 120 Hz and 250 Hz. Along with the recordings we share metadata linking every trial to its experimental conditions and stimulus (e.g., bounding boxes of objects in indoor scenes) in order to make the dataset as rich and useful as possible. In addition, we share a new gaze dataset created specifically for the identification of gaze events in 3D. We implemented a testing protocol in which participants produced fixations, saccades, smooth pursuits, and vestibulo-ocular responses, and additionally varied their vergence distance during some of these events. Such data will be very helpful for creating new methods of identifying gaze events in 3D, which are now beginning to attract substantial interest thanks to XR devices but have not yet received as much dedicated methodological work, since eye tracking experiments have historically been conducted on desktop computers. We share this rich dataset in the hope that communities interested in modeling gaze (e.g., saliency or scanpath prediction models) can build new specific and generalised models from a broad corpus of real-world eye and head tracking data.
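As a minimal sketch of how trial-level recordings and their metadata might be consumed, the Python snippet below loads one hypothetical trial, computes angular gaze velocity, and applies a simple velocity threshold. The file names, column names, and the I-VT-style thresholding are illustrative assumptions, not the actual SPHEER layout or the method used to build the dataset.

```python
import json
import numpy as np
import pandas as pd

# Hypothetical file and column names for a single trial; the actual SPHEER
# layout may differ (this is only an illustrative sketch).
samples = pd.read_csv("trial_0001_samples.csv")
with open("trial_0001_meta.json") as f:
    meta = json.load(f)  # e.g. task, stimulus type, object bounding boxes

# Assumed columns: a timestamp in seconds and a 3D gaze direction vector
# expressed in world coordinates (head rotation already applied).
t = samples["timestamp"].to_numpy()
gaze = samples[["gaze_x", "gaze_y", "gaze_z"]].to_numpy()
gaze /= np.linalg.norm(gaze, axis=1, keepdims=True)

# Angular velocity (deg/s) between consecutive samples: a common first step
# for simple velocity-threshold (I-VT-style) event classification. This is a
# generic baseline, not the authors' classification method.
cos_step = np.clip(np.einsum("ij,ij->i", gaze[:-1], gaze[1:]), -1.0, 1.0)
ang_vel = np.degrees(np.arccos(cos_step)) / np.diff(t)

is_saccade = ang_vel > 100.0  # illustrative threshold in deg/s
print(meta.get("task"), meta.get("stimulus"))
print(f"{is_saccade.mean():.1%} of samples above the saccade velocity threshold")
```

A fixed velocity threshold is only a baseline; classifying smooth pursuits and vestibulo-ocular responses in 3D, as targeted by the protocol described above, generally requires combining head motion and vergence information with the gaze signal.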

Acknowledgements: This work was supported by SFB/TRR 26 135 project C7 to Melissa L.-H. Võ and the Hessisches Ministerium für Wissenschaft und Kunst (HMWK; project ‘The Adaptive Mind’).