Head and eye dynamics across different navigational goals

Poster Presentation 26.306: Saturday, May 18, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Motion: Optic flow

Andrés H. Méndez1, Cristina de la Malla1, Joan López-Moliner1; 1Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain

The pattern of visual motion that we experience as we move (i.e., the optic flow) has been proposed as the substrate from which visual self-motion is estimated. Until recently, this was studied in controlled lab settings that assessed participants' ability to detect the focus of expansion or heading direction from 2D optic flow patterns. Subsequent work has concentrated on the statistics and dynamics of the visual input in natural settings (e.g., Durant & Zanker, 2020; Matthis et al., 2022; Müller et al., 2023). Results suggest that flow patterns seldom resemble the expansive, symmetric structure used in psychophysical studies. The characteristics of the visual input, however, are not independent of behavior: organisms control the generation of visual information, which varies with the position of the eyes in the head and of the head in the world. Here we study the head and eye dynamics of participants wearing head-mounted eye trackers and inertial measurement units across three navigational tasks: free locomotion, recreating a previously walked path, and following another person (n = 4 per condition). Results show that head stabilization does not vary significantly across conditions. Fixations toward the ground were less frequent than reported in previous studies of locomotion across different terrains. Across all conditions, fixations were close to the center of the image, but horizontal variability was larger during free locomotion. These findings suggest that self-motion varies across navigational goals in real-world scenarios, leading to distinct retinal inputs in ways that are relevant to the ongoing task. Knowledge of these dynamics can contribute to advancing computational models of visual processing and navigation.
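To illustrate the kind of summary statistics the abstract refers to (fixation centrality and horizontal vs. vertical variability of gaze in the scene-camera image), the short Python sketch below computes the mean offset of fixations from the image center and their horizontal and vertical spread. This is a minimal sketch under assumed inputs, not the authors' analysis pipeline: the image size, data format, and function name are placeholders, and real eye-tracker exports will differ.

import numpy as np

def fixation_dispersion(gaze_xy, image_size=(1088, 1080)):
    # gaze_xy: (N, 2) array of fixation locations in image pixels (assumed format).
    # image_size: (width, height) of the scene-camera image in pixels (assumed).
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    center = np.array(image_size, dtype=float) / 2.0
    offsets = gaze_xy - center                  # signed offset from image center
    mean_offset = offsets.mean(axis=0)          # average fixation bias (x, y)
    sd_horizontal = offsets[:, 0].std(ddof=1)   # horizontal variability
    sd_vertical = offsets[:, 1].std(ddof=1)     # vertical variability
    return mean_offset, sd_horizontal, sd_vertical

# Usage with simulated fixations clustered near the image center
rng = np.random.default_rng(0)
fake_fixations = rng.normal(loc=[544, 540], scale=[60, 30], size=(200, 2))
print(fixation_dispersion(fake_fixations))

Comparing sd_horizontal across conditions is one way to express the reported finding that horizontal variability was larger during free locomotion.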

Acknowledgements: This work was supported by grants PID2020-116400GA-I00 and PID2020-114713GB-I00, funded by MCIN/AEI/10.13039/501100011033, to CM and JLM, respectively. AM is supported by the María de Maeztu grant PRE2021-097688, awarded to the Institute of Neurosciences of the University of Barcelona (MDM-2017-0729-21-1).