Can people learn their unique retinal motion statistics?

Poster Presentation 26.301: Saturday, May 18, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Motion: Optic flow

Jiaming Xu1, Karl Muller1, Kate Bonnen2, Robbe Goris1, Mary Hayhoe1; 1UT Austin, 2Indiana University

Sensory representations are adapted to the statistical regularities of the environment. In the case of retinal motion generated by self-motion, Matthis et al. (2022) demonstrated that retinal flow results from the way the body moves during the gait cycle: gaze is held stable in the environment during fixations, and the eyes counter-rotate as they are transported by the body, which moves forward and sways with each step. This body motion is determined by the body's passive dynamics and therefore differs between individuals. Do retinal motion statistics also differ between individuals? We examined the data of Muller et al. (2023), who tracked eye and body movements during natural locomotion. This allowed calculation of the retina-centered motion patterns across a 90° region of the visual field for 7 subjects. Motion speed varies systematically across the visual field. Speeds were highest in the lower visual field, where the mean of the speed distributions was 28.4°/s, with an SD between subjects of 9.8°/s. Similarly, in the upper visual field, the mean speed was 13.1°/s, with an SD across subjects of 8.5°/s. Comparable between-subject variability was found in the left and right visual fields. The average retinal motion directions, measured as a function of polar angle in the visual field, displayed a bimodal distribution with an over-representation of upward and downward motion: the first mode had a mean of 86.9° with an SD of 43.0°, and the second mode had a mean of 267.5° with an SD of 39.0°. Because this variability is substantial, and reflects the motion statistics each individual is exposed to throughout their experience, it seems likely that subjects learn their own motion statistics. Thus, individuals may have unique internal models of their own motion statistics, allowing prediction of their time-varying motion patterns through the gait cycle.

Acknowledgements: Supported by NIH grants EY05729 and EY032999