Representation of event boundaries in first-person navigation
Poster Presentation 53.350: Tuesday, May 23, 2023, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Scene Perception: Neural mechanisms
Byunghoon Choi¹, Donald Shi Pui Li², Soojin Park¹; ¹Yonsei University, ²Johns Hopkins University
As we navigate in our daily lives, we experience the visual environment as a continuous percept, seamlessly binding fragments of seconds into a single stream of visual perception. Seminal work on memory has used films with plots and narratives to show how a continuous stream of perceptual input is chunked into separate events. However, it remains unclear how the visual system organizes the seemingly continuous percept of navigation into blocks of places, spaces, and navigational turning points. In this study, we used naturalistic first-person-perspective navigation videos to explore how scene-selective regions represent continuous navigational experience. During fMRI scanning, participants viewed eight six-minute walking-tour videos (4 indoor, 4 outdoor) with no drastic viewpoint transitions and no scripted narrative. We asked whether neural boundaries in scene-selective regions follow navigation-specific boundaries, such as changes in first-person location, navigational direction and turns, high-level semantic place category, or low-level image statistics between frames. Using a data-driven Hidden Markov Model (HMM) approach, we extracted neural boundaries from each region of interest. Preliminary results (N=4) suggest that boundaries in scene-selective ROIs differ from boundaries derived from low-level image statistics of the frames. Interestingly, mapping the neural boundaries of scene-selective regions back to the corresponding video time points revealed navigationally relevant events, such as arriving at or leaving a place (e.g., arriving at the next floor). Segmenting a spatio-temporally continuous visual experience into events may facilitate visually guided navigation.
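
A minimal sketch of the boundary analysis described above, assuming Python with BrainIAK's HMM event-segmentation model (the abstract does not name a toolbox); all function names, array names, and shapes are hypothetical illustrations, and frame-to-frame pixel change stands in for "low-level image statistics":

import numpy as np
from brainiak.eventseg.event import EventSegment  # data-driven HMM event segmentation

def neural_boundaries(roi_bold, n_events):
    """roi_bold: (n_timepoints, n_voxels) BOLD time series for one ROI.
    Returns the TR indices where the most probable HMM event changes."""
    hmm = EventSegment(n_events=n_events)
    hmm.fit(roi_bold)
    # hmm.segments_[0] is (n_timepoints, n_events): P(event | timepoint)
    events = np.argmax(hmm.segments_[0], axis=1)
    return np.where(np.diff(events) != 0)[0] + 1

def pixel_change_boundaries(frames, n_events):
    """frames: (n_frames, h, w) grayscale frames resampled to the TR grid.
    Takes the (n_events - 1) largest frame-to-frame pixel changes as
    candidate low-level boundaries."""
    diffs = np.abs(np.diff(frames.reshape(len(frames), -1), axis=0)).mean(axis=1)
    return np.sort(np.argsort(diffs)[-(n_events - 1):] + 1)

def boundary_match(b1, b2, tol=3):
    """Fraction of boundaries in b1 landing within tol TRs of any boundary
    in b2; a score like this is one way to test whether neural and
    low-level boundary sets differ."""
    return np.mean([np.min(np.abs(np.asarray(b2) - b)) <= tol for b in b1])

In practice, n_events would have to be chosen (e.g., by model comparison across candidate counts), and the match score would be evaluated against a null distribution of shuffled boundary placements; neither choice is specified in the abstract.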
Acknowledgements: This research was supported by an NEI grant (R01EY026042)