Navigation in virtual reality: the role of language and language background
Poster Presentation 16.310: Friday, May 15, 2026, 3:45 – 6:00 pm, Banyan Breezeway
Session: Multisensory Processing: Motor
Kavindya Dalawella1, Jakub Suchojad1, Jacob Feldman1, Karin Stromswold1; 1Rutgers University
We investigated how people integrate visual, social, and linguistic information in dynamic environments to perform complex tasks. The environment was a virtual reality (VR) train station populated by moving and static VR “people” and objects. The task was to physically navigate to a location based on a spoken announcement. Participants were assigned a destination city at the start of the experiment. In half of the trials (go trials), the announced city matched participants’ assigned city and they proceeded to the announced track; in no-go trials, the city didn’t match, and participants returned to a “waiting room”. We used two linguistic frames to vary how early a trial could be identified as go/no-go: “City First” (“The next train to CityX is now boarding on Track#”) or “Track First” (“Now boarding on Track# is the next train to CityX”). “City First” trials could be identified as go trials earlier, while “Track First” trials required participants to remember track numbers and wait. Participants were university students who said English was their best language. Half were native English (NE) speakers with NE-speaking parents, and half learned English in early childhood but didn’t have NE-speaking parents. Position, velocity, head-direction, eye-gaze, and pupillometry data were collected. Results indicate that task, information order (linguistic frame), and language background all affected participants’ behavior. Both groups responded faster in no-go trials (task effect) and completed trials faster in “City First” trials, where they didn’t have to wait to begin planning (information-order effect). Notably, language background also influenced performance: participants with NE-speaking parents took less time overall and identified trials as go/no-go faster.
These findings indicate that adults efficiently integrate visual, social, and linguistic information to plan and execute a complex navigational task, but that early childhood language experience modulates how quickly they use linguistic information.
Acknowledgements: Supported by the NSF-NRT grant Socially Cognizant Robotics for a Technology Enhanced Society (SOCRATES), No. 2021628. Additional support was provided by NSF BCS Grants 2324598 and 2122119, and a Rutgers Research Council Grant.