Scene semantic and gaze effects on allocentric coding in naturalistic (virtual) environments

Poster Presentation 36.327: Sunday, May 19, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Scene Perception: Virtual environments, intuitive physics

Bianca Baltaretu1, Immo Schuetz1, Melissa Vo2, Katja Fiehler1; 1Justus Liebig University, Giessen, Germany, 2Goethe University Frankfurt, Frankfurt am Main, Germany

Interacting with objects in our surroundings involves object perception and object location coding, the latter of which can be accomplished egocentrically (i.e., relative to the self) and/or allocentrically (i.e., relative to other objects). Allocentric coding for actions under more naturalistic scenarios can be influenced by multiple factors (e.g., task relevance and prior knowledge). Within the hierarchy of scene grammar, the semantic relationship of local objects (small/moveable) can strengthen allocentric coding (i.e., stronger effects for local objects of the same vs. different object categories). One would assume that the next level of the scene grammar hierarchy, i.e., anchor objects (large/stationary), also modulates this process, since anchors tend to predict the identity and location of surrounding local objects that we interact with. Here, we investigated the effect of semantically congruent versus incongruent anchors on allocentric coding of local objects within two scene types (kitchen, bathroom). In a virtual environment, three local objects were presented on a shelf connecting two anchors (semantically congruent or incongruent with the local objects). After a brief mask and delay, the scene was presented again without the local objects, with one of the anchor objects either shifted (leftward or rightward) or unshifted. Then, one of the local objects appeared in front of the participant, who had to grab it with the controller and place it in its remembered location on the empty shelf. Our findings show systematic placement errors in the direction of the anchor shift, with no clear influence of semantic congruency. Eye movement data confirm these findings, with gaze behaviour predominantly directed toward local objects over anchors (and no effect of semantics when gaze landed on these objects). The present results suggest that, even when task-irrelevant, anchors play an important role in allocentric coding of local objects in naturalistic, virtual environments for action.

Acknowledgements: JUSTUS Plus II program, Justus Liebig University, Giessen, Germany