Virtual Reality, real information for action and perception? A VR study.

Poster Presentation 36.474: Sunday, May 19, 2024, 2:45 – 6:45 pm, Pavilion
Session: Action: Reach, grasp, track

Caterina Foglino1, Tamara Watson3, Niklas Stein4, Patrizia Fattori1,2, Annalisa Bosco1,2; 1Department of Biomedical and Neuromotor Sciences, University of Bologna, Italy, 2Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Italy, 3School of Psychology, MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia, 4Institute for Psychology, University of Münster, Germany

Distances are often misperceived in virtual environments. A brief period of interaction through physical walking can improve this, but other body-based interactions, such as reaching, do not. Research in the real world suggests that action planning affects perceptual processing by biasing the cognitive system toward response-related dimensions, facilitating their perception; however, the role of this action-perception interplay in virtual space is far from fully understood. To contribute to this area of research, this study investigates the perception of an object in a virtual environment by testing the performance of 15 healthy participants (3 males; mean age = 28.4 ± 3.62 years; all right-handed) on a size-judgment task. Participants interacted with the virtual object under one of three conditions (grasping, reaching, or no hand movement) using a virtual copy of their real hand. Before and after the interaction phase, they reported their estimate of the object's size by adjusting the dimensions of a comparison object. The interaction phase was preceded by a walking simulation used to approach the target, which was positioned far from the participant. The results show that, overall, size-estimation errors decreased after the interaction phase (β = -0.33812 mm, SE = 0.14486, t = -2.334, p < 0.05). However, the no-hand-movement condition led to significantly smaller errors than the grasping condition (β = -0.78053 mm, SE = 0.25181, t = -3.100, p < 0.01). These findings suggest that interacting with the object through hand movements, specifically grasping, which is known to facilitate the detection of size-related features in the real world, does not improve the perception of the object's size in this virtual-reality context.
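Note for replication: the reported coefficients (β, SE, t, p) are consistent with a linear mixed-effects regression of size-estimation error on phase (pre vs. post interaction) and interaction condition, with participant as a grouping factor. The sketch below, in Python with statsmodels, is a minimal illustration of that kind of analysis; the data file and column names (participant, phase, condition, error_mm) are hypothetical assumptions, not the authors' actual pipeline.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per trial with the signed
    # size-estimation error in millimetres.
    # Expected columns: participant, phase, condition, error_mm
    df = pd.read_csv("size_judgments.csv")

    # Fixed effects: phase (pre/post) crossed with condition (grasping,
    # reaching, no hand movement); random intercept per participant.
    model = smf.mixedlm("error_mm ~ phase * condition",
                        data=df, groups=df["participant"])
    result = model.fit()
    print(result.summary())  # per-term beta, SE, z, and p values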

Acknowledgements: The MAIA project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 951910. Work also supported by the Ministry of University and Research (MUR), PRIN2022-2022BK2NPS, and by Horizon Europe MSCA No. 101086206-PLACES.