Virtual reality assessment of functional vision using dynamic, task-oriented scenarios
Poster Presentation 53.338: Tuesday, May 19, 2026, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Scene Perception: Virtual reality
Brian Bartley1, Rayyan W Saidi2, Michelle Harter1, Dana Aravich1, Galen Holland1, Rakié Cham1, Alessandro Fascetti1; 1University of Pittsburgh, 2Fox Chapel Area High School, Pittsburgh, PA
Advances in vision restoration technologies such as visual prostheses, gene therapies, and whole-eye transplants have outpaced our ability to rigorously measure their impact on functional vision, specifically how people perform real-world tasks such as mobility, instrumental activities of daily living (IADLs), and social interaction. Existing tools (patient-reported outcomes, observational ratings, and performance-based tests) each address parts of the problem but suffer important limitations, including lack of ecological validity and standardization, as well as limited suitability for ultra-low-vision populations. We propose VISTA, a virtual reality, task-oriented assessment that directly addresses these gaps by providing objective, repeatable, and modular performance measures embedded in high-fidelity, parametrically controlled environments that can be deployed across sites. The initial implementation focused on constructing and evaluating a kitchen scene, the first of five planned interactive environments. Task flows were derived from validated low-vision assessments and the HOVER consensus framework, incorporating functional subtasks such as simulated meal preparation, detecting room lighting, visual scanning, and navigating within a furnished space. Scene layouts were parameterized to allow controlled variations in object placement across trials, enabling assessment of visual performance rather than memorization. The platform extracts outcome measures including task completion, absolute and relative speed, accuracy (e.g., misidentification errors), and biomechanical metrics such as head movement, step count, and gaze patterns. Preliminary results demonstrate that VISTA can deliver realistic environmental conditions while maintaining experimental control, producing outcome measures with a precision generally unachievable in physical tests.
Early feasibility testing confirmed successful execution of all task flows, with expected gradations in task completion, speed, and efficiency for visually demanding subtasks such as structured searches, obstacle avoidance, and detection of lights or motion. These findings support VISTA as a promising tool for standardized, objective, and ecologically valid assessment of functional vision and motivate continued development for future clinical studies.
Acknowledgements: This research was, in part, funded by the Advanced Research Projects Agency for Health (ARPA-H). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Government.