S6 – Virtual Reality and Vision Science

Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Pavilion
Organizer(s): Bas Rokers, University of Wisconsin – Madison & Karen B. Schloss, University of Wisconsin – Madison
Presenters: Jacqueline Fulvio, Robin Held, Emily Cooper, Stefano Baldassi, David Luebke

Virtual reality (VR) and augmented reality (AR) provide exciting new opportunities for vision research. In VR, sensory cues are presented to simulate an observer's presence in a virtual environment. In AR, sensory cues are presented that embed virtual stimuli in the real world. This symposium will bring together speakers from academia and industry to present new scientific discoveries enabled by VR/AR technology, discuss recent and forthcoming advances in the technology, and identify exciting new avenues of inquiry.

From a basic research perspective, VR and AR allow us to answer fundamental scientific questions that have been difficult or impossible to address in the past. VR/AR headsets provide a number of potential benefits over traditional psychophysical methods, such as a large field of view, high frame rate, low persistence, and low-latency head tracking. These technological innovations facilitate experimental research in highly controlled, yet naturalistic, three-dimensional environments. However, VR/AR also introduces its own set of challenges of which researchers should be aware.

Speakers from academia will discuss ways they have used VR/AR as a tool to advance knowledge about 3D perception, multisensory integration, and navigation in naturalistic three-dimensional environments. Speakers will also present research on perceptual learning and neural plasticity, which may benefit from training in cue-rich environments that simulate real-world conditions. These talks will shed light on how VR/AR may ultimately be used to mitigate visual deficits and contribute to the treatment of visual disorders.

Speakers from industry will highlight recent technological advances that make VR such a powerful tool for research. Industry has made significant strides in solving engineering problems involving latency, field of view, and presence. However, challenges remain, such as resolving cue conflicts and eliminating motion sickness. Although some of these issues may be solved through engineering, others are due to limitations of the visual system and require solutions informed by basic research within the vision science community.

This symposium aims to provide a platform that deepens the dialog between academia and industry. VR holds unprecedented potential for building assistive technologies that will aid people with sensory and cognitive disabilities. Hearing from speakers in industry will give vision scientists an overview of anticipated technological developments, which will help them evaluate how they may incorporate VR/AR into their future research. In turn, vision researchers may help identify science-based solutions to current engineering challenges. In sum, this symposium will bring together two communities for the mutually beneficial advancement of VR-based research.

Who may want to attend: This symposium will be of interest to researchers who wish to consider incorporating AR/VR into their research, get an overview of existing challenges, and get a sense of future directions of mutual interest to industry and academia. The talks will be valuable to researchers at all stages of their careers. Hearing from representatives of both industry and academia may be useful for early-stage researchers seeking opportunities beyond the highly competitive academic marketplace, and may help researchers at all stages identify funding sources in an equally competitive granting landscape.

Extra-retinal cues improve accuracy of 3D motion perception in virtual reality environments

Speaker: Jacqueline Fulvio, University of Wisconsin – Madison
Additional Authors: Jacqueline M. Fulvio & Bas Rokers, Department of Psychology, UW-Madison

Our senses provide imperfect information about the world that surrounds us, but we can improve the accuracy of our perception by combining sensory information from multiple sources. Unfortunately, much of the research in visual perception has utilized methods of stimulus presentation that eliminate potential sources of information. It is often the case, for example, that observers are asked to maintain a fixed head position while viewing stimuli generated on flat 2D displays. We will present recent work on the perception of 3D motion using the Oculus Rift, a virtual reality (VR) head-mounted display with head-tracking functionality. We describe the impact of uncertainty in visual cues presented in isolation, which has surprising consequences for the accuracy of 3D motion perception. We will then describe how extra-retinal cues, such as head motion, improve visual accuracy. We will conclude with a discussion of the potential and limitations of VR technology for understanding visual perception.
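The abstract does not spell out the model, but a standard way to formalize how combining cues improves accuracy, and a common baseline in this literature, is maximum-likelihood cue integration. In the sketch below, the subscripts r and e (our notation, not the authors') denote retinal and extra-retinal estimates with variances sigma_r^2 and sigma_e^2:

\[
\hat{s} = w_r\,\hat{s}_r + w_e\,\hat{s}_e,
\qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_r^2 + 1/\sigma_e^2},
\qquad
\sigma_{re}^2 = \frac{\sigma_r^2\,\sigma_e^2}{\sigma_r^2 + \sigma_e^2}
\]

Because \(\sigma_{re}^2 \le \min(\sigma_r^2, \sigma_e^2)\), the combined estimate is never less reliable than the better single cue, which is one way extra-retinal signals such as head motion can sharpen 3D motion percepts.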

Perceptual considerations for the design of mixed-reality content

Speaker: Robin Held, Microsoft
Additional Authors: Robin Held, Microsoft

Virtual-reality head-mounted displays (VR HMDs) block out the real world while engulfing the user in a purely digital setting. Meanwhile, mixed-reality (MR) HMDs embed digital content within the real world while maintaining the user's perception of her or his surroundings. This ability to simultaneously perceive both rendered content and real objects presents unique challenges for the design of MR content. I will briefly review the technologies underlying current MR headsets, including display hardware, tracking systems, and spatial audio. I will also discuss how the existing implementations of those technologies impact the user's perception of the content. Finally, I will show how to apply that knowledge to optimize MR content for comfort and aesthetics.

Designing and assessing near-eye displays to increase user inclusivity

Speaker: Emily Cooper, Dartmouth College
Additional Authors: Nitish Padmanaban, Robert Konrad, and Gordon Wetzstein, Department of Electrical Engineering, Stanford University

From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. But in each case, one thing stays the same: the primary interface between the computer and the user is a visual display. Recent years have seen impressive growth in near-eye display systems, which are the basis of most virtual and augmented reality experiences. There is, however, a unique set of challenges to designing a display that is literally strapped to the user's face. With an estimated half of all adults in the United States requiring some level of visual correction, maximizing inclusivity for near-eye displays is essential. I will describe work that combines principles from optics, optometry, and visual perception to identify and address major limitations of near-eye displays, both for users with normal vision and for those who require common corrective lenses. I will also describe ongoing work assessing the potential for near-eye displays to assist people with less common visual impairments in day-to-day tasks.
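As a minimal, hypothetical sketch of the optics at issue (the talk's actual methods are not described here): a near-eye display places its virtual image at some fixed optical distance, and whether an uncorrected user can focus it reduces to simple dioptric arithmetic. All function names and the simplified sign convention below are ours:

```python
def vergence_diopters(distance_m: float) -> float:
    """Dioptric distance of a target: diopters are 1 / distance in meters."""
    return 1.0 / distance_m

def accommodation_demand(target_vergence_d: float,
                         refractive_error_d: float) -> float:
    """Ocular accommodation (diopters) an uncorrected eye must exert to
    focus the target. Convention: myopia negative, hyperopia positive.
    A -2 D myope's relaxed eye focuses at 0.5 m (2 D), so a target at 2 m
    (0.5 D) demands 0.5 + (-2) = -1.5 D: impossible, the image stays blurred.
    Simplified: ignores astigmatism and the HMD's own optics."""
    return target_vergence_d + refractive_error_d

def can_focus(target_vergence_d: float, refractive_error_d: float,
              accommodation_range_d: float = 8.0) -> bool:
    """True if the demand falls within what the eye can supply (0 to range)."""
    demand = accommodation_demand(target_vergence_d, refractive_error_d)
    return 0.0 <= demand <= accommodation_range_d

# An HMD whose virtual image sits at a fixed 2 m is comfortable for
# emmetropes but out of reach for an uncorrected -2 D myope:
image_v = vergence_diopters(2.0)   # 0.5 D
print(can_focus(image_v, 0.0))     # True  (emmetrope)
print(can_focus(image_v, -2.0))    # False (uncorrected myope)
```

This is why a display at a single fixed focal distance cannot serve all uncorrected users, motivating the adaptive and corrective designs the talk addresses.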

See-through Wearable Augmented Reality: challenges and opportunities for vision science

Speaker: Stefano Baldassi, Meta Company
Additional Authors: Stefano Baldassi & Moqian Tian, Analytics & Neuroscience Department, Meta Company

We will present Meta’s augmented reality technology and the challenges faced in product development that may generate strong mutual connections between vision science and technology, as well as new areas of research for vision science and research methods using AR. The first line of challenges comes from the overlap between virtual content and the real world, due to the non-opacity of the rendered pixels and the see-through optics. What are the optimal luminance, contrast, and color profiles to minimize interference? Will the solutions be qualitatively different under photopic and scotopic conditions? With SLAM (simultaneous localization and mapping), virtual objects can be locked onto the real scene. Does the real world provide the same environmental context to a virtual object as it does to a real object? Lastly, what are the implications of digital content in the periphery, given Meta’s industry-leading 90° field of view? The second line of challenges is in the domain of perception and action, and of multisensory integration. Meta’s system supports manipulation of virtual objects with the hands. In the absence of haptic stimulation, when the hands interact with a virtual object, users currently rely on visual and proprioceptive cues to guide touch. How is visuo-motor control of the hands affected by manipulation without haptics? In order for people to interact with virtual objects realistically and effectively, are cues like occlusion and haptic feedback necessary? Will time-locked sound introduce valuable cues?
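One concrete consequence of see-through, additive optics (our illustration, not Meta's method) is that rendered pixels can only add light to the scene, never subtract it, so the visibility of virtual content depends on its contrast against the real background. A minimal sketch with hypothetical luminance values:

```python
def weber_contrast_additive(virtual_luminance: float,
                            background_luminance: float) -> float:
    """Weber contrast of additively blended AR content: the display adds
    light on top of the background, so the overlay's perceived luminance is
    background + virtual, and its contrast is virtual / background."""
    return virtual_luminance / background_luminance

# Hypothetical numbers: a 200 cd/m^2 overlay is high-contrast indoors
# (background ~50 cd/m^2) but nearly washed out in daylight (~5000 cd/m^2).
for bg in (50.0, 5000.0):
    print(f"background {bg:>6.0f} cd/m^2 -> contrast "
          f"{weber_contrast_additive(200.0, bg):.2f}")
```

The same fixed display luminance thus spans a huge range of effective contrasts across viewing environments, which is why the optimal luminance and color profiles asked about above are unlikely to be a single setting.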

Computational Display for Virtual and Augmented Reality

Speaker: David Luebke, NVIDIA
Additional Authors: David Luebke, VP Graphics Research, NVIDIA

Wearable displays for virtual and augmented reality face tremendous challenges, including:

- Near-eye display: how do we place a display as close to the eye as a pair of eyeglasses, where the eye cannot bring it into focus?
- Field of view: how do we fill the user’s entire vision with displayed content?
- Resolution: how do we fill that wide field of view with enough pixels, and how do we render all of those pixels? A “brute force” display would require 10,000×8,000 pixels per eye!
- Bulk: displays should be as unobtrusive as sunglasses, but optics dictate that most VR displays today are bigger than ski goggles.
- Focus cues: today’s VR displays provide binocular display but only a fixed optical depth, thus missing the monocular depth cues from defocus blur and introducing vergence-accommodation conflict.

Overcoming these challenges requires understanding and innovation in vision science, optics, display technology, and computer graphics. I will describe several “computational display” VR/AR prototypes in which we co-design the optics, display, and rendering algorithm with the human visual system to achieve new tradeoffs. These include light field displays, which sacrifice spatial resolution to provide thin near-eye displays and focus cues; pinlight displays, which use a novel and very simple optical stack to produce wide field-of-view see-through display; and a new approach to foveated rendering, which uses eye tracking and renders the peripheral image with less detail than the foveal region. I will also talk about our current efforts to “operationalize” vision science research, focusing on peripheral vision, crowding, and saccadic suppression artifacts.
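To unpack the “brute force” figure and the logic behind foveated rendering, here is a back-of-envelope sketch under assumptions stated in the code (ours, not NVIDIA's derivation): matching roughly 60 pixels per degree of foveal acuity across a ~167°×133° field gives on the order of 10,000×8,000 pixels per eye, yet the resolution the periphery actually needs falls off steeply with eccentricity:

```python
PIXELS_PER_DEGREE_FOVEAL = 60.0   # ~20/20 acuity: one pixel per arcminute

def brute_force_pixels(h_fov_deg: float, v_fov_deg: float) -> tuple[int, int]:
    """Pixels per eye needed to match foveal acuity everywhere, with no
    foveation. Flat approximation; ignores wide-FOV projection distortion."""
    return (round(h_fov_deg * PIXELS_PER_DEGREE_FOVEAL),
            round(v_fov_deg * PIXELS_PER_DEGREE_FOVEAL))

def required_ppd(eccentricity_deg: float, e2_deg: float = 2.3) -> float:
    """Resolution needed at a given retinal eccentricity, using a textbook
    acuity-falloff model: acuity scales as e2 / (e2 + eccentricity).
    The e2 constant is illustrative; published estimates vary."""
    return PIXELS_PER_DEGREE_FOVEAL * e2_deg / (e2_deg + eccentricity_deg)

print(brute_force_pixels(167.0, 133.0))   # ~(10020, 7980): the 10K x 8K figure
print(round(required_ppd(30.0), 1))       # ~4.3 ppd at 30 deg eccentricity
```

Under such a falloff, a renderer that tracks gaze can spend most of its pixels on a small foveal region, which is the premise of the foveated rendering work described above.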
