Bringing color into focus: accommodative state varies systematically with the spectral content of light

Poster Presentation 53.415: Tuesday, May 21, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Eye Movements: Natural world and VR

Benjamin M Chin1, Martin S Banks1, Derek Nankivil2, Austin Roorda1, Emily A Cooper1; 1University of California, Berkeley, 2Johnson & Johnson Vision Care

Humans bring the visual world into focus by changing the power of the lens in their eye until the retinal image is sharp. Light in the natural environment, however, can almost never be focused perfectly because it contains multiple wavelengths that refract differently through the lens. How does the visual system determine which wavelength to put in best focus? We compared possible strategies used to focus light containing different proportions of long and short wavelengths. Under a ‘switching’ strategy, an observer would accommodate (focus their lens) to whichever wavelength has the highest luminance. In contrast, under a ‘weighting’ strategy, the accommodative response would be a weighted sum of the luminances across visible wavelengths. We measured the dynamic accommodative responses of eight participants with an autorefractor recording at 30 Hz. On each trial, an observer viewed a three-letter word (24 arcmin per letter) against a black background on an OLED display for six seconds. Halfway through the trial, a focus-adjustable lens generated a step change in the optical distance of the stimulus, synchronized to a change in stimulus color (the proportion of long- and short-wavelength subpixels). We then fit participants’ accommodative changes with the ‘switching’ and ‘weighting’ models separately. The Akaike Information Criterion showed that, for all but one subject, the likelihood of the data was greater under the ‘weighting’ model. Increasing the luminance of long wavelengths caused the eye to accommodate nearer, while increasing the luminance of short wavelengths caused it to accommodate farther. This is remarkable because it implies that people may bring into best focus wavelengths that are weak or even absent from the visual stimulus. Using these data, we aim to develop an image-computable model that can predict how the eye accommodates to the complex spectral and spatial patterns encountered during natural vision.
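The two candidate strategies and the model comparison can be sketched as follows. This is a minimal illustration, not the study's actual fitting code: the dioptric demands of the two primaries, the weight parameter, the noise level, and the Gaussian-error AIC formula (n·ln(RSS/n) + 2k) are all assumptions chosen for the toy example.

```python
import numpy as np

# Hypothetical dioptric demands of the long- and short-wavelength
# primaries (illustrative values, not from the study).
D_LONG, D_SHORT = 2.2, 1.6  # diopters

def switching_model(lum_long, lum_short):
    """Accommodate to whichever primary has the higher luminance."""
    return np.where(lum_long >= lum_short, D_LONG, D_SHORT)

def weighting_model(lum_long, lum_short, w_long=0.5):
    """Accommodate to a luminance-weighted average of the two primaries."""
    w_short = 1.0 - w_long
    num = w_long * lum_long * D_LONG + w_short * lum_short * D_SHORT
    den = w_long * lum_long + w_short * lum_short
    return num / den

def aic_gaussian(residuals, k):
    """AIC for a least-squares fit with Gaussian errors: n*ln(RSS/n) + 2k."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * k

# Toy comparison: simulate an observer who behaves like the weighting model.
rng = np.random.default_rng(0)
lum_long = rng.uniform(0.1, 1.0, 200)
lum_short = 1.0 - lum_long
observed = weighting_model(lum_long, lum_short, w_long=0.7) \
    + rng.normal(0.0, 0.05, 200)

# Switching has no free parameters here (k=0); weighting has one (k=1).
# For simplicity the weight is fixed at its generating value rather than fit.
aic_switch = aic_gaussian(observed - switching_model(lum_long, lum_short), k=0)
aic_weight = aic_gaussian(
    observed - weighting_model(lum_long, lum_short, w_long=0.7), k=1)
# Lower AIC indicates the better model for these simulated data.
```

For data generated by a weighting-style observer, the weighting model yields the lower AIC despite its extra parameter, mirroring the comparison reported in the abstract.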