Multitask Machine Learning of Contrast Sensitivity Functions

Poster Presentation: Tuesday, May 21, 2024, 2:45 – 6:45 pm, Pavilion
Session: Spatial Vision: Machine learning, neural networks

Dennis Barbour1, Zhiting Zhou1, Dom Marticorena1, Quinn Wai Wong1, Jake Browning1, Ken Wilbur1, Pinakin Davey2, Aaron Seitz3, Jacob Gardner4; 1Washington University in St. Louis, 2Western University of Health Sciences, 3Northeastern University, 4University of Pennsylvania

Contrast Sensitivity Functions (CSFs) are useful diagnostic adjuncts for assessing both retinal and central visual function. Gaussian Process (GP) classifiers have been shown to estimate individual CSF models efficiently by leveraging active machine learning for optimal stimulus selection, with model convergence achievable with between 10 and 50 actively selected stimuli. Because it assumes model independence, this disjoint process requires sequential estimation to obtain CSF models for multiple eyes or stimulus conditions (e.g., luminance, eccentricity). Conjoint estimators, on the other hand, have now been developed to estimate multiple CSFs simultaneously using an active multitask implementation. In the current study, conjoint CSF estimator performance was compared to disjoint performance on simulated eyes using generative models created from human data. The high degree of expected similarity between CSFs originating from different eyes or conditions allows conjoint learning across the related models, a procedure designed to enable faster convergence than sequential disjoint model learning. Indeed, conjoint CSF estimation does speed model convergence over disjoint estimation under commonly encountered scenarios. These findings confirm that incorporating information beyond immediate behavioral responses into new machine learning models of visual function may improve visual system assessment.
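The disjoint estimation procedure described above can be illustrated with a minimal sketch: a GP classifier is fit to binary detection responses over a (spatial frequency, contrast) grid, and each new stimulus is chosen where the predicted detection probability is closest to 0.5 (maximum uncertainty). This is not the study's implementation; the simulated observer, the log-parabola sensitivity shape, the grid ranges, and all parameter values are illustrative assumptions, and the conjoint multitask extension is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical ground-truth CSF: log10 sensitivity as a parabola in log10
# spatial frequency, peaking near 2 cyc/deg. Illustrative only.
def true_sensitivity(log_sf):
    return 2.0 - 1.5 * (log_sf - np.log10(2.0)) ** 2

def observer_response(log_sf, log_contrast):
    # Simulated observer: "seen" (1) when contrast exceeds threshold
    # (threshold = 1/sensitivity, so log10 threshold = -log10 sensitivity).
    return int(log_contrast > -true_sensitivity(log_sf))

# Candidate stimulus grid: 0.25-32 cyc/deg, 0.1%-100% contrast (log10 units).
sf_grid = np.linspace(np.log10(0.25), np.log10(32), 20)
c_grid = np.linspace(-3.0, 0.0, 15)
grid = np.array([(sf, c) for sf in sf_grid for c in c_grid])

# Seed with two anchor stimuli (one clearly seen, one clearly unseen, so both
# response classes are present) plus three random grid points.
X = np.array([[np.log10(2.0), 0.0],     # high-contrast peak stimulus (seen)
              [np.log10(2.0), -3.0]])   # near-zero contrast (unseen)
X = np.vstack([X, grid[rng.choice(len(grid), size=3, replace=False)]])
y = np.array([observer_response(sf, c) for sf, c in X])

# Active loop: refit the GP classifier, then query the most uncertain stimulus.
for _ in range(20):
    gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)
    p_seen = gpc.predict_proba(grid)[:, 1]
    next_i = int(np.argmin(np.abs(p_seen - 0.5)))
    X = np.vstack([X, grid[next_i]])
    y = np.append(y, observer_response(*grid[next_i]))

gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)
```

In this sketch the estimated CSF is the p = 0.5 contour of the final classifier. A conjoint estimator would replace the single-task GP with a multitask model whose kernel couples related eyes or conditions, so that responses collected for one task also sharpen the posteriors of the others.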

Acknowledgements: R21-EY033553, R01-EY019693