How to estimate noise ceilings for computational models of visual cortex

Poster Presentation 63.401: Wednesday, May 22, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Object Recognition: Models

Zirui Chen1, Michael Bonner1; 1Johns Hopkins University

A pivotal goal in neuroscience is to develop computational models that can account for the explainable variance in cortical responses to sensory stimuli. It is widely recognized that, when evaluating the similarity between brain and model representations, it is necessary to estimate the noise ceiling in measurements of cortical activity. Traditional approaches have focused on factors such as reliability across trials or subjects, with the goal of establishing a benchmark for the maximum predictive accuracy that any model could theoretically achieve. However, one important source of noise that has been largely overlooked in the literature is the reliability of the computational models themselves. In the case of deep learning models, a natural measure of reliability is the consistency of learned representations across different random initializations. Using such a metric of model reliability, we demonstrate how an aggregate noise ceiling can be estimated that accounts for the reliability of trials, subjects, and computational models. Our approach provides a more comprehensive assessment of the limits on representational models of sensory systems. Our results reveal a striking impact of model reliability as a key constraint on the variance that can be explained in cortical representations. More broadly, our findings highlight the importance of identifying and mitigating model variability, and they open new avenues for refining computational models of cortical sensory representations.
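
As an illustration only (this is not part of the abstract and not necessarily the authors' exact procedure), the sketch below shows one plausible way to combine measurement reliability and model reliability into an aggregate noise ceiling. It assumes a simple attenuation-style bound, ceiling = sqrt(r_brain * r_model), where r_brain is the Spearman-Brown-corrected split-half reliability of the brain responses and r_model is the mean pairwise consistency of representational dissimilarity matrices (RDMs) computed from the same architecture trained with different random seeds. All function names and the RDM-based consistency measure are illustrative assumptions.

```python
# Minimal sketch of an aggregate noise ceiling combining brain measurement
# reliability with cross-seed model reliability. Assumption: the ceiling on
# brain-model correlation follows the attenuation bound sqrt(r_brain * r_model).
# This is illustrative and not the published procedure from the poster.

import numpy as np


def spearman_brown(r_half: float) -> float:
    """Correct a split-half correlation to full-data reliability."""
    return 2.0 * r_half / (1.0 + r_half)


def brain_reliability(responses: np.ndarray, seed: int = 0) -> float:
    """Split-half reliability of stimulus-evoked responses.

    responses: array of shape (n_trials, n_stimuli), e.g. one voxel or a
    flattened (n_trials, n_stimuli * n_voxels) response matrix.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(responses.shape[0])
    half1 = responses[order[: len(order) // 2]].mean(axis=0)
    half2 = responses[order[len(order) // 2:]].mean(axis=0)
    r_half = np.corrcoef(half1, half2)[0, 1]
    return spearman_brown(r_half)


def model_reliability(features_per_seed: list) -> float:
    """Mean pairwise consistency of representational geometry across seeds.

    features_per_seed: list of (n_stimuli, n_features) arrays, one per
    random initialization of the same architecture.
    """
    rdm_vectors = []
    for feats in features_per_seed:
        rdm = 1.0 - np.corrcoef(feats)               # correlation-distance RDM
        rdm_vectors.append(rdm[np.triu_indices_from(rdm, k=1)])
    pairwise = [
        np.corrcoef(rdm_vectors[i], rdm_vectors[j])[0, 1]
        for i in range(len(rdm_vectors))
        for j in range(i + 1, len(rdm_vectors))
    ]
    return float(np.mean(pairwise))


def aggregate_noise_ceiling(r_brain: float, r_model: float) -> float:
    """Upper bound on brain-model similarity under the attenuation assumption."""
    return float(np.sqrt(r_brain * r_model))
```

Under this framing, a model's observed brain-prediction score would be compared against aggregate_noise_ceiling(r_brain, r_model) rather than against a ceiling derived from trial or subject reliability alone, so that variability across model initializations also counts toward the attainable maximum.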