Statistical inference on representational geometries

Poster Presentation 43.309: Monday, May 22, 2023, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Object Recognition: Models


Heiko Schütt1,2, Alexander D. Kipnis3, Jörn Diedrichsen4, Nikolaus Kriegeskorte2; 1New York University, 2Zuckerman Institute, Columbia University, 3Max Planck Institute for Biological Cybernetics, 4Western University

Models and brain measurements of visual processing have grown substantially in complexity in recent years. Summarizing and comparing their high-dimensional representations requires specialized statistical methods. Here, we introduce new inference methods to evaluate models based on their predicted representational geometries, i.e., how well they predict the distances or dissimilarities among stimulus representations. Our inference methods are based on cross-validation and bootstrapping. We introduce a novel 2-factor bootstrap technique wrapped around a cross-validation procedure, with analytically derived adjustments for the biases induced by 2-factor inflation of measurement noise and by the choice of cross-validation folds. We validate our new inference methods using extensive simulations. We first simulate fMRI-like data based on local averages of deep neural network activations for images sampled from ecoset. In these simulations, we have full access to the true data-generating process and can thus test a wide range of experiments. Additionally, we performed simulations based on subsampling data from large-scale calcium imaging and fMRI experiments. These simulations are less flexible, but we are more confident that the patterns and their variability are representative of true experimental data. In all simulations, our new methods yield good estimates of the variance of model evaluations and thus valid statistical tests. In contrast, uncorrected bootstrap methods substantially overestimate variance and thus yield overly conservative tests. Conversely, ignoring the desired generalization to new stimuli leads to underestimated variance and thus to overly liberal tests. Similar statistical problems arise whenever bootstrap methods aim to generalize to new stimuli and new subjects and/or are combined with cross-validation. Our new methods are available as part of the open-source rsatoolbox in Python.
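To illustrate the core idea of the 2-factor bootstrap described above, the sketch below resamples both subjects and conditions (stimuli) with replacement and re-evaluates a model's fit to the data RDMs on each resample. This is a minimal, self-contained illustration in plain NumPy, not the rsatoolbox implementation: the data are random, the evaluation metric (mean Pearson correlation over upper-triangular RDM entries) is one common choice among several, and the bias corrections the abstract introduces are deliberately omitted, so this naive version exhibits the variance overestimation the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: per-subject response patterns (subjects x conditions x features).
n_subjects, n_conditions, n_features = 10, 12, 50
patterns = rng.normal(size=(n_subjects, n_conditions, n_features))


def rdm(pat):
    """Squared-Euclidean representational dissimilarity matrix (conditions x conditions)."""
    diff = pat[:, None, :] - pat[None, :, :]
    return (diff ** 2).sum(axis=-1)


def model_eval(data_rdms, model_rdm):
    """Mean Pearson correlation between each subject's RDM and the model RDM,
    computed over the upper-triangular entries."""
    iu = np.triu_indices(model_rdm.shape[0], k=1)
    m = model_rdm[iu]
    scores = [np.corrcoef(r[iu], m)[0, 1] for r in data_rdms]
    return float(np.mean(scores))


def two_factor_bootstrap(data_rdms, model_rdm, n_boot=200):
    """Naive 2-factor bootstrap: resample subjects AND conditions with
    replacement, re-evaluating the model each time. Duplicated conditions
    introduce matching zero dissimilarities in data and model RDMs -- one
    source of the bias that the abstract's corrections address."""
    n_sub = data_rdms.shape[0]
    n_cond = model_rdm.shape[0]
    evals = np.empty(n_boot)
    for b in range(n_boot):
        sub_idx = rng.integers(n_sub, size=n_sub)
        cond_idx = rng.integers(n_cond, size=n_cond)
        d = data_rdms[sub_idx][:, cond_idx][:, :, cond_idx]
        m = model_rdm[cond_idx][:, cond_idx]
        evals[b] = model_eval(d, m)
    return evals


data_rdms = np.array([rdm(p) for p in patterns])
model_rdm = rdm(rng.normal(size=(n_conditions, n_features)))  # a hypothetical model

point_estimate = model_eval(data_rdms, model_rdm)
boot_evals = two_factor_bootstrap(data_rdms, model_rdm)
print(f"evaluation: {point_estimate:.3f}, bootstrap SD: {boot_evals.std():.3f}")
```

The bootstrap standard deviation here would serve as the variance estimate for a statistical test of the model; the abstract's contribution is precisely the analytical corrections (for 2-factor noise inflation and fold choice) that make such an estimate unbiased.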

Acknowledgements: This study was funded in part by the German Research Foundation (DFG) through grant SCHU 3351/1-1 to HHS.