Disentangling the unique contribution of human retinotopic regions using neural control

Poster Presentation 33.405: Sunday, May 19, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Object Recognition: Neural mechanisms


Alessandro T. Gifford1, Radoslaw M. Cichy1; 1Freie Universität Berlin

Early- and mid-level retinotopic regions of the human ventral visual stream (V1 to V4) implement key stages of visual information processing. However, which aspects of the visual input each region uniquely encodes remains incompletely understood. A major experimental roadblock in assessing each region's unique role is that their activation profiles are typically highly correlated, hiding their respective contributions to information processing. Here we used a novel analytical approach to disentangle the unique contribution of each retinotopic region. We started by leveraging NSD, a large-scale fMRI dataset, to build encoding models of all retinotopic regions. With these models we predicted neural responses for >100k naturalistic images (drawn from NSD/ImageNet). We then implemented two neural control algorithms to find images that maximally distinguished the predicted responses between all pairwise combinations of regions, thus revealing their idiosyncratic computations. The first neural control algorithm selected images that maximally activated the univariate response of one region while maximally deactivating the univariate responses of the other regions. The second neural control algorithm used genetic optimization to select an image set that decorrelated (r = 0) the multivariate responses between regions, as assessed through representational similarity analysis. We cross-validated both algorithms across NSD subjects, obtaining quantitatively disentangled responses, particularly for non-adjacent regions. The controlling images showed consistent qualitative patterns in, for example, texture frequency, color, and object presence. Finally, we collected EEG responses to the controlling images from the V1-V4 comparison. These images disentangled the univariate and multivariate EEG responses over time, showcasing the generalizability of the neural control solutions across neuroimaging modalities.
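The first (univariate) control algorithm can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: the predicted responses are stand-in random arrays rather than encoding-model outputs, and the selection rule (rank images by target-region response minus other-region response) is one simple way to realize "maximally activate one region while maximally deactivating another".

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for encoding-model predictions: one mean (univariate)
# predicted response per candidate image, for two regions (e.g. V1 and V4).
n_images = 1000
pred_v1 = rng.normal(size=n_images)  # predicted V1 response per image
pred_v4 = rng.normal(size=n_images)  # predicted V4 response per image

def controlling_images(target, others, n_select=20):
    """Rank images by target-region activation minus other-region activation,
    returning indices of the images that best separate the two regions."""
    score = target - others
    return np.argsort(score)[::-1][:n_select]

top_v1 = controlling_images(pred_v1, pred_v4)  # drive V1 while suppressing V4
top_v4 = controlling_images(pred_v4, pred_v1)  # drive V4 while suppressing V1
```

In a real application the predictions would come from encoding models fit on NSD, and the search would run over the >100k-image candidate pool for every pairwise region combination.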
In sum, our contributions are threefold: we provide new quantitative and qualitative findings on the unique computations of retinotopic regions; we propose novel neural control algorithms capable of disentangling univariate and multivariate representations within biological and artificial information processing systems; and we demonstrate how data-driven exploration promotes discovery in understudied regions of the brain.
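The second (multivariate) control algorithm, a genetic search for an image set whose between-region RDM correlation approaches zero, might be sketched as follows. This is purely illustrative: the response matrices are random stand-ins for encoding-model predictions, and the mutation/selection scheme is one generic genetic-optimization variant, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predicted multivariate responses (images x voxels) for two
# regions, constructed to be correlated, as real regions typically are.
n_images, n_vox = 200, 50
resp_a = rng.normal(size=(n_images, n_vox))
resp_b = 0.7 * resp_a + 0.3 * rng.normal(size=(n_images, n_vox))

def rdm(responses):
    """Representational dissimilarity matrix (upper triangle):
    1 - Pearson r between image-wise response patterns."""
    c = np.corrcoef(responses)
    iu = np.triu_indices_from(c, k=1)
    return 1.0 - c[iu]

def rdm_corr(idx):
    """Pearson correlation between the two regions' RDMs on an image subset."""
    return np.corrcoef(rdm(resp_a[idx]), rdm(resp_b[idx]))[0, 1]

def genetic_decorrelate(set_size=30, pop=40, gens=50, mut=3):
    """Evolve an image subset whose between-region RDM correlation nears 0."""
    population = [rng.choice(n_images, set_size, replace=False)
                  for _ in range(pop)]
    for _ in range(gens):
        # Fitness: RDM correlation closer to 0 is better.
        fitness = [abs(rdm_corr(ind)) for ind in population]
        order = np.argsort(fitness)
        survivors = [population[i] for i in order[:pop // 2]]
        children = []
        for parent in survivors:
            # Mutate: swap a few images for ones outside the current set.
            child = parent.copy()
            swap = rng.choice(set_size, mut, replace=False)
            child[swap] = rng.choice(np.setdiff1d(np.arange(n_images), child),
                                     mut, replace=False)
            children.append(child)
        population = survivors + children
    best = min(population, key=lambda ind: abs(rdm_corr(ind)))
    return best, rdm_corr(best)

subset, r = genetic_decorrelate()
```

Elitism (keeping the best half unchanged each generation) guarantees the objective never worsens; in the actual study the fitness would be computed on encoding-model predictions and cross-validated across NSD subjects.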

Acknowledgements: A.T.G. is supported by a PhD fellowship of the Einstein Center for Neurosciences. R.M.C. is supported by German Research Foundation (DFG) grants CI 241/1-1, CI 241/3-1, and CI 241/1-7, and by European Research Council (ERC) Starting Grant ERC-StG-2018-803370.