Encoding modelling for working memory research: Pattern similarity, representational geometry, and model comparison

Poster Presentation: Tuesday, May 21, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Memory: Working memory and behavior, models

Thomas B. Christophel1,2, Andreea-Maria Gui1,2, Carsten Allefeld3, José M. Baldaque1,4, Joana Pereira Seabra1,2; 1Department of Psychology, Humboldt Universität zu Berlin, Rudower Chaussee 18, Berlin, 12489, Germany, 2Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Charité Universitätsmedizin, corporate member of Freie Universität Berlin, Humboldt Universität zu Berlin, and Berlin Institute of Health, Berlin, Philippstraße 13, Haus 6, 10115, Germany, 3Department of Psychology, City, University of London, London EC1V 0HB, United Kingdom, 4School of Psychology & Neuroscience, University of Glasgow, Glasgow G12 8QQ, Scotland, United Kingdom

Research into the neural basis of working memory relies heavily on multivariate decoding techniques to ascertain the presence of neural representations of working memory content. Recent work attempts to understand the tuning properties underlying cortical representations using encoding modelling. Encoding modelling aims to explain the multivariate patterns underlying mnemonic function using stimulus-dependent regressors called basis functions. These basis functions can explicitly model the similarity relationships between different stimuli, analogous to tuning functions. Here, we evaluate the use of encoding modelling for the quantification of mnemonic representations using simulated and real data. We make use of a recently developed, flexible simulation toolbox (designSim) to simulate patterned neural activity generated by different underlying voxel tuning distributions across a large number of possible experimental designs. We quantify the information content of these neural signals as the variance explained by a given encoding model using cvCrossMANOVA. We demonstrate that realistically modelled neural representations can be identified more reliably using encoding models with realistic similarity assumptions than with simplistic, classifier-like models. We show that estimates of representational similarity between two conditions (e.g., cross-classification accuracy, correlation, and variance explained between conditions) are strongly biased by the signal-to-noise ratio of the individual representations, and we provide an SNR-independent measure of pattern similarity by comparing variance explained within and between conditions. Finally, we ask whether encoding modelling can ascertain the precise representational code used for memorization. We show that, consistent with prior work, stimulus-driven representations of memorized contents can be fitted and explained using a large variety of differently shaped encoding models.
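The core encoding-modelling step described above can be illustrated with a minimal sketch: simulated voxel patterns are generated from channel responses (von Mises basis functions, i.e., tuning-curve-like regressors for a circular feature), the channel-to-voxel weights are estimated by least squares, and information content is summarized as variance explained. All function names, parameter values, and the simulation details here are illustrative assumptions, not the authors' toolbox code.

```python
import numpy as np

# Hypothetical sketch of a basis-function encoding model for a circular
# stimulus feature (e.g., orientation or color on a 0-360 deg wheel).
# Names, channel count, and noise level are illustrative assumptions.

def basis_functions(stim_deg, n_channels=8, kappa=4.0):
    """Channel responses for stimuli in degrees, under von Mises tuning."""
    centers = np.arange(n_channels) * (360.0 / n_channels)
    diff = np.deg2rad(stim_deg[:, None] - centers[None, :])
    resp = np.exp(kappa * np.cos(diff))
    return resp / resp.max()  # normalize peak response to 1

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
stims = rng.uniform(0, 360, n_trials)

# Simulate voxel patterns as random mixtures of channel responses + noise.
C = basis_functions(stims)                         # trials x channels
W_true = rng.normal(size=(C.shape[1], n_voxels))   # channel-to-voxel weights
Y = C @ W_true + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Fit the encoding model: estimate the weights by ordinary least squares.
W_hat, *_ = np.linalg.lstsq(C, Y, rcond=None)

# Quantify information content as variance explained by the model.
resid = Y - C @ W_hat
r2 = 1 - resid.var() / Y.var()
print(round(r2, 2))
```

Because the basis functions encode graded similarity between nearby stimulus values, a model of this form can capture the similarity structure that a one-hot, classifier-like design matrix ignores.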
This means that a reliable fit alone gives little insight into the representational geometry (the ‘feature fallacy’). Instead, we demonstrate that comparing the explained variance of two or more competing models allows the true model to be identified reliably.
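The model-comparison logic can be sketched as follows: fit two competing encoding models, one with the (simulated) true broad tuning and one narrowly tuned, classifier-like competitor, and compare their cross-validated variance explained on held-out trials. The generating model should predict better. This is a simplified stand-in for the cvCrossMANOVA-based comparison; all names and parameters are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of encoding-model comparison via cross-validated
# variance explained; details are illustrative, not the authors' pipeline.

def vm_basis(stim_deg, n_channels=8, kappa=4.0):
    """Von Mises channel responses; kappa controls tuning width."""
    centers = np.arange(n_channels) * (360.0 / n_channels)
    r = np.exp(kappa * np.cos(np.deg2rad(stim_deg[:, None] - centers)))
    return r / r.max()

def cv_r2(C, Y, n_folds=5):
    """Cross-validated variance explained for design matrix C."""
    idx = np.arange(len(Y))
    ss_res = ss_tot = 0.0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        W, *_ = np.linalg.lstsq(C[train], Y[train], rcond=None)
        ss_res += ((Y[test] - C[test] @ W) ** 2).sum()
        ss_tot += ((Y[test] - Y[train].mean(0)) ** 2).sum()
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
stims = rng.uniform(0, 360, 200)
C_true = vm_basis(stims, kappa=4.0)    # broadly tuned (generating) model
C_alt = vm_basis(stims, kappa=40.0)    # narrow, classifier-like competitor
Y = C_true @ rng.normal(size=(8, 40)) + 0.5 * rng.normal(size=(200, 40))

# Compare held-out variance explained; the generating model should win.
print(cv_r2(C_true, Y) > cv_r2(C_alt, Y))
```

Cross-validation is essential here: in-sample fit alone would let flexible but wrong models appear adequate, which is exactly the feature fallacy the abstract warns against.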