What can be inferred about neural population codes from psychophysical and neuroimaging data?

Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Fabian Soto, Department of Psychology, Florida International University
Presenters: Justin L. Gardner, Rosie Cowell, Kara Emery, Jason Hays, Fabian A. Soto


Symposium Description

Vision scientists have long assumed that it is possible to make inferences about neural codes from indirect measures, such as those provided by psychophysics (e.g., thresholds, adaptation effects) and neuroimaging. While this approach has been very useful for understanding the nature of visual representation in a variety of areas, it is not always clear under what circumstances and assumptions such inferences are valid. Recent modeling work has shown that some patterns of results previously thought to be the hallmark of particular encoding strategies can be mimicked by other mechanisms. Examples abound: face adaptation effects once thought to be diagnostic of norm-based encoding are now known to be reproduced by other encoding schemes, properties of population tuning functions reconstructed from fMRI data can be explained by multiple neural encoding mechanisms, and tests of invariance applied to fMRI data may be unrelated to invariance at the level of neurons and neural populations. This highlights the importance of studying encoding models through simulation and mathematical theory, to gain a better understanding of exactly what can and cannot be inferred about neural encoding from psychophysics and neuroimaging, and of what assumptions and experimental designs are necessary to facilitate valid inferences.

This symposium has the goal of highlighting recent advances in this area, which pave the way for modelers to answer similar questions in the future, and for experimentalists to perform studies with a clearer understanding of what designs, assumptions, and analyses are optimal for answering their research questions. Following a brief introduction to the symposium’s theme and some background (~5 minutes), each of the five scheduled talks will be presented (20 minutes each), followed by a Q&A with the audience (15 minutes). The format of the Q&A will be as follows: questions from the audience will be directed to specific speakers, and after an answer other speakers will be invited to comment if they wish. Questions from one speaker to another will be allowed after all the audience questions have been addressed.

This symposium is targeted at a general audience of researchers interested in making inferences about neural population codes from psychophysical and neuroimaging data. This includes any researcher interested in how visual dimensions (e.g., orientation, color, face identity and expression) are encoded in visual cortex, and in how this code is modified by high-level cognitive processes (e.g., spatial and feature attention, working memory, categorization) and learning (e.g., perceptual learning, value learning). It also includes researchers with a general interest in modeling and measurement. The target audience is composed of researchers at all career stages (i.e., students, postdoctoral researchers, and faculty). Those attending this symposium will benefit from a clearer understanding of what inferences they can make about the encoding of visual information from psychophysics and neuroimaging, and of what assumptions are necessary to make such inferences. The audience will also learn about recently discovered pitfalls in this type of research and newly developed methods for dealing with them.

Presentations

Inverted encoding models reconstruct the model response, not the stimulus

Speaker: Justin L. Gardner, Department of Psychology, Stanford University
Additional Authors: Taosheng Liu, Michigan State University

Life used to be simpler for sensory neuroscientists. Some measure of neural activity, be it single-unit firing or an increase in BOLD response, was recorded against systematic variation of a stimulus, and the resulting tuning functions were presented and interpreted. But as the field discovered signal in the pattern of responses across voxels in a BOLD measurement, or dynamic structure hidden within the activity of a population of neurons, computational techniques to extract features not easily discernible from raw measurement increasingly began to intervene between measurement and the presentation and interpretation of data. I will discuss one particular technique, the inverted encoding model, and show how it extracts model responses rather than stimulus representations, and what challenges this poses for the interpretation of results.
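For concreteness, here is a minimal sketch of the two-stage inverted encoding model in the style of Brouwer and Heeger (2009). It is not the speaker's code: the channel basis, weights, and noise level are invented for illustration. Its point matches the talk's: the output lives in the space of the assumed channels, so a different basis yields a different "reconstruction" from the same voxel data.

    # Minimal inverted-encoding-model sketch (illustrative assumptions:
    # basis shape, weights, and noise are invented, not the speaker's code).
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox, n_chan, n_trials = 50, 8, 160
    oris = rng.uniform(0, 180, n_trials)                 # stimulus orientations (deg)
    centers = np.linspace(0, 180, n_chan, endpoint=False)

    def channel_responses(ori):
        """Assumed basis: half-wave-rectified sinusoids raised to a power."""
        d = np.deg2rad(ori[:, None] - centers[None, :])
        return np.maximum(np.cos(2 * d), 0) ** 6         # trials x channels

    C = channel_responses(oris)                          # hypothesized channel model
    W_true = rng.normal(size=(n_chan, n_vox))            # unknown in a real experiment
    B = C @ W_true + 0.5 * rng.normal(size=(n_trials, n_vox))  # simulated voxel data

    # Stage 1: estimate channel-to-voxel weights by least squares on training data.
    W_hat = np.linalg.lstsq(C, B, rcond=None)[0]
    # Stage 2: invert the weights to map a new voxel pattern back to channel space.
    B_test = channel_responses(np.array([45.0])) @ W_true
    C_hat = B_test @ np.linalg.pinv(W_hat)               # "reconstruction" = model response
    print(np.round(C_hat, 2))

Refitting the same simulated data under a different assumed basis yields a different C_hat, which is precisely the interpretive challenge the talk addresses.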

Bayesian modeling of fMRI data to infer modulation of neural tuning functions in visual cortex

Speaker: Rosie Cowell, University of Massachusetts Amherst
Additional Authors: Patrick S. Sadil, University of Massachusetts Amherst; David E. Huber, University of Massachusetts Amherst.

Many visual neurons exhibit tuning functions for stimulus features such as orientation. Methods for analyzing fMRI data reveal analogous feature-tuning in the BOLD signal (e.g., Inverted Encoding Models; Brouwer and Heeger, 2009). Because these voxel-level tuning functions (VTFs) are superficially analogous to the neural tuning functions (NTFs) observed with electrophysiology, it is tempting to interpret VTFs as mirroring the underlying NTFs. However, each voxel contains many subpopulations of neurons with different preferred orientations, and the distribution of neurons across the subpopulations is unknown. Because of this, there are multiple alternative accounts by which changes in the subpopulation-NTFs could produce a given change in the VTF. We developed a hierarchical Bayesian model to determine, for a given change in the VTF, which account of the change in underlying NTFs best explains the data. The model fits many voxels simultaneously, inferring both the shape of the NTF in different conditions and the distribution of neurons across subpopulations in each voxel. We tested this model in visual cortex by applying it to changes induced by increasing visual contrast — a manipulation known from electrophysiology to produce multiplicative gain in NTFs. Although increasing contrast caused an additive shift in the VTFs, the Bayesian model correctly identified multiplicative gain as the change in the underlying NTFs. This technique is potentially applicable to any fMRI study of modulations in cortical responses that are tuned to a well-established dimension of variation (e.g., orientation, speed of motion, isoluminant hue).
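A toy forward simulation (below) shows why this inverse problem is hard. This is not the authors' hierarchical Bayesian model, and the tuning shape, neuron counts, and bias toward 90 degrees are assumptions chosen for illustration. When a voxel's neurons are spread nearly uniformly across preferred orientations, the untuned pedestal dominates, and a multiplicative gain applied to every NTF yields a VTF change that looks approximately additive.

    # Toy forward simulation (not the authors' Bayesian model): tuning shape,
    # neuron counts, and the bias toward 90 deg are assumptions for illustration.
    import numpy as np

    thetas = np.linspace(0, 180, 181)                  # probe orientations (deg)
    prefs = np.linspace(0, 180, 36, endpoint=False)    # subpopulation preferences

    def ntf(theta, pref, gain=1.0, width=20.0):
        """Illustrative circular-Gaussian neural tuning function."""
        d = np.abs(theta - pref)
        d = np.minimum(d, 180 - d)                     # wrap-around distance
        return gain * np.exp(-0.5 * (d / width) ** 2)

    # Nearly uniform neuron counts, with a small bias toward 90 deg.
    counts = 1.0 + 0.15 * np.exp(-0.5 * ((prefs - 90) / 20) ** 2)

    def vtf(gain):
        """Voxel tuning function: count-weighted sum of subpopulation NTFs."""
        return sum(c * ntf(thetas, p, gain) for c, p in zip(counts, prefs))

    for name, v in [("gain=1.0", vtf(1.0)), ("gain=1.5", vtf(1.5))]:
        print(f"{name}: baseline={v.min():.2f}, tuned amplitude={v.max() - v.min():.2f}")
    # The untuned pedestal rises far more (in absolute terms) than the tuned
    # amplitude, so a multiplicative NTF gain looks nearly additive at the VTF.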

Inferring neural coding strategies from adaptation aftereffects

Speaker: Kara Emery, University of Nevada, Reno

Adaptation aftereffects have been widely used to infer mechanisms of visual coding. In the context of face processing, aftereffects have been interpreted in terms of two alternative models: 1) norm-based codes, in which the facial dimension is represented by the relative activity in a pair of broadly tuned mechanisms with opposing sensitivities; or 2) exemplar codes, in which the dimension is sampled by multiple channels narrowly tuned to different levels of the stimulus. Evidence for or against these alternatives has been based on the different patterns of aftereffects they predict (e.g., whether there is adaptation to the norm, and how adaptation increases with stimulus strength). However, these predictions are often based on implicit assumptions about both the encoding and decoding stages of the models. We evaluated these latent assumptions to better understand how the alternative models depend on factors such as the number, selectivity, and decoding strategy of the channels, and to clarify the consequential differences between these coding schemes and the adaptation effects that are most diagnostic for discriminating between them. We show that the distinction between norm and exemplar codes depends more on how the information is decoded than on how it is encoded, and that some aftereffect patterns commonly proposed to distinguish the models fail to do so in principle. We also compare how these models depend on assumptions about the stimulus (e.g., broadband vs. punctate) and on the impact of noise. These analyses point to the fundamental distinctions between different coding strategies and the patterns of visual aftereffects that are best for revealing them.
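To see how the decoding stage drives the predictions, the sketch below simulates repulsive aftereffects under both codes for a single face dimension (x = 0 taken as the norm). The channel shapes, the gain-reduction rule for adaptation, and the nearest-template decoder are all illustrative assumptions, not the speaker's exact models.

    # Illustrative simulation, not the speaker's exact models: channel shapes,
    # the adaptation rule, and the nearest-template decoder are all assumptions.
    import numpy as np

    x = np.linspace(-1, 1, 201)                     # face dimension; 0 = norm

    def norm_channels(s):
        """Norm code: two broadly tuned opponent channels (half-wave rectified)."""
        return np.stack([np.maximum(s, 0), np.maximum(-s, 0)])

    def exemplar_channels(s, centers=np.linspace(-1, 1, 9), w=0.25):
        """Exemplar code: many channels narrowly tuned to levels of the dimension."""
        return np.exp(-0.5 * ((s - centers[:, None]) / w) ** 2)

    def decode(resp, channels):
        """Read out the stimulus whose unadapted population response is nearest."""
        template = channels(x)                      # channels x stimuli
        return x[np.argmin(np.sum((template - resp[:, None]) ** 2, axis=0))]

    def aftereffect(channels, adaptor, test):
        gain = 1.0 / (1.0 + channels(np.array([adaptor]))[:, 0])  # adapt: reduce gain
        resp = gain * channels(np.array([test]))[:, 0]
        return decode(resp, channels) - test        # perceived shift

    for test in [0.2, 0.5, 0.8]:                    # adaptor fixed at 0.6
        print(test, round(aftereffect(norm_channels, 0.6, test), 3),
              round(aftereffect(exemplar_channels, 0.6, test), 3))

Swapping in a different decoder for the same channels changes the predicted aftereffect pattern, which is the sense in which the norm/exemplar distinction depends more on decoding than encoding.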

What can be inferred about changes in neural population codes from psychophysical threshold studies?

Speaker: Jason Hays, Florida International University
Additional Authors: Fabian A. Soto, Florida International University

The standard population encoding/decoding model is now routinely used to study visual representation through psychophysics and neuroimaging. Such studies are indispensable for understanding human visual neuroscience, where more invasive techniques are usually not available, but researchers should be careful not to interpret curves obtained from such indirect measures as directly comparable to analogous data from neurophysiology. Here we explore through simulation exactly what kind of inference can be made about changes in neural population codes from observed changes in psychophysical thresholds. We focus on the encoding of orientation by a dense array of narrow-band neural channels, and assume statistically optimal decoding. We explore several mechanisms of encoding change, which could be produced by factors such as attention and learning, and which have been highlighted in the previous literature: (non)specific gain, (non)specific bandwidth narrowing, inward/outward tuning shifts, and specific suppression with(out) nonspecific gain. We compared the patterns of psychophysical thresholds produced by the model with and without the influence of such mechanisms, across several experimental designs. Each type of model produced a distinctive behavioral pattern, but only if changes in encoding were strong enough and two or more experiments with different designs were performed (i.e., no single experiment can discriminate among all mechanisms). Our results suggest that identifying encoding changes from psychophysics is possible under the right conditions and assumptions, and that psychophysical threshold studies are a powerful alternative to neuroimaging in the study of visual neural representation in humans.
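The threshold predictions of such a model can be sketched under textbook assumptions (independent Poisson noise, statistically optimal decoding): Fisher information J(theta) = sum_i f_i'(theta)^2 / f_i(theta) bounds discriminability, and the predicted threshold scales as 1/sqrt(J(theta)). The parameters and gain profile below are illustrative, not the simulations reported in the talk.

    # Sketch under textbook assumptions (independent Poisson noise, optimal
    # decoding): threshold ~ 1/sqrt(J), with Fisher information
    # J(theta) = sum_i f_i'(theta)^2 / f_i(theta). Parameters are illustrative.
    import numpy as np

    prefs = np.linspace(0, 180, 60, endpoint=False)   # channel preferred orientations

    def rates(theta, gain):
        """Von-Mises-like orientation tuning with per-channel gain."""
        d = np.deg2rad(theta - prefs)
        return gain * (1.0 + 20.0 * np.exp(10.0 * (np.cos(2 * d) - 1)))

    def threshold(theta, gain, dt=0.5):
        deriv = (rates(theta + dt, gain) - rates(theta - dt, gain)) / (2 * dt)
        J = np.sum(deriv ** 2 / rates(theta, gain))   # Fisher information
        return 1.0 / np.sqrt(J)                       # optimal threshold (arbitrary units)

    uniform = np.ones_like(prefs)
    specific = 1.0 + np.exp(-0.5 * ((prefs - 90) / 15) ** 2)  # gain centered at 90 deg

    for theta in [45.0, 90.0]:
        print(theta, round(threshold(theta, uniform), 4), round(threshold(theta, specific), 4))
    # Specific gain lowers thresholds near the gained orientation (90 deg) while
    # leaving distant orientations (45 deg) nearly unchanged; other mechanisms can
    # mimic this in a single design, which is why multiple designs are needed.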

What can be inferred about invariance of visual representations from fMRI decoding studies?

Speaker: Fabian A. Soto, Florida International University
Additional Authors: Sanjay Narasiwodeyar, Florida International University

Many research questions in vision science involve determining whether stimulus properties are represented and processed independently in the brain. Unfortunately, most previous research has only vaguely defined what is meant by “independence,” which hinders its precise quantification and testing. Here we develop a new framework that links general recognition theory from psychophysics with encoding models from computational neuroscience. We focus on separability, a special form of independence that is equivalent to the concept of “invariance” often used by vision scientists, but we show that other types of independence can be formally defined within the theory. We show how this new framework allows us to precisely define separability of neural representations and to link this definition theoretically to psychophysical and neuroimaging tests of independence and invariance. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In addition, two commonly used operational tests of independence are re-interpreted within this new theoretical framework, providing insight into their correct use and interpretation. Finally, we discuss the results of an fMRI study used to validate and compare several tests of representational invariance, and confirm that the relations among them proposed by the theory are correct.
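One commonly used operational test, cross-classification (train a decoder on dimension A at one level of dimension B, then test it at another level of B), is illustrated below. The simulated patterns and the way invariance is broken are assumptions for illustration only; the framework described in the talk concerns when results of this kind of test do or do not license conclusions about invariance at the neural level.

    # Schematic of a common cross-classification test of invariance; simulated
    # patterns and the way invariance is broken are assumptions for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_vox, n_per_cell = 40, 60
    mu_a = rng.normal(size=n_vox)        # voxel pattern carrying dimension A
    mu_a_alt = rng.normal(size=n_vox)    # different A pattern when B changes
    mu_b = rng.normal(size=n_vox)        # additive effect of dimension B

    def patterns(a, b, separable):
        """Simulated voxel patterns for level a of A presented at level b of B."""
        effect_a = mu_a if (separable or b == 0) else mu_a_alt
        return a * effect_a + b * mu_b + rng.normal(size=(n_per_cell, n_vox))

    y = np.repeat([0, 1], n_per_cell)
    for separable in (True, False):
        # Train an A-decoder at B = 0, then test it at B = 1.
        Xtr = np.vstack([patterns(0, 0, separable), patterns(1, 0, separable)])
        Xte = np.vstack([patterns(0, 1, separable), patterns(1, 1, separable)])
        clf = LogisticRegression(max_iter=1000).fit(Xtr, y)
        print("separable:" if separable else "non-separable:",
              "cross-decoding accuracy =", clf.score(Xte, y))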
