Beyond representation and attention: Cognitive modulations of activity in visual cortex

Friday, May 13, 2022, 12:00 – 2:00 pm EDT, Talk Room 2

Organizers: Alex White1, Kendrick Kay2; 1Barnard College, Columbia University, 2University of Minnesota
Presenters: Alex L. White, Clare Press, Charlie S. Burlingham, Clayton E. Curtis, Jesse Breedlove


The concept of sensory representation has been immensely productive for studying visual cortex, especially in the context of ‘image-computable models’ of visually evoked responses. At the same time, many experiments have demonstrated that various forms of attention modulate those evoked responses. Several computational models of attention explain how task-relevant stimuli are represented more faithfully than task-irrelevant stimuli. However, these models still paint an incomplete picture of processing in visual cortex. Activity in visual brain regions has been shown to depend on complex interactions between bottom-up sensory input and task demands. In many cases, that activity is affected by cognitive factors that are not clearly related to sensory representation or attention, such as memory, arousal, and expectation. This symposium will bring together complementary perspectives on cognitive effects on activity in visual cortex. Each speaker will present a recently studied interaction of vision and cognition and how it manifests in experimental data. In addition, the speakers will consider the underlying mechanisms for the effects they observe. Key questions include: Are visual representations simply enhanced for any behaviorally relevant stimulus, or do task-specific neural networks modulate visual cortex only in the presence of specific stimuli? How do we interpret activity observed in the absence of retinal stimulation? Are there distinct representational systems for visual working memory, imagery, and expectations? In a final panel discussion, we will broach additional fundamental issues: To what extent is it possible to study representation in the absence of manipulating cognition? How can we build formal models that account for the range of cognitive and sensory effects in visual cortex? Each of the 5 speakers will be allotted 15 minutes for presentation plus 3 minutes for audience questions, for a total of 18 minutes per speaker. The final panel discussion will last 30 minutes, moderated by Kendrick Kay, who will open with a brief summary that attempts to integrate the studies presented by the speakers and weave together a coherent bigger picture of the challenges and goals of studying cognitive effects on activity in visual cortex. Discussion will then follow, with questions posed by the moderator as well as questions solicited from the audience.

Presentations

High specificity of top-down modulation in word-selective cortex

Alex L. White1, Kendrick Kay2, Jason D. Yeatman3; 1Barnard College, Columbia University, 2University of Minnesota, 3Stanford University

Visual cortex is capable of processing a wide variety of stimuli for any number of behavioral tasks. So how does the specific information required for a given task get selected and routed to the other brain regions that need it? In general, stimuli that are relevant to the current task evoke stronger responses than stimuli that are irrelevant, due to attentional selection on the basis of visual field location or non-spatial features. We will first review evidence that such attentional effects occur in category-selective regions, such as the visual word form area, as well as in early retinotopic regions. We will then present evidence for top-down effects that are not domain-general but highly specific to task demands, stimulus features, and brain region. We measured fMRI responses to written words and non-letter shapes in retinotopic areas as well as in word- and face-selective regions of ventral occipitotemporal cortex. In word-selective regions, letter strings evoked much larger responses when they were task-relevant (during a lexical decision task) than when they were irrelevant (during a color-change task on the fixation mark). However, non-letter shapes evoked smaller responses when they were task-relevant than when irrelevant. This surprising modulation pattern was specific to word-selective regions, where response variability was also highly correlated with activity in a region of the precentral sulcus involved in spoken language. Therefore, we suggest that top-down modulations in visual cortex do not just generally enhance task-relevant stimuli and filter out irrelevant ones, but can reflect targeted communication with broader networks recruited for specific tasks.
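
To make the kind of contrast described above concrete, here is a minimal analysis sketch. It is not the authors' code: the response values are simulated, and the modulation index is a generic task-relevance contrast, but it illustrates the reported pattern of enhancement for task-relevant words alongside suppression for task-relevant shapes.

```python
import numpy as np

def modulation_index(relevant: np.ndarray, irrelevant: np.ndarray) -> float:
    """Signed task-relevance modulation: positive when responses are larger
    for task-relevant than task-irrelevant presentations, negative otherwise."""
    r, i = relevant.mean(), irrelevant.mean()
    return (r - i) / (r + i)

# Hypothetical per-trial response amplitudes (% signal change), illustrative only.
rng = np.random.default_rng(0)
words_relevant    = rng.normal(1.2, 0.2, 40)  # words during lexical decision
words_irrelevant  = rng.normal(0.6, 0.2, 40)  # words during fixation-color task
shapes_relevant   = rng.normal(0.5, 0.2, 40)  # shapes during a shape task (hypothetical label)
shapes_irrelevant = rng.normal(0.8, 0.2, 40)  # shapes during fixation-color task

print(modulation_index(words_relevant, words_irrelevant))    # > 0: enhancement
print(modulation_index(shapes_relevant, shapes_irrelevant))  # < 0: suppression
```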

The influence of expectation on visual cortical processing

Clare Press1,2, Emily Thomas1,3, Daniel Yon1; 1Birkbeck, University of London, 2University College London, 3New York University

It is widely assumed that we must use predictions to determine the nature of our perceptual experiences. Work from the last few years suggests that the supporting mechanisms operate via top-down modulation of sensory processing. However, theories within the domain of action concerning the operation of these mechanisms are at odds with those from other perceptual disciplines. Specifically, action theories propose that we cancel predicted events from perceptual processing to render our experiences informative, telling us what we did not already know. In contrast, theories outside of action, typically couched within Bayesian frameworks, propose that we combine our predictions (priors) with the evidence (likelihood) to determine perception (posterior), with predictions sharpening processing in early sensory regions. In this talk I will present three fMRI studies from our lab that ask how these predictions really shape early visual processing: whether action predictions in fact shape visual processing differently from other types of prediction, and whether representations differ across cortical laminae. The studies compare processing of observed avatar movements and of simple grating events, examining both the information content associated with each stimulus type and the signal level across different types of voxels. We conclude that action expectations exhibit a sharpening effect on visual processing similar to that of other expectations, rendering our perception more veridical on average. Future work must now establish how we also use our predictions, across domains, to yield informative experiences.
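
For readers less familiar with the Bayesian framing invoked above, the standard textbook combination of a Gaussian prior with a Gaussian likelihood (a generic formulation, not specific to these studies) is:

```latex
% Posterior is proportional to likelihood times prior:
p(s \mid e) \propto p(e \mid s)\, p(s)
% For a Gaussian prior N(\mu_p, \sigma_p^2) and likelihood N(\mu_\ell, \sigma_\ell^2),
% the posterior mean is a reliability-weighted average, and its variance shrinks:
\mu_{\mathrm{post}} = \frac{\mu_p/\sigma_p^2 + \mu_\ell/\sigma_\ell^2}{1/\sigma_p^2 + 1/\sigma_\ell^2},
\qquad
\sigma_{\mathrm{post}}^2 = \left( \frac{1}{\sigma_p^2} + \frac{1}{\sigma_\ell^2} \right)^{-1}
```

The "sharpening" described in the talk corresponds to this reduction in posterior variance: combining a prediction with the sensory evidence yields an estimate more precise than the evidence alone.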

Task-related activity in human visual cortex

Charlie S. Burlingham1, Zvi Roth2, Saghar Mirbagheri3, David J. Heeger1, Elisha P. Merriam2; 1New York University, 2National Institute of Mental Health, National Institutes of Health, 3University of Washington

Early visual cortex exhibits widespread hemodynamic responses during task performance even in the absence of a visual stimulus. Unlike the effects of spatial attention, these “task-related responses” rise and fall around trial onsets, are spatially diffuse, and even occur in complete darkness. In visual cortex, task-related and stimulus-evoked responses are similar in amplitude and sum together. Therefore, to interpret BOLD fMRI signals, it is critical to characterize task-related responses and understand how they change with task parameters. We measured fMRI responses in early visual cortex (V1/2/3) while human observers judged the orientation of a small peripheral grating in the right visual field. We measured task-related responses by only analyzing voxels in the ipsilateral hemisphere, i.e., far from the stimulus representation. Task-related responses were present in all observers. Response amplitude and timing precision were modulated by task difficulty, reward, and behavioral performance, variables that are frequently manipulated in cognitive neuroscience experiments. Surprising events, e.g., responding incorrectly when the task was easy, produced the largest modulations. Response amplitude also covaried with peripheral signatures of arousal, including pupil dilation and changes in heart rate. Our findings demonstrate that activity in early visual cortex reflects internal state — to such a large extent that behavioral performance can have a greater impact on BOLD activity than a potent visual stimulus. We discuss the possible physiological origins of task-related responses, what information about internal state can be gleaned from them, and analytic approaches for modelling them.
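
One common way to formalize the summation of task-related and stimulus-evoked responses is a general linear model with a separate regressor for each component. The sketch below is an illustration under assumed parameters, not the authors' pipeline: the TR, trial timing, and double-gamma HRF are all placeholders.

```python
import numpy as np
from scipy.stats import gamma

TR, n_vols = 1.0, 300
t = np.arange(0, 30, TR)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)  # canonical double-gamma HRF
hrf /= hrf.max()

def regressor(onsets, dur=1.0):
    """Boxcar at the given onsets (in seconds), convolved with the HRF."""
    box = np.zeros(n_vols)
    for on in onsets:
        box[int(on / TR):int((on + dur) / TR)] = 1.0
    return np.convolve(box, hrf)[:n_vols]

trial_onsets = np.arange(10, 290, 20.0)  # every trial: task-related component
stim_onsets = trial_onsets[::2]          # stimulus shown on half of the trials

X = np.column_stack([
    regressor(trial_onsets),  # trial-locked, spatially diffuse task-related response
    regressor(stim_onsets),   # stimulus-evoked response
    np.ones(n_vols),          # baseline
])

# For a voxel time series y of shape (n_vols,), estimate both amplitudes at once:
# betas, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Restricting the analysis to ipsilateral voxels, as in the study, isolates the task-related component, because the stimulus-evoked contribution is negligible far from the stimulus representation.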

Unveiling the abstract format of mnemonic representations

Clayton E. Curtis1, Yuna Kwak1; 1New York University

Working memory (WM) enables information storage for future use, bridging the gap between perception and behavior. We hypothesize that WM representations are abstractions of low-level perceptual features. Yet the neural nature of these putative abstract representations has thus far remained impenetrable. Here, we first demonstrate that distinct visual stimuli (oriented gratings and moving dots) are flexibly re-coded into the same WM format in visual and parietal cortex when that representation is useful for memory-guided behavior. Next, we aimed to reveal the latent nature of this abstract WM representation. We predicted that the spatial distribution of higher response amplitudes across a topographic map forms a line at the remembered angle, as if the retinal positions along that line were actually visually stimulated. To test this, we reconstructed the spatial profile of neural activity during WM by projecting the amplitudes of voxel activity during the delay period, for each orientation and direction condition, into visual field space, using parameters obtained from population receptive field models of each visual map. Remarkably, this visualization technique unveiled a stripe encoded in the amplitudes of voxel activity at an angle matching the remembered feature in many of the visual maps. Finally, we used models of V1 to demonstrate the feasibility of such a working memory mechanism and to rule out potential confounds. We conclude that mnemonic representations in visual cortex are abstractions of percepts that are more efficient than the percepts themselves and more proximal to the behaviors they guide.
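
The projection step is the technical heart of this abstract, so a minimal sketch may help. The code below is an illustration under assumed data shapes, not the authors' implementation: each voxel's delay-period amplitude weights an isotropic 2D Gaussian placed at its pRF center, and the weighted sum is accumulated over a visual-field grid.

```python
import numpy as np

def reconstruct(amp, x0, y0, sigma, extent=10.0, res=101):
    """amp: (n_voxels,) delay-period amplitudes; x0, y0, sigma: (n_voxels,)
    pRF centers and sizes in degrees. Returns a (res, res) visual-field map."""
    grid = np.linspace(-extent, extent, res)
    xx, yy = np.meshgrid(grid, grid)
    image = np.zeros_like(xx)
    for a, x, y, s in zip(amp, x0, y0, sigma):
        image += a * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * s ** 2))
    return image

# Hypothetical pRF fits; amplitudes fall off with distance from a 45-degree line
# through fixation, so the reconstruction should show a stripe at the
# "remembered" orientation.
rng = np.random.default_rng(1)
n = 500
x0, y0 = rng.uniform(-8, 8, n), rng.uniform(-8, 8, n)
sigma = rng.uniform(0.5, 2.0, n)
theta = np.deg2rad(45)
dist_from_line = np.abs(x0 * np.sin(theta) - y0 * np.cos(theta))
amp = np.exp(-dist_from_line ** 2 / 4)
field = reconstruct(amp, x0, y0, sigma)
```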

With or without the retina: analyses of non-optic visual activity in the brain

Jesse Breedlove1, Ghislain St-Yves1, Logan Dowdle1, Tom Jhou2, Cheryl Olman1, Thomas Naselaris1; 1University of Minnesota, 2Medical University of South Carolina

One way to investigate the contribution of cognition to activity in visual cortex is to fix or remove the retinal input altogether. There are many such non-optic visual experiences to draw from (e.g., mental imagery, synesthesia, hallucinations), all of which produce brain activity patterns consistent with the visual content of the experience. But how does the visual system manage both to accurately represent the external world and to synthesize visual experiences? We approach this question by expanding on the theory that the human visual system embodies a probabilistic generative model of the visual world. We propose that retinal vision is just one form of inference that this internal model can support, and that activity in visual cortex observed in the absence of retinal stimulation can be interpreted as the most probable consequences unpacked from imagined, remembered, or otherwise assumed causes. When applied to mental imagery, this theory predicts that the encoding of imagined stimuli in low-level visual areas will resemble the encoding of seen stimuli in higher areas. We confirmed this prediction by estimating imagery encoding models from brain activity measured while subjects imagined complex visual stimuli in the presence of unchanging retinal input. In a separate fMRI study, we investigated a far rarer form of non-optic vision: a case subject who, after losing their sight to retinal degeneration, now “sees” objects they touch or hear. The existence of this phenomenon further supports the idea that visual perception is a generative process that depends as much on top-down inference as on retinal input.
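
As a rough illustration of the encoding-model logic in this abstract, the sketch below fits voxelwise encoding models separately for seen and imagined conditions; the ridge regression, feature space, penalty, and data shapes are all hypothetical stand-ins. The abstract's prediction would then be tested by comparing imagery weights in low-level areas against perception weights in higher areas.

```python
import numpy as np

def fit_ridge(X, Y, lam=1.0):
    """X: (n_trials, n_features) stimulus features; Y: (n_trials, n_voxels)
    responses. Returns (n_features, n_voxels) encoding weights."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)

# Hypothetical feature matrices and voxel responses (shapes are illustrative).
rng = np.random.default_rng(2)
X_seen, Y_seen = rng.normal(size=(200, 50)), rng.normal(size=(200, 1000))
X_imag, Y_imag = rng.normal(size=(100, 50)), rng.normal(size=(100, 1000))

W_seen = fit_ridge(X_seen, Y_seen)  # perception encoding model
W_imag = fit_ridge(X_imag, Y_imag)  # imagery encoding model

# Prediction from the abstract: W_imag for low-level (e.g., V1) voxels should
# resemble W_seen for higher-area voxels, e.g., via correlations between
# corresponding weight profiles.
```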
