Neuromodulation of Visual Perception

Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 6-8
Organizers: Jutta Billino, Justus-Liebig-University Giessen and Ulrich Ettinger, Rheinische Friedrich-Wilhelms-Universität Bonn
Presenters: Anita A. Disney, Alexander Thiele, Behrad Noudoost, Ariel Rokem, Ulrich Ettinger, Patrick J. Bennett

< Back to 2012 Symposia

Symposium Description

Over the last decades, research on the neurobiological mechanisms of visual perception has accumulated an impressive knowledge base. However, only recently has research started to uncover how different neurotransmitters affect visual processing. Advances in this area expand our understanding of the complex regulation of sensory and sensorimotor processes. They moreover shed light on the mechanisms underlying individual differences in visual perception and oculomotor control that have been repeatedly observed but are still insufficiently understood. The symposium aims to bring together experts in the field who complement each other with regard to neurotransmitter systems, methods, and the implications of their findings. The audience will thus be provided with an up-to-date overview of our knowledge of the neuromodulation of visual perception. The symposium will start with presentations of physiological data showing the complexity of neuromodulation in early visual cortex. Anita Disney (Salk Institute) has worked with Mike Hawken (New York University) on cholinergic mechanisms in macaque V1. Their findings show that nicotinic receptors for acetylcholine are involved in gain modulation, and the effects of nicotine application resemble those of attention in the awake monkey. It has therefore been suggested that attentional effects on V1 activity might be partly mediated by acetylcholine. The presentation by Alexander Thiele and colleagues (Newcastle University) will tie in with the focus on attention. They have studied the differential contributions of acetylcholine and glutamate to attentional modulation in V1 and were able to show that both neurotransmitters independently influence the firing characteristics of V1 neurons associated with enhanced attention. The work of Behrad Noudoost and Tirin Moore (Stanford University) addresses prefrontal control of visual cortical signals mediated by dopamine. Their findings reveal that dopaminergic manipulation in the frontal eye fields not only affects saccadic target selection but also modulates the response characteristics of V4 neurons.

In the second part of the symposium, presentations bridge the gap between insights from physiology and behavioral data in humans. Ariel Rokem (Stanford University) and Michael Silver (UC Berkeley) pharmacologically enhanced cholinergic transmission in healthy humans and studied perceptual learning. Their results indicate that acetylcholine increases the effects of perceptual learning, which points to its role in the regulation of neural plasticity. Ulrich Ettinger (Ludwig-Maximilians-University Munich) will summarize his work on the modulation of oculomotor control by cholinergic and dopaminergic challenges. He has studied the effects of pharmacological manipulation as well as of functional genetics on saccadic eye movements; his methods also include imaging and clinical neuropsychology. The symposium will be completed by a presentation by Patrick Bennett and Allison Sekuler (McMaster University) on age-related changes in visual perception and how these can be modeled by altered neurotransmitter activity. The symposium on neuromodulation of visual perception will attract a broad audience because it offers a comprehensive and interdisciplinary overview of recent advances in this innovative research area. Presentations cover fundamental mechanisms of visual processing as well as implications for perception and visuomotor control. Attendees with diverse backgrounds will benefit and will be inspired to apply insights into neuromodulation to their own research fields.

Presentations

Modulating visual gain: cholinergic mechanisms in macaque V1

Anita A. Disney, Salk Institute

Michael J. Hawken, Center for Neural Science, New York University

Cholinergic neuromodulation has been suggested to underlie arousal and attention in mammals. Acetylcholine (ACh) is released in cortex by volume transmission, and so specificity in its effects must largely be conferred by selective expression of ACh receptors (AChRs). To dissect the local circuit action of ACh, we have used both quantitative anatomy and in vivo physiology and pharmacology during visual stimulation in macaque primary visual cortex (V1). We have shown that nicotinic AChRs are found presynaptically at thalamocortical synapses arriving at spiny neurons in layer 4c of V1 and that nicotine acts in this layer to enhance the gain of visual neurons. Similar evidence for nicotinic enhancement of thalamocortical transmission has been found in the primary cortices of other species and across sensory systems. In separate experiments we have shown that, amongst intrinsic V1 neurons, a higher proportion of GABAergic neurons, in particular parvalbumin-immunoreactive neurons, express muscarinic AChRs than do excitatory neurons. We have also shown that ACh strongly suppresses visual responses outside layer 4c of macaque V1 and that this suppression can be blocked using a GABAa receptor antagonist. Suppression by ACh has been demonstrated in other cortical model systems but is often found to be mediated by reduced glutamate release rather than enhanced release of GABA. Recent anatomical data on AChR expression in the extrastriate visual cortex of the macaque, and in V1 of rats, ferrets, and humans, suggest that there may be variation in the targeting of muscarinic mechanisms across neocortical model systems.

Differential contribution of cholinergic and glutamatergic receptors to attentional modulation in V1

Alexander Thiele, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Jose Herrero, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom; Alwin Gieselmann, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom

In V1, attentional modulation of firing rates depends on cholinergic (muscarinic) mechanisms (Herrero et al., 2008). Modelling suggests that appropriate ACh drive enables top-down feedback from higher cortical areas to exert its influence (Deco & Thiele, 2011). The implementation of such feedback at the transmitter/receptor level is poorly understood, but it is generally assumed that feedback relies on ionotropic glutamatergic (iGluR) mechanisms. We investigated this possibility by combining iontophoretic pharmacological analysis with V1 cell recordings while macaques performed a spatial attention task. Blockade or activation of iGluRs did not alter attention-induced increases in firing rate, when compared to attend-away conditions. However, attention reduced firing rate variance, as previously reported in V4 (Mitchell, Sundberg, & Reynolds, 2007), and this reduction depended on functioning iGluRs. Attention also reduced spike coherence between simultaneously recorded neurons in V1, as previously demonstrated for V4 (Cohen & Maunsell, 2009; Mitchell et al., 2007). Again, this reduction depended on functioning iGluRs. Thus, overall excitatory drive (probably aided by feedback) increased the signal-to-noise ratio (reduced firing rate variance) and reduced the redundancy of information transmission (noise correlation) in V1. Conversely, attention-induced firing rate differences are enabled by the cholinergic system. These studies identify independent contributions of different neurotransmitter systems to attentional modulation in V1.

Dopamine-mediated prefrontal control of visual cortical signals

Behrad Noudoost, Department of Neurobiology, Stanford University School of Medicine, Tirin Moore, Department of Neurobiology, Stanford University School of Medicine & Howard Hughes Medical Institute, Stanford University School of Medicine

Prefrontal cortex (PFC) is believed to play a crucial role in the executive control of cognitive functions. Part of this control is thought to be achieved by controlling sensory signals in posterior sensory cortices. Dopamine is known to play a role in modulating the strength of signals within the PFC. We tested whether this neurotransmitter is involved in the PFC's top-down control of signals within posterior sensory areas. We recorded responses of neurons in visual cortex (area V4) before and after infusion of the D1 receptor (D1R) antagonist SCH23390 into the frontal eye field (FEF) in monkeys performing visual fixation and saccadic target selection tasks. Visual stimuli were presented within the shared response fields of simultaneously studied V4 and FEF sites. We found that modulation of D1R-mediated activity within the FEF enhances the strength of visual signals in V4 and increases the monkeys' tendency to choose targets presented within the affected part of visual space. Similar to the D1R manipulation, modulation of D2R-mediated activity within the FEF also increased saccadic target selection; however, it failed to alter visual responses within area V4. The observed role of D1Rs in mediating the control of visual cortical signals and the selection of visual targets, coupled with their known role in working memory, suggests PFC dopamine is a key player in the control of cognitive functions.

Cholinergic enhancement of perceptual learning in the human visual system

Ariel Rokem, Department of Psychology, Stanford University, Michael A. Silver, Helen Wills Neuroscience Institute and School of Optometry, University of California, Berkeley

Learning from experience underlies our ability to adapt to novel tasks and unfamiliar environments. But how does the visual system know when to adapt and change and when to remain stable? The neurotransmitter acetylcholine (ACh) has been shown to play a critical role in cognitive processes such as attention and learning. Previous research in animal models has shown that plasticity in sensory systems often depends on the task relevance of the stimulus, but experimentally increasing ACh in cortex can replace task relevance in inducing experience-dependent plasticity. Perceptual learning (PL) is a specific and persistent improvement in performance of a perceptual task with training. To test the role of ACh in PL of visual discrimination, we pharmacologically enhanced cholinergic transmission in the brains of healthy human participants by administering the cholinesterase inhibitor donepezil (trade name: Aricept), a commonly prescribed treatment for Alzheimer’s disease. To directly evaluate the effect of cholinergic enhancement, we conducted a double-blind, placebo-controlled cross-over study, in which each subject participated in a course of training under placebo and a course of training under donepezil. We found that, relative to placebo, donepezil increased the magnitude and specificity of the improvement in perceptual performance following PL. These results suggest that ACh plays a role in highlighting occasions in which learning should occur. Specifically, ACh may regulate neural plasticity by selectively increasing responses of neurons to behaviorally relevant stimuli.

Pharmacological Influences on Oculomotor Control in Healthy Humans

Ulrich Ettinger, Rheinische Friedrich-Wilhelms-Universität Bonn

Oculomotor control can be studied as an important model system for our understanding of how the brain implements visually informed (reflexive and voluntary) movements. A number of paradigms have been developed to investigate specific aspects of the cognitive and sensorimotor processes underlying this fascinating ability of the brain. For example, saccadic paradigms allow the specific and experimentally controlled study of response inhibition as well as temporo-spatial prediction. In this talk I will present recent data from studies investigating pharmacological influences on saccadic control in healthy humans. Findings from nicotine studies point to improvements of response inhibition and volitional response generation through this cholinergic agonist. Evidence from methylphenidate, on the other hand, suggests that oculomotor as well as motor response inhibition is unaffected by this dopaminergic manipulation, whereas the generation of saccades to temporally predictive visual targets is improved. These findings will be integrated with our published and ongoing work on the molecular genetic correlates of eye movements as well as their underlying brain activity. I will conclude by (1) summarising the pharmacological mechanisms underlying saccadic control and (2) emphasising the role that such oculomotor tasks may play in the evaluation of potential cognition-enhancing compounds, with implications for neuropsychiatric conditions such as ADHD, schizophrenia and dementia.

The effects of aging on GABAergic mechanisms and their influence on visual perception

Patrick J. Bennett and Allison B. Sekuler, Department of Psychology, Neuroscience & Behaviour, McMaster University

The functional properties of visual mechanisms, such as the tuning properties of visual cortical neurons, are thought to emerge from an interaction between excitatory and inhibitory neural mechanisms. Hence, changing the balance between excitation and inhibition should lead, at least in some cases, to measurable changes in these mechanisms and, presumably, in visual perception. Recent evidence suggests that aging is associated with changes in GABAergic signaling (Leventhal et al., 2003; Pinto et al., 2010); however, it remains unclear how these changes manifest themselves in performance on psychophysical tasks. Specifically, some psychophysical studies (Betts et al., 2005; Wilson et al., 2011), but not all, are consistent with the idea that certain aspects of age-related changes in vision are caused by a reduction in the effectiveness of cortical inhibitory circuits. In my talk I will review the evidence showing that aging is related to changes in GABAergic mechanisms and the challenges associated with linking such changes to psychophysical performance.


Human visual cortex: from receptive fields to maps to clusters to perception

Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 4-5
Organizer: Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
Presenters: Serge O. Dumoulin, Koen V. Haak, Alex R. Wade, Mark M. Schira, Stelios M. Smirnakis, Alyssa A. Brewer


Symposium Description

The organization of the visual system can be described at different spatial scales. At the smallest scale, the receptive field is a property of individual neurons and summarizes the region of the visual field where visual stimulation elicits a response. These receptive fields are organized into visual field maps, in which neighboring neurons process neighboring parts of the visual field. Many visual field maps exist, suggesting that every map contains a unique representation of the visual field. This notion relates visual field maps to the idea of functional specialization, i.e. that separate cortical regions are involved in different processes. However, the computational processes within a visual field map do not have to coincide with perceptual qualities. Indeed, most perceptual functions are associated with multiple visual field maps and even multiple cortical regions. Visual field maps are organized in clusters that share a similar eccentricity organization. This has led to the proposal that perceptual specializations correlate with clusters rather than individual maps. This symposium will highlight current concepts of the organization of visual cortex and their relation to perception and plasticity. The speakers have used a variety of neuroimaging techniques, with a focus on conventional functional magnetic resonance imaging (fMRI) approaches but also including high-resolution fMRI, electroencephalography (EEG), subdural electrocorticography (ECoG), and invasive electrophysiology. We will describe data-analysis techniques to reconstruct receptive field properties of neural populations, and extend them to visual field maps and clusters within human and macaque visual cortex. We describe the way these receptive field properties vary within and across different visual field maps. Next, we extend conventional stimulus-referred notions of the receptive field to neural-referred properties, i.e. cortico-cortical receptive fields that capture the information flow between visual field maps. We also demonstrate techniques to reveal extra-classical receptive field interactions similar to those seen in classical psychophysical "surround suppression" in both S-cone and achromatic pathways. Next, we will consider the detailed organization within the foveal confluence and model the unique constraints that are associated with this organization. Furthermore, we will consider how these neural properties change with the state of chronic visual deprivation due to damage to the visual system, and in subjects with severely altered visual input due to prism adaptation. The link between the organization of visual cortex, perception, and plasticity is a fundamental part of vision science. The symposium highlights these links at various spatial scales. In addition, attendees will gain insight into a broad spectrum of state-of-the-art neuroimaging data-acquisition and data-analysis techniques. Therefore, we believe that this symposium will be of interest to a wide range of visual scientists, including students, researchers, and faculty.

Presentations

Reconstructing human population receptive field properties

Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands, B.M. Harvey, Experimental Psychology, Utrecht University, Netherlands

We describe a method that reconstructs population receptive field (pRF) properties in human visual cortex using fMRI. This data-analysis technique is able to reconstruct several properties of the underlying neural population, such as quantitative estimates of pRF position (maps) and size, as well as suppressive surrounds. pRF sizes increase with increasing eccentricity and up the visual hierarchy. In the same human subject, fMRI pRF measurements are comparable to those derived from subdural electrocorticography (ECoG). Furthermore, we describe a close relationship between pRF size and the cortical magnification factor (CMF). Within V1, interhemisphere and between-subject variations in CMF, pRF size, and V1 surface area are correlated. This suggests a constant processing unit shared between humans. pRF sizes increase between visual areas and with eccentricity, but when expressed in terms of V1 cortical surface area (i.e., cortico-cortical pRFs), they are constant across eccentricity in V2 and V3. Thus, V2, V3, and to some degree hV4, sample from a constant extent of V1. This underscores the importance of V1 architecture as a reference frame for subsequent processing stages and, ultimately, perception.
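The pRF approach (Dumoulin & Wandell, Neuroimage, 2008) fits each voxel with a model receptive field whose overlap with the stimulus aperture predicts the response over time. The following is a minimal sketch of that forward model with a toy sweeping-bar stimulus; the function names and parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, x_grid, y_grid):
    """2D Gaussian population receptive field over the visual field."""
    return np.exp(-((x_grid - x0) ** 2 + (y_grid - y0) ** 2) / (2 * sigma ** 2))

def predict_timecourse(stimulus, prf):
    """Predicted response at each time point: overlap between the binary
    stimulus aperture (T x ny x nx) and the pRF."""
    return stimulus.reshape(stimulus.shape[0], -1) @ prf.ravel()

# Toy stimulus: a vertical bar sweeping left to right across a 21x21 field
n = 21
xs = np.linspace(-10, 10, n)
x_grid, y_grid = np.meshgrid(xs, xs)
stimulus = np.zeros((n, n, n))
for t in range(n):
    stimulus[t, :, t] = 1.0  # bar occupies column t at time t

prf = gaussian_prf(x0=2.0, y0=0.0, sigma=1.5, x_grid=x_grid, y_grid=y_grid)
pred = predict_timecourse(stimulus, prf)
best_t = int(np.argmax(pred))
print(xs[best_t])  # the response peaks when the bar crosses the pRF center: 2.0
```

A full analysis additionally convolves this prediction with a hemodynamic response function and searches over position and size (and optionally a suppressive surround) for the parameters that best fit each voxel's measured time course.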

Cortico-cortical receptive field modeling using functional magnetic resonance imaging (fMRI)

Koen V. Haak, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands, J. Winawer, Psychology, Stanford University; B.M. Harvey, Experimental Psychology, Utrecht University; R. Renken, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands; S.O. Dumoulin, Experimental Psychology, Utrecht University, Netherlands; B.A. Wandell, Psychology, Stanford University; F.W. Cornelissen, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands

The traditional way to study the properties of cortical visual neurons is to measure responses to visually presented stimuli (stimulus-referred). A second way to understand neuronal computations is to characterize responses in terms of responses in other parts of the nervous system (neural-referred). A model that describes the relationship between responses in distinct cortical locations is essential to clarify the network of cortical signaling pathways. Just as a stimulus-referred receptive field predicts the neural response as a function of the stimulus contrast, the neural-referred receptive field predicts the neural response as a function of responses elsewhere in the nervous system. When applied to two cortical regions, this function can be called the population cortico-cortical receptive field (CCRF), and it can be used to assess the fine-grained topographic connectivity between early visual areas. Here, we model the CCRF as a Gaussian-weighted region on the cortical surface and apply the model to fMRI data from both stimulus-driven and resting-state experimental conditions in visual cortex to demonstrate that 1) higher-order visual areas such as V2, V3, hV4 and the LOC show an increase in CCRF size when sampling from the V1 surface, 2) the CCRF size of these higher-order visual areas is constant over the V1 surface, 3) the method traces inherent properties of the visual cortical organization, and 4) it probes the direction of the flow of information.
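The same forward-modeling logic carries over: for a CCRF the Gaussian is defined over the cortical surface rather than the visual field, and the "input" is the set of V1 time series. A hypothetical one-dimensional sketch with a grid-search fit follows; the simulated data, names, and fitting procedure are assumptions for illustration, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical V1 data: 50 cortical nodes along one surface dimension,
# each with a 120-timepoint response
n_nodes, n_t = 50, 120
v1_pos = np.arange(n_nodes, dtype=float)
v1_resp = rng.standard_normal((n_t, n_nodes))

def ccrf_predict(center, size):
    """Predicted target time course: Gaussian-weighted pooling of V1 nodes."""
    w = np.exp(-(v1_pos - center) ** 2 / (2 * size ** 2))
    return v1_resp @ (w / w.sum())

# Simulate a target voxel that truly pools V1 around node 20 with size 3
target = ccrf_predict(20.0, 3.0) + 0.1 * rng.standard_normal(n_t)

# Fit by grid search, maximizing correlation with the target time course
center_fit, size_fit = max(
    ((c, s) for c in range(n_nodes) for s in (1.0, 2.0, 3.0, 5.0, 8.0)),
    key=lambda p: np.corrcoef(ccrf_predict(*p), target)[0, 1],
)
print(center_fit, size_fit)  # recovers parameters near the simulated (20, 3)
```

Comparing the fitted `size_fit` across target areas is the kind of measurement behind point 1) above: areas further up the hierarchy pool over a larger extent of the V1 surface.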

Imaging extraclassical receptive fields in early visual cortex

Alex R. Wade, Department of Psychology, University of York, Heslington, UK; B. Xiao, Department of Brain and Cognitive Sciences, MIT; J. Rowland, Department of Art Practice, UC Berkeley

Psychophysically, apparent color and contrast can be modulated by long-range contextual effects. In this talk I will describe a series of neuroimaging experiments that we have performed to examine the effects of spatial context on color and contrast signals in early human visual cortex. Using fMRI, we first show that regions of high contrast in the fovea exert a long-range suppressive effect across visual cortex that is consistent with a contrast gain control mechanism. This suppression is weaker for stimuli that excite the chromatic pathways and may occur relatively early in the visual processing stream (Wade & Rowland, J Neurosci, 2010). We then used high-resolution source-imaged EEG to examine the effects of context on V1 signals initiated in different chromatic and achromatic precortical pathways (Xiao & Wade, J Vision, 2010). We found that contextual effects similar to those seen in classical psychophysical "surround suppression" were present in both S-cone and achromatic pathways, but that there was little contextual interaction between these pathways, either in our behavioral or in our neuroimaging paradigms. Finally, we used fMRI multivariate pattern analysis techniques to examine the presence of chromatic tuning in large extraclassical receptive fields (ECRFs). We found that ECRFs have sufficient chromatic tuning to enable classification based solely on information in suppressed voxels that are not directly excited by the stimulus. In many cases, performance using ECRFs was as accurate as that using voxels driven directly by the stimulus.

The human foveal confluence and high resolution fMRI

Mark M. Schira, Neuroscience Research Australia (NeuRA), Sydney & University of New South Wales, Sydney, Australia

After remaining terra incognita for 40 years, the detailed organization of the foveal confluence has only recently been described in humans. I will present recent high-resolution mapping results in human subjects and introduce current concepts of its organization in humans and other primates (Schira et al., J Neurosci, 2009). I will then introduce a new algebraic retino-cortical projection function that accurately models the V1-V3 complex to the level of our knowledge of the actual organization (Schira et al., PLoS Comput Biol, 2010). Informed by this model, I will discuss important properties of foveal cortex in primates. These considerations demonstrate that the observed organization, though surprising at first glance, is in fact a good compromise with respect to cortical surface area and local isotropy, providing a potential explanation for this organization. Finally, I will discuss recent advances such as multi-channel head coils and parallel imaging, which have greatly improved the quality and possibilities of MRI. Unfortunately, most fMRI research is still essentially performed in the same old 3 by 3 by 3 mm style, which was adequate when using a 1.5T scanner and a birdcage head coil. I will introduce simple high-resolution techniques that allow fairly accurate estimates of the foveal organization in research subjects within a reasonable timeframe of approximately 20 minutes, providing a powerful tool for research on foveal vision.

Population receptive field measurements in macaque visual cortex

Stelios M. Smirnakis, Departments of Neuroscience and Neurology, Baylor College of Medicine, Houston, TX; G.A. Keliris, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany; Y. Shao, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany; A. Papanikolaou, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany; N.K. Logothetis, Max Planck Institute for Biological Cybernetics, Tuebingen, Germany & Division of Imaging Science and Biomedical Engineering, University of Manchester, United Kingdom

Visual receptive fields have dynamic properties that may change with the conditions of visual stimulation or with the state of chronic visual deprivation. We used 4.7 Tesla functional magnetic resonance imaging (fMRI) to study the visual cortex of two normal adult macaque monkeys and one macaque with binocular central retinal lesions due to a form of juvenile macular degeneration (MD). fMRI experiments were performed under light remifentanil-induced anesthesia (Logothetis et al., Nat Neurosci, 1999). Standard moving horizontal/vertical bar stimuli were presented to the subjects, and the population receptive field (pRF) method (Dumoulin and Wandell, Neuroimage, 2008) was used to measure retinotopic maps and pRF sizes in early visual areas. fMRI measurements in normal monkeys agree with published electrophysiological results, with pRF sizes and electrophysiological measurements showing similar trends. For the MD monkey, the size and location of the lesion projection zone (LPZ) were consistent with the retinotopic projection of the retinal lesion in early visual areas. No significant BOLD activity was seen within the V1 LPZ, and the retinotopic organization of the non-deafferented V1 periphery was regular, without distortion. Interestingly, area V5/MT of the MD monkey showed more extensive activation than area V5/MT of control monkeys that had part of their visual field obscured (artificial scotoma) to match the scotoma of the MD monkey. V5/MT pRF sizes of the MD monkey were on average smaller than those of controls. pRF estimation methods allow us to measure and follow in vivo how the properties of visual areas change as a function of cortical reorganization. Finally, time permitting, we will discuss a different method of pRF estimation that yields additional information.

Functional plasticity in human parietal visual field map clusters: Adapting to reversed visual input

Alyssa A. Brewer, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA; B. Barton, Department of Cognitive Sciences, University of California, Irvine; L. Lin, AcuFocus, Inc., Irvine

Knowledge of the normal organization of visual field map clusters allows us to study potential reorganization within visual cortex under conditions that lead to a disruption of the normal visual inputs. Here we exploit the dynamic nature of visuomotor regions in posterior parietal cortex to examine cortical functional plasticity induced by a complete reversal of visual input in normal adult humans. We also investigate whether there is a difference in the timing or degree of a second adaptation to the left-right visual field reversal in adult humans after long-term recovery from the initial adaptation period. Subjects wore left-right reversing prism spectacles continuously for 14 days and then returned for a 4-day re-adaptation to the reversed visual field 1-9 months later. For each subject, we used population receptive field modeling fMRI methods to track the receptive field alterations within the occipital and parietal visual field map clusters across time points. The results from the first 14-day experimental period highlight a systematic and gradual shift of visual field coverage from contralateral space into ipsilateral space in parietal cortex throughout the prism adaptation period. After the second, 4-day experimental period, the data demonstrate a faster time course for both behavioral and cortical re-adaptation. These measurements in subjects with severely altered visual input allow us to identify the cortical regions subserving the dynamic remapping of cortical representations in response to altered visual perception and demonstrate that the changes in the maps produced by the initial long prism adaptation period persist over an extended time.


Distinguishing perceptual shifts from response biases

Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 1-3
Organizer: Joshua Solomon, City University London
Presenters: Sam Ling, Vanderbilt; Keith Schneider, York University; Steven Hillyard, UCSD; Donald MacLeod, UCSD; Michael Morgan, City University London, Max Planck Institute for Neurological Research, Cologne; Mark Georgeson, Aston University


Symposium Description

Sensory adaptation was originally considered a low-level phenomenon involving measurable changes in sensitivity, but the term has been extended to include many cases where a change in sensitivity has yet to be demonstrated. Examples include adaptation to blur, temporal duration, and face identity. It has also been claimed that adaptation can be affected by attention to the adapting stimulus, and even that adaptation can be caused by imagining the adapting stimulus. The typical method of measurement in such studies involves a shift in the mean (p50) point of a psychometric function, obtained by the Method of Single Stimuli. In Signal Detection Theory, the mean is determined by a decision rule, as opposed to the slope, which is set by internal noise. The question that arises is how we can distinguish shifts in the mean due to a genuine adaptation process from shifts due to a change in the observer's decision rule. This was a hot topic in the 1960s, for example in the discussion between Restle and Helson over Adaptation Level Theory, but it has since been neglected, with the result that any shift in the mean of a psychometric function is now accepted as evidence for a perceptual shift. We think that it is time to revive this issue, given the theoretical importance of claims about adaptation being affected by imagination and attention, and the links that are claimed with functional brain imaging.
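The problem can be made concrete with a small signal-detection simulation: under the Method of Single Stimuli, a genuine perceptual shift and a pure change of decision criterion move the p50 of the psychometric function identically, and neither changes its slope, so the mean alone cannot distinguish them. A minimal sketch, with illustrative parameter values only:

```python
import math

def p_larger(stim, perceptual_shift, criterion, noise_sd=1.0):
    """Probability of responding 'larger than the standard'. The internal
    response is Normal(stim + perceptual_shift, noise_sd); the observer
    says 'larger' whenever the response exceeds the decision criterion."""
    z = (stim + perceptual_shift - criterion) / noise_sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

stims = [-3, -2, -1, 0, 1, 2, 3]

genuine_shift = [p_larger(s, perceptual_shift=1.0, criterion=0.0) for s in stims]
criterion_shift = [p_larger(s, perceptual_shift=0.0, criterion=-1.0) for s in stims]

# The two psychometric functions are identical point for point: a shift in
# the decision rule is indistinguishable from a shift in perception
print(max(abs(a - b) for a, b in zip(genuine_shift, criterion_shift)))  # 0.0
```

Because only `stim + perceptual_shift - criterion` enters the model, the two manipulations are mathematically confounded in the mean, which is exactly the issue the symposium takes up.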

Presentations

Attention alters appearance

Sam Ling, Vanderbilt University

Maintaining veridicality seems to be of relatively low priority for the human brain; starting at the retina, our neural representations of the physical world undergo dramatic transformations, often forgoing an accurate depiction of the world in favor of augmented signals that are more optimal for the task at hand. Indeed, visual attention has been suggested to play a key role in this process, boosting the neural representations of attended stimuli, and attenuating responses to ignored stimuli. What, however, are the phenomenological consequences of attentional modulation?  I will discuss a series of studies that we and others have conducted, all converging on the notion that attention can actually change the visual appearance of attended stimuli across a variety of perceptual domains, such as contrast, spatial frequency, and color. These studies reveal that visual attention not only changes our neural representations, but that it can actually affect what we think we see.

Attention increases salience and biases decisions but does not alter appearance

Keith Schneider, York University

Attention enhances our perceptual abilities and increases neural activity.  Still debated is whether an attended object, given its higher salience and more robust representation, actually looks any different than an otherwise identical but unattended object.  One might expect that this question could be easily answered by an experiment in which an observer is presented two stimuli differing along one dimension, contrast for example, to one of which attention has been directed, and must report which stimulus has the higher apparent contrast.  The problem with this sort of comparative judgment is that in the most informative case, that in which the two stimuli are equal, the observer is also maximally uncertain and therefore most susceptible to extraneous influence.  An intelligent observer might report, all other things being equal, that the stimulus about which he or she has more information is the one with higher contrast.  (And it doesn’t help to ask which stimulus has the lower contrast, because then the observer might just report the less informed stimulus!)  In this way, attention can bias the decision mechanism and confound the experiment such that it is not possible for the experimenter to differentiate this bias from an actual change in appearance.  It has been over ten years since I proposed a solution to this dilemma – an equality judgment task in which observers report whether the two stimuli are equal in appearance or not.  This paradigm has been supported in the literature and has withstood criticisms.  Here I will review these findings.

Electrophysiological Studies of the Locus of Perceptual Bias

Steven Hillyard, UCSD

The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century.  Recent psychophysical studies have reported that attention increases the apparent contrast of visual stimuli, but there is still a controversy as to whether this effect is due to the biasing of decisions as opposed to the altering of perceptual representations and changes in subjective appearance.  We obtained converging neurophysiological evidence while observers judged the relative contrast of Gabor patch targets presented simultaneously to the left and right visual fields following a lateralized cue (auditory or visual).  This non-predictive cueing boosted the apparent contrast of the Gabor target on the cued side in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset.  The magnitude of the enhanced neural response in ventral extrastriate visual cortex was positively correlated with perceptual reports of the cued-side target being higher in contrast.  These results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

Adaptive sensitivity regulation in detection and appearance

Donald MacLeod, UCSD

The visual system adapts to changing levels of stimulation with alterations of sensitivity that are expressed both in changes in detectability and in changes of appearance. The connection between these two aspects of sensitivity regulation is often taken for granted but need not be simple. Even the proportionality between ‘thresholds’ obtained by self-setting and thresholds based on reliability of detection (e.g. forced-choice) is not generally expected, except under quite restricted conditions and unrealistically simple models of the visual system. I review some of the theoretical possibilities in relation to the available experimental evidence. Relatively simple mechanistic models provide opportunities for deviations from proportionality, especially if noise can enter the neural representation at multiple stages. The extension to suprathreshold appearance is still more precarious; yet remarkably, under some experimental conditions, proportionality with threshold sensitivities holds, in the sense that equal multiples of threshold match.

Observers can voluntarily shift their psychometric functions without losing sensitivity

Michael Morgan, City University London and Max Planck Institute for Neurological Research, Cologne; Barbara Dillenburger and Sabine Raphael, Max Planck Institute for Neurological Research, Cologne; Joshua A. Solomon, City University London

Psychometric sensory discrimination functions are usually modeled by cumulative Gaussian functions with just two parameters, their central tendency and their slope. These correspond to Fechner’s “constant” and “variable” errors, respectively. Fechner pointed out that even the constant error could vary over space and time and could masquerade as variable error. We wondered whether observers could deliberately introduce a constant error into their performance without loss of precision. In three-dot vernier and bisection tasks with the method of single stimuli, observers were instructed to favour one of the two responses when unsure of their answer. The slope of the resulting psychometric function was not significantly changed, despite a significant change in central tendency. Similar results were obtained when altered feedback was used to induce bias. We inferred that observers can adopt artificial response criteria without any significant increase in criterion fluctuation. These findings have implications for some studies that have measured perceptual “illusions” by shifts in the psychometric functions of sophisticated observers.

Sensory, perceptual and response biases: the criterion concept in perception

Mark Georgeson, Aston University

Signal detection theory (SDT) established in psychophysics a crucial distinction between sensitivity (or discriminability, d’) and bias (or criterion) in the analysis of performance in sensory judgement tasks. SDT itself is agnostic about the origins of the criterion, but there seems to be a broad consensus favouring “response bias” or “decision bias”. And yet, perceptual biases exist and are readily induced. The motion aftereffect is undoubtedly perceptual – compelling motion is seen on a stationary pattern – but its signature in psychophysical data is a shift in the psychometric function, indistinguishable from “response bias”.  How might we tell the difference? I shall discuss these issues in relation to some recent experiments and modelling of adaptation to blur (Elliott, Georgeson & Webster, 2011).  A solution might lie in dropping any hard distinction between perceptual shifts and decision biases. Perceptual mechanisms make low-level decisions. Sensory, perceptual and response criteria might be represented neurally in similar ways at different levels of the visual hierarchy, by biasing signals that are set by the task and by the history of stimuli and responses (Treisman & Williams, 1984). The degree of spatial localization over which the bias occurs might reflect its level in the visual hierarchy. Thus, given enough data, the dilemma (are aftereffects perceptual or due to response bias?) might be resolved in favour of such a multi-level model.
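The sensitivity/bias decomposition underlying this abstract is standard SDT for yes/no tasks. As an illustrative sketch (not from the talk itself; the hit and false-alarm rates below are made-up values), d′ and criterion c are computed from response rates as follows, and the same d′ is compatible with quite different hit rates when only the criterion moves:

```python
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Sensitivity d' and criterion c from yes/no hit and false-alarm rates."""
    zH, zF = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = zH - zF            # separation of signal and noise distributions
    c = -0.5 * (zH + zF)         # 0 = unbiased; > 0 = conservative ("no"-biased)
    return d_prime, c

# Two hypothetical observers with identical sensitivity but different criteria:
d_unbiased, c_unbiased = sdt(0.8413, 0.1587)   # d' ~ 2, c ~ 0
d_conserv, c_conserv = sdt(0.6915, 0.0668)     # d' ~ 2, c ~ 0.5
# The hit rate drops from 84% to 69%, yet d' is unchanged: in the raw response
# rates, a criterion shift looks just like a change in perceptual bias.
```

This is why, as the abstract argues, the psychometric signature of a genuinely perceptual aftereffect can be indistinguishable from “response bias” without additional constraints.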

< Back to 2012 Symposia

Part-whole relationships in visual cortex

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 6-8
Organizer: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven
Presenters: Johan Wagemans, Charles E. Connor, Scott O. Murray, James R. Pomerantz, Jacob Feldman, Shaul Hochstein

< Back to 2012 Symposia

Symposium Description

With his famous paper on phi motion, Wertheimer (1912) launched Gestalt psychology, arguing that the whole is different from the sum of the parts. In fact, wholes were considered primary in perceptual experience, even determining what the parts are. Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? Are wholes constructed from combinations of the parts? If so, to what extent are the combinations additive, what does superadditivity really mean, and how does it arise along the visual hierarchy? How much of the combination process occurs in incremental feedforward iterations or horizontal connections and at what stage does feedback from higher areas kick in? What happens to the representation of the lower-level parts when the higher-level wholes are perceived? Do they become enhanced or suppressed (“explained away”)? Or, are wholes occurring before the parts, as argued by Gestalt psychologists? But what does this global precedence really mean in terms of what happens where in the brain? Does the primacy of the whole only account for consciously perceived figures or objects, and are the more elementary parts still combined somehow during an unconscious step-wise processing stage? A century later, tools are available that were not at the Gestaltists’ disposal to address these questions. 
In this symposium, we will take stock and try to provide answers from a diversity of approaches, including single-cell recordings from V4, posterior and anterior IT cortex in awake monkeys (Ed Connor, Johns Hopkins University), human fMRI (Scott Murray, University of Washington), human psychophysics (James Pomerantz, Rice University), and computational modeling (Jacob Feldman, Rutgers University). Johan Wagemans (University of Leuven) will introduce the theme of the symposium with a brief historical overview of the Gestalt tradition and a clarification of the conceptual issues involved. Shaul Hochstein (Hebrew University) will end with a synthesis of the current literature in the framework of Reverse Hierarchy Theory. The scientific merit of addressing such a central issue, which has been around for over a century, from a diversity of modern perspectives and in light of the latest findings should be obvious. The celebration of the centennial anniversary of Gestalt psychology also provides an excellent opportunity to do so. We believe our line-up of speakers, addressing a set of closely related questions from a wide range of methodological and theoretical perspectives, promises to attract a large crowd, including students and faculty working in psychophysics, neuroscience and modeling. In comparison with other proposals taking this centennial anniversary as a window of opportunity, ours is probably more focused and allows for a more coherent treatment of a central Gestalt issue that has occupied vision science for a long time.

Presentations

Part-whole relationships in vision science: A brief historical review and conceptual analysis

Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Exactly 100 years ago, Wertheimer’s paper on phi motion (1912) effectively launched the Berlin school of Gestalt psychology. Arguing against elementalism and associationism, they maintained that experienced objects and relationships are fundamentally different from collections of sensations. Going beyond von Ehrenfels’s notion of Gestalt qualities, which involved one-sided dependence on sense data, true Gestalts are dynamic structures in experience that determine what will be wholes and parts. From the beginning, this two-sided dependence between parts and wholes was believed to have a neural basis. They spoke of continuous “whole-processes” in the brain, and argued that research needed to try to understand these from top (whole) to bottom (parts) rather than the other way around. However, Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? In this talk, I will briefly review the Gestalt position and analyse the different notions of part and whole, and different views on part-whole relationships maintained in a century of vision science since the start of Gestalt psychology. This will provide some necessary background for the remaining talks in this symposium, which will all present contemporary views based on new findings.

Ventral pathway visual cortex: Representation by parts in a whole object reference frame

Charles E. Connor, Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Anitha Pasupathy, Scott L. Brincat, Yukako Yamane, Chia-Chun Hung

Object perception by humans and other primates depends on the ventral pathway of visual cortex, which processes information about object structure, color, texture, and identity.  Object information processing can be studied at the algorithmic, neural coding level using electrode recording in macaque monkeys.  We have studied information processing in three successive stages of the monkey ventral pathway:  area V4, PIT (posterior inferotemporal cortex), and AIT (anterior inferotemporal cortex).  At all three stages, object structure is encoded in terms of parts, including boundary fragments (2D contours, 3D surfaces) and medial axis components (skeletal shape fragments).  Area V4 neurons integrate information about multiple orientations to produce signals for local contour fragments.  PIT neurons integrate multiple V4 inputs to produce representations of multi-fragment configurations.  Even neurons in AIT, the final stage of the monkey ventral pathway, represent configurations of parts (as opposed to holistic object structure).  However, at each processing stage, neural responses are critically dependent on the position of parts within the whole object.  Thus, a given neuron may respond strongly to a specific contour fragment positioned near the right side of an object but not at all when it is positioned near the left.  This kind of object-centered position tuning would serve an essential role by representing spatial arrangement within a distributed, parts-based coding scheme. Object-centered position sensitivity is not imposed by top-down feedback, since it is apparent in the earliest responses at lower stages, before activity begins at higher stages.  Thus, while the brain encodes objects in terms of their constituent parts, the relationship of those parts to the whole object is critical at each stage of ventral pathway processing.

Long-range, pattern-dependent contextual effects in early human visual cortex

Scott O. Murray, Department of Psychology, University of Washington, Sung Jun Joo, Geoffrey M. Boynton

The standard view of neurons in early visual cortex is that they behave like localized feature detectors. We will discuss recent results that demonstrate that neurons in early visual areas go beyond localized feature detection and are sensitive to part-whole relationships in images. We measured neural responses to a grating stimulus (“target”) embedded in various visual patterns as defined by the relative orientation of flanking stimuli. We varied whether or not the target was part of a predictable sequence by changing the orientation of distant gratings while maintaining the same local stimulus arrangement. For example, a vertically oriented target grating that is flanked locally with horizontal flankers (HVH) can be made to be part of a predictable sequence by adding vertical distant flankers (VHVHV). We found that even when the local configuration (e.g. HVH) around the target was kept the same there was a smaller neural response when the target was part of a predictable sequence (VHVHV). Furthermore, when making an orientation judgment of a “noise” stimulus that contains no specific orientation information, observers were biased to “see” the orientation that deviates from the predictable orientation, consistent with computational models of primate cortical processing that incorporate efficient coding principles. Our results suggest that early visual cortex is sensitive to global patterns in images in a way that is markedly different from the predictions of standard models of cortical visual processing and indicate an important role in coding part-whole relationships in images.

The computational and cortical bases for configural superiority

James R. Pomerantz, Department of Psychology, Rice University, Anna I. Cragin, Department of Psychology, Rice University; Kimberley D. Orsten, Department of Psychology, Rice University; Mary C. Portillo, Department of Social Sciences, University of Houston-Downtown

In the configural superiority effect (CSE; Pomerantz et al., 1977; Pomerantz & Portillo, 2011), people respond more quickly to a whole configuration than to any one of its component parts, even when the parts added to create a whole contribute no information by themselves.  For example, people discriminate an arrow from a triangle more quickly than a positive from a negative diagonal even when those diagonals constitute the only difference between the arrows and triangles.  How can a neural or other computational system be faster at processing information about combinations of parts – wholes – than about parts taken singly?   We consider the results of Kubilius et al. (2011) and discuss three possibilities: (1) Direct detection of wholes through smart mechanisms that compute higher order information without performing seemingly necessary intermediate computations; (2) the “sealed channel hypothesis” (Pomerantz, 1978), which holds that part information is extracted prior to whole information in a feedforward manner but is not available for responses; and (3) a closely related reverse hierarchy model holding that conscious experience begins with higher cortical levels processing wholes, with parts becoming accessible to consciousness only after feedback to lower levels is complete (Hochstein & Ahissar, 2002).  We describe a number of CSEs and elaborate both on these mechanisms that might explain them and how they might be confirmed experimentally.

Computational integration of local and global form

Jacob Feldman, Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick, Manish Singh, Vicky Froyen

A central theme of perceptual theory, from the Gestaltists to the present, has been the integration of local and global image information. While neuroscience has traditionally viewed perceptual processes as beginning with local operators with small receptive fields before proceeding on to more global operators with larger ones, a substantial body of evidence now suggests that supposedly later processes can impose decisive influences on supposedly earlier ones, suggesting a more complicated flow of information. We consider this problem from a computational point of view. Some local processes in perceptual organization, like the organization of visual items into a local contour, can be well understood in terms of simple probabilistic inference models. But for a variety of reasons nonlocal factors such as global “form” resist such simple models. In this talk I’ll discuss constraints on how form- and region-generating probabilistic models can be formulated and integrated with local ones. From a computational point of view, the central challenge is how to embed the corresponding estimation procedure in a locally-connected network-like architecture that can be understood as a model of neural computation.

The rise and fall of the Gestalt gist

Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University, Merav Ahissar

Reviewing the current literature, one finds physiological bases for Gestalt-like perception, but also much that seems to contradict the predictions of this theory. Some resolution may be found in the framework of Reverse Hierarchy Theory, which divides between implicit processes, of which we are unaware, and explicit representations, which enter perceptual consciousness. It is the conscious percepts that appear to match Gestalt predictions – recognizing wholes even before the parts. We now need to study the processing mechanisms at each level, and, importantly, the feedback interactions which equally affect and determine the plethora of representations that are formed, and to analyze how they determine conscious perception. Reverse Hierarchy Theory proposes that initial perception of the gist of a scene – including whole objects, categories and concepts – depends on rapid bottom-up implicit processes, which seem to follow (determine) Gestalt rules. Since lower-level representations are initially unavailable to consciousness – and may become available only with top-down guidance – perception seems to jump immediately to Gestalt conclusions. Nevertheless, vision in the blink of an eye is the result of many layers of processing, though introspection is blind to these steps, failing to see the trees within the forest. Later, slower perception, focusing on specific details, reveals the source of Gestalt processes – and destroys them at the same time. Details of recent results, including micro-genesis analyses, will be reviewed within the framework of Gestalt and Reverse Hierarchy theories.

< Back to 2012 Symposia

What does fMRI tell us about brain homologies?

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 4-5
Organizer: Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology
Presenters: Martin Sereno, David Van Essen, Hauke Kolster, Jonathan Winawer, Reza Rajimehr

< Back to 2012 Symposia

Symposium Description

Over the past 20 years, functional magnetic resonance imaging (fMRI) has provided a great deal of knowledge about the functional organization of human visual cortex. In recent years, the development of fMRI techniques in non-human primates has enabled neuroscientists to directly compare the topographic organization and functional properties of visual cortical areas across species. These comparative studies have shown striking similarities (‘homologies’) between human and monkey visual cortex. Many visual cortical areas in humans can be matched to homologous areas in monkeys – though detailed cross-species comparisons have also revealed specific variations in the visual feature selectivity of cortical areas and in the spatial arrangement of visual areas on the cortical sheet. Comparing cortical structures in humans versus monkeys provides a framework for generalizing results from invasive neurobiological studies in monkeys to humans. It also provides important clues for understanding the evolution of the cerebral cortex in primates. In this symposium, we would like to highlight recent fMRI studies on the organization of visual cortex in human versus monkey. We will have 5 speakers. Each speaker will give a 25-minute talk (including 5 minutes of discussion time). Martin Sereno will introduce the concept of brain homology, elaborate on its importance, and evaluate technical limitations in addressing homology questions. He will then continue with some examples of cross-species comparisons for retinotopic cortical areas. David Van Essen will describe recent progress in applying surface-based analysis and visualization methods that provide a powerful approach for comparisons among primate species, including macaque, chimpanzee, and human. Hauke Kolster will test the homology between visual areas in the occipital cortex of human and macaque in terms of topological organization, functional characteristics, and population receptive field sizes.
Jonathan Winawer will review different organizational schemes for visual area V4 in human, relative to those in macaque. Reza Rajimehr will compare object-selective cortex (including face and scene areas) in human versus macaque. The symposium will be of interest to visual neuroscientists (faculty and students) and a general audience, who will benefit from a series of integrated talks on a fundamental yet relatively neglected topic: brain homology.

Presentations

Evolution, taxonomy, homology, and primate visual areas

Martin Sereno, Department of Cognitive Science, UC San Diego

Evolution involves the repeated branching of lineages, some of which become extinct. The problem of determining the relationship between cortical areas within the brains of surviving branches (e.g., humans, macaques, owl monkeys) is difficult because of: (1) missing evolutionary intermediates, (2) different measurement techniques, (3) body size differences, and (4) duplication, fusion, and reorganization of brain areas. Routine invasive experiments are carried out in very few species (one loris, several New and Old World monkeys). The closest to humans are macaque monkeys. However, the last common ancestor of humans and macaques dates to more than 30 million years ago. Since then, New and Old World monkey brains have evolved independently from ape and human brains, resulting in complex mixes of shared and unique features. Evolutionary biologists are often interested in “shared derived” characters — specializations from a basal condition that are peculiar to a species or grouping of species. These are important for classification (e.g., a brain feature unique to macaque-like monkeys). Evolutionary biologists also distinguish similarities due to inheritance (homology — e.g., MT) from similarities due to parallel or convergent evolution (homoplasy — e.g., layer 4A staining in humans and owl monkeys). By contrast with taxonomists, neuroscientists are usually interested in trying to determine which features are conserved across species (whether by inheritance or parallel evolution), indicating that those features may have a basic functional and/or developmental role. The only way to obtain either of these kinds of information is to examine data from multiple species.

Surface-based analyses of human, macaque, and chimpanzee cortical organization

David Van Essen, Department of Anatomy and Neurobiology, Washington University School of Medicine

Human and macaque cortex differ markedly in surface area (nine-fold), in their pattern of convolutions, and in the relationship of cortical areas to these convolutions.  Nonetheless, there are numerous similarities and putative homologies in cortical organization revealed by architectonic and other anatomical methods and more recently by noninvasive functional imaging methods.  There are also differences in functional organization, particularly in regions of rapid evolutionary expansion in the human lineage.  This presentation will highlight recent progress in applying surface-based analysis and visualization methods that provide a powerful general approach for comparisons among primate species, including the macaque, chimpanzee, and human. One major facet involves surface-based atlases that are substrates for increasingly accurate cortical parcellations in each species as well as maps of functional organization revealed using resting-state and task-evoked fMRI. Additional insights into cortical parcellations as well as evolutionary relationships are provided by myelin maps that have been obtained noninvasively in each species.  Together, these multiple modalities provide new insights regarding visual cortical organization in each species.  Surface-based registration provides a key method for making objective interspecies comparisons, using explicit landmarks that represent known or candidate homologies between areas.  Recent algorithmic improvements in landmark-based registration, coupled with refinements in the available set of candidate homologies, provide a fresh perspective on primate cortical evolution and species differences in the pattern of evolutionary expansion.

Comparative mapping of visual areas in the human and macaque occipital cortex

Hauke Kolster, Laboratorium voor Neurofysiologie en Psychofysiologie, Katholieke Universiteit Leuven Medical School

The introduction of functional magnetic resonance imaging (fMRI) as a non-invasive imaging modality has enabled the study of human cortical processes with high spatial specificity and allowed for a direct comparison of the human and the macaque within the same modality. This presentation will focus on the phase-encoded retinotopic mapping technique, which is used to establish parcellations of cortex consisting of distinct visual areas. These parcellations may then be used to test for similarities between the cortical organizations of the two species. Results from ongoing work will be presented with regard to retinotopic organization of the areas as well as their characterizations by functional localizers and population receptive field (pRF) sizes. Recent developments in fMRI methodology, such as improved resolution and stimulus design as well as analytical pRF methods have resulted in higher quality of the retinotopic field maps and revealed visual field-map clusters as new organizational principles in the human and macaque occipital cortex. In addition, measurements of population-average neuronal properties have the potential to establish a direct link between fMRI studies in the human and single cell studies in the monkey. An inter-subject registration algorithm will be presented, which uses a spatial correlation of the retinotopic and the functional test data to directly compare the functional characteristics of a set of putative homologue areas across subjects and species. The results indicate strong similarities between twelve visual areas in occipital cortex of human and macaque in terms of topological organization, functional characteristics and pRF sizes.
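The pRF analysis mentioned above can be sketched with a minimal forward model in the spirit of the standard linear Gaussian pRF approach (Dumoulin & Wandell, 2008). Everything below – the visual-field grid, the bar stimulus, and the parameter values – is an illustrative assumption, not taken from the study:

```python
import numpy as np

# Visual-field grid in degrees of visual angle (illustrative)
xs = np.linspace(-10, 10, 101)
X, Y = np.meshgrid(xs, xs)

def prf_response(x0, y0, sigma, aperture):
    """Predicted response of a Gaussian pRF (center x0, y0; size sigma)
    to a binary stimulus aperture: the pRF's overlap with the stimulus."""
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return (g * aperture).sum() / g.sum()

# A vertical bar, as swept across the field in pRF/phase-encoded mapping
bar = (np.abs(X - 3.0) < 1.0).astype(float)
resp_on = prf_response(3.0, 0.0, 2.0, bar)    # bar over the pRF center
resp_off = prf_response(-6.0, 0.0, 2.0, bar)  # bar far from the pRF
# resp_on is large and resp_off is near zero; fitting (x0, y0, sigma) per
# voxel against such predictions yields the retinotopic maps and the
# pRF-size estimates that are compared across human and macaque.
```

In practice the overlap time course is also convolved with a hemodynamic response function before fitting; that step is omitted here for brevity.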

The fourth visual area: A question of human and macaque homology

Jonathan Winawer, Psychology Department, Stanford University

The fourth visual area, V4, was identified in the rhesus macaque and described in a series of anatomical and functional studies (Zeki 1971, 1978). Because of its critical role in seeing color and form, V4 has remained an area of intense study. The identification of a color-sensitive region on the ventral surface of human visual cortex, anterior to V3, suggested a possible homology between this area, labeled ‘Human V4’ or ‘hV4’ (McKeefry, 1997; Wade, 2002), and macaque V4 (mV4). Both areas are retinotopically organized. Homology is not uniformly accepted because of substantial differences in spatial organization, though these differences have been questioned (Hansen, 2007). Macaque V4 is a split hemifield map, with parts adjacent to the ventral and dorsal portions of the V3 map. In contrast, some groups have reported that hV4 falls wholly on ventral occipital cortex. Over the last 20 years, several organizational schemes have been proposed for hV4 and surrounding maps. In this presentation I review evidence for the different schemes, with emphasis on recent findings showing that an artifact of functional MRI caused by the transverse sinus afflicts measurements of the hV4 map in many (but not all) hemispheres. By focusing on subjects in whom the hV4 map is relatively remote from the sinus artifact, we show that hV4 is best described as a single, unbroken map on the ventral surface representing the full contralateral visual hemifield. These results support claims of substantial deviations from homology between human and macaque in the organization of the 4th visual map.

Spatial organization of face and scene areas in human and macaque visual cortex

Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology

The primate visual cortex has a specialized architecture for processing specific object categories such as faces and scenes. For instance, inferior temporal cortex in macaque contains a network of discrete patches for processing face images. Direct comparison between human and macaque category-selective areas shows that some areas in one species have missing homologues in the other species. Using fMRI, we identified a face-selective region in anterior temporal cortex in human and a scene-selective region in posterior temporal cortex in macaque, which correspond to homologous areas in the other species. A surface-based analysis of cortical maps showed a high degree of similarity in the spatial arrangement of face and scene areas between human and macaque. This suggests that neighborhood relations between functionally-defined cortical areas are evolutionarily conserved – though the topographic relation between the areas and their underlying anatomy (gyral/sulcal pattern) may vary from one species to another.

< Back to 2012 Symposia

Pulvinar and Vision: New insights into circuitry and function

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 1-3
Organizer: Vivien A. Casagrande, PhD, Department of Cell & Developmental Biology, Vanderbilt Medical School Nashville, TN
Presenters: Gopathy Purushothaman, Christian Casanova, Heywood M. Petry, Robert H. Wurtz, Sabine Kastner, David Whitney

< Back to 2012 Symposia

Symposium Description

The thalamus is considered the gateway to the cortex. Yet even the late Ted Jones, who wrote two huge volumes on the organization of the thalamus, remarked that we know amazingly little about many of its components and their role in cortical function. This is despite the fact that a major two-way highway connects all areas of cortex with the thalamus. The pulvinar is the largest thalamic nucleus in mammals; it progressively enlarged during primate evolution, dwarfing the rest of the thalamus in humans. The pulvinar also remains the most mysterious of thalamic nuclei in terms of its function. This symposium brings together six speakers from quite different perspectives who, using tools from anatomy, neurochemistry, physiology, neuroimaging and behavior, will highlight intriguing recent insights into the structure and function of the pulvinar. The speakers will jointly touch on: 1) the complexity of architecture, connections and neurochemistry of the pulvinar, 2) potential species similarities and differences in the pulvinar’s role in transmitting visual information from subcortical visual areas to cortical areas, 3) the role of the pulvinar in eye movements and in saccadic suppression, 4) the role of the pulvinar in regulating cortico-cortical communication between visual cortical areas and, finally, 5) converging ideas on the mechanisms that might explain the role of the pulvinar under the larger functional umbrella of visual salience and attention. Specifically, the speakers will address the following issues. Purushothaman and Casanova will outline contrasting roles for the pulvinar in influencing visual signals in early visual cortex in primates and non-primates, respectively. Petry and Wurtz will describe the organization and the potential role of retino-tectal inputs to the pulvinar, and that of pulvinar projections to the middle temporal (MT/V5) visual area in primates and its equivalent in non-primates.
Wurtz will also consider the role of the pulvinar in saccadic suppression. Kastner will describe the role of the pulvinar in regulating information transfer between cortical areas in primates trained to perform an attention task. Whitney will examine the role of the pulvinar in human visual attention and perceptual discrimination. This symposium should attract a wide audience of Vision Sciences Society (VSS) participants, as the function of the thalamus is key to understanding cortical organization. Studies of the pulvinar and its role in vision have seen a renaissance given the new technologies available to reveal its function. The goal of this session is to provide the VSS audience with a new appreciation of the role of the thalamus in vision.

Presentations

Gating of the Primary Visual Cortex by Pulvinar for Controlling Bottom-Up Salience

Gopathy Purushothaman, PhD, Department of Cell & Developmental Biology Vanderbilt, Roan Marion, Keji Li and Vivien A. Casagrande Vanderbilt University

The thalamic nucleus pulvinar has been implicated in the control of visual attention. Its reciprocal connections with both frontal and sensory cortices can coordinate top-down and bottom-up processes for selective visual attention. However, pulvino-cortical neural interactions are little understood. We recently found that the lateral pulvinar (PL) powerfully controls stimulus-driven responses in the primary visual cortex (V1). Reversibly inactivating PL abolished visual responses in supra-granular layers of V1. Exciting PL neurons responsive to one region of visual space increased V1 responses to that region 4-fold and decreased V1 responses to the surrounding region 3-fold. Glutamate agonist injection in the LGN increased V1 activity 8-fold and induced an excitotoxic lesion of the LGN; subsequently injecting the glutamate agonist into PL increased V1 activity 14-fold. Spontaneous activity in PL and V1 following visual stimulation was strongly coupled and selectively entrained at the stimulation frequency. These results suggest that PL-V1 interactions are well suited to control bottom-up salience within a competitive cortico-pulvino-cortical network for selective attention.

Is The Pulvinar Driving or Modulating Responses in the Visual Cortex?

Christian Casanova, PhD, Univ. Montreal, CP 6128 Succ Centre-Ville, Sch Optometry, Montreal, Canada; Matthieu Vanni, Reza F. Abbas & Sébastien Thomas, Visual Neuroscience Laboratory, School of Optometry, Université de Montréal, Montreal, Canada

Signals from lower cortical areas are not only transferred directly to higher-order cortical areas via cortico-cortical connections but also indirectly through cortico-thalamo-cortical projections. One step toward understanding the role of transthalamic corticocortical pathways is to determine the nature of the signals transmitted between the cortex and the thalamus. Are they strictly modulatory, i.e. do they modify activity in relation to the stimulus context and the analysis being done in the projecting area, or are they used to establish the basic functional characteristics of cortical cells? While the presence of drivers and modulators has been clearly demonstrated along the retino-geniculo-cortical pathway, it is not known whether such a distinction can be made functionally in pathways involving the pulvinar. Since drivers and modulators can exhibit different temporal patterns of response, we measured the spatiotemporal dynamics of voltage-sensitive dye activation in the visual cortex following electrical stimulation of the pulvinar in cats and tree shrews. Stimulation of the pulvinar induced fast and local responses in extrastriate cortex. In contrast, the propagated waves in the primary visual cortex (V1) were weak in amplitude and diffuse. Co-stimulating the pulvinar and LGN produced responses in V1 that were weaker than the sum of the responses evoked by independent stimulation of both nuclei. These findings support the presence of drivers and modulators along pulvinar pathways and suggest that the pulvinar exerts a modulatory influence on cortical processing of LGN inputs in V1 while mainly providing driver inputs to extrastriate areas, reflecting the different connectivity patterns.

What is the role of the pulvinar nucleus in visual motion processing?

Heywood M. Petry, Department of Psychological & Brain Sciences, University of Louisville, Martha E. Bickford, Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine

To effectively interact with our environment, body movements must be coordinated with the perception of visual movement. We will present evidence that regions of the pulvinar nucleus that receive input from the superior colliculus (tectum) may be involved in this process. We have chosen the tree shrew (Tupaia belangeri, a prototype of early primates) as our animal model because tectopulvinar pathways are particularly enhanced in this species, and our psychophysical experiments have revealed that tree shrews are capable of accurately discriminating small differences in the speed and direction of moving visual displays. Using in vivo electrophysiological recording techniques to test receptive field properties, we found that pulvinar neurons are responsive to moving visual stimuli, and most are direction selective. Using anatomical techniques, we found that tectorecipient pulvinar neurons project to the striatum, amygdala, and temporal cortical areas homologous to the primate middle temporal area, MT/V5. Using in vitro recording techniques, immunohistochemistry and stereology, we found that tectorecipient pulvinar neurons express more calcium channels than neurons in other thalamic nuclei and thus display a higher propensity to fire in bursts of action potentials, potentially providing a mechanism to effectively coordinate the activity of cortical and subcortical pulvinar targets. Collectively, these results suggest that the pulvinar nucleus may relay visual movement signals from the superior colliculus to subcortical brain regions to guide body movements, and simultaneously to the temporal cortex to modify visual perception as we move through our environment.

One message the pulvinar sends to cortex

Robert H. Wurtz, NIH-NEI, Lab of Sensorimotor Research, Rebecca Berman, NIH-NEI, Lab of Sensorimotor Research

The pulvinar has long been recognized as a way station on a second visual pathway to the cerebral cortex. This identification has largely been based on the pulvinar’s connections, which are appropriate for providing visual information to multiple regions of visual cortex from subcortical areas. What is little known is what information the pulvinar actually conveys, especially in the intact, functioning visual system. Using the techniques of combined anti- and orthodromic stimulation, we have identified one pathway through the pulvinar that extends from the superficial visual layers of the superior colliculus (SC) through the inferior pulvinar (principally PIm) to cortical area MT. We have now explored what this pathway might convey to cortex, concentrating first on a modulation of visual processing first seen in the SC: the suppression of visual responses during saccades. We have been able to replicate previous observations of this suppression in SC and in MT, and now show that PIm neurons are similarly suppressed. We then inactivated SC and showed that the suppression in MT is reduced. While we do not know all of the signals conveyed through this pathway to cortex, we do have evidence for one: the suppression of vision during saccades. This signal is neither a visual nor a motor signal but conveys the action of an internal motor signal on visual processing. Furthermore, combining our results in the behaving monkey with recent experiments in mouse brain slices (Phongphanphanee et al. 2011) provides a complete circuit from brainstem to cortex for conveying this suppression.

Role of the pulvinar in regulating information transmission between cortical areas

Sabine Kastner, MD, Department of Psychology, Center for Study of Brain, Mind and Behavior, Green Hall, Princeton, Yuri B. Saalman, Princeton Neuroscience Institute, Princeton University

Recent studies suggest that the degree of neural synchrony between cortical areas can modulate their information transfer according to attentional needs. However, it is not clear how two cortical areas synchronize their activities. Directly connected cortical areas are generally also indirectly connected via a thalamic nucleus, the pulvinar. We hypothesized that the pulvinar helps synchronize activity between cortical areas, and tested this by simultaneously recording from the pulvinar, V4, TEO and LIP of macaque monkeys performing a spatial attention task. Electrodes targeted interconnected sites in these areas, as determined by probabilistic tractography on diffusion tensor imaging data. Spatial attention increased synchrony between the cortical areas in the beta frequency range, in line with an increased causal influence of the pulvinar on the cortex at the same frequencies. These results suggest that the pulvinar coordinates activity between cortical areas to increase the efficacy of cortico-cortical transmission.

Visual Attention Gates Spatial Coding in the Human Pulvinar

David Whitney, The University of California, Berkeley, Jason Fischer, The University of California, Berkeley

Based on the pulvinar’s widespread connectivity with the visual cortex, as well as with putative attentional source regions in the frontal and parietal lobes, the pulvinar is suspected to play an important role in visual attention. However, there remain many competing hypotheses about the pulvinar’s specific function. One hypothesis is that the pulvinar plays a role in filtering distracting stimuli when they are actively ignored. Because it remains unclear whether this is the case, how it might happen, or what the fate of the ignored objects is, we sought to characterize the spatial representation of visual information in the human pulvinar for equally salient attended and ignored objects presented simultaneously. In an fMRI experiment, we measured the spatial precision with which attended and ignored stimuli were encoded in the pulvinar, and we found that attention completely gated position information: attended objects were encoded with high spatial precision, but there was no measurable spatial encoding of actively ignored objects. This is despite the fact that the attended and ignored objects were identical and present simultaneously, and both were represented with great precision throughout the visual cortex. These data support a role for the pulvinar in distractor filtering and reveal a possible mechanism: by modulating the spatial precision of stimulus encoding, signals from competing stimuli can be suppressed in order to isolate behaviorally relevant objects.

< Back to 2012 Symposia

How learning changes the brain

Time/Room: Friday, May 15, 2015, 5:00 – 7:00 pm, Pavilion
Organizer(s): Chris Baker and Hans Op de Beeck; NIMH, USA; University of Leuven, Belgium
Presenters: Krishna Srihasam, Rufin Vogels, David J. Freedman, Andrew E Welchman, Aaron Seitz

< Back to 2015 Symposia

Symposium Description

The capacity for learning is a critical feature of vision. It is well established that learning is associated with changes in visual representations and the underlying neural substrate (e.g. sharper behavioral discrimination and sharper neural tuning for trained visual features such as orientation or shape). However, the brain regions involved vary from experiment to experiment, ranging from primary visual cortex to all higher levels of the visual system. One working hypothesis suggests that the hierarchical level at which neural plasticity is most prominent is related to the complexity of the stimuli and the task context, but results do not necessarily support this prediction. Further, the nature of the changes is often inconsistent between studies. In this symposium we emphasize the viewpoint that in order to understand how learning changes the brain, it is critical to consider the underlying complexity and distributed nature of the visual system. The group of speakers we have assembled will present work using a variety of approaches, from behavior to TMS to fMRI, in both monkeys and humans. The consistent theme across talks will be that a fuller understanding of neural plasticity might be achieved by considering how learning impacts processing from neurons to circuits to regions in the context of the distributed neural architecture of vision. Individually, the speakers will highlight specific properties of the visual system that have an important role in visual learning but are often not considered in theories of learning. First, brain regions differ in their average response and selectivity even before learning, and might each have a different role in learning, making a search for THE visual learning area unrealistic. Further, simple classification schemes in which low-level areas subserve low-level learning and high-level areas high-level learning might vastly underestimate how the effects of learning are distributed across hierarchical levels.
In addition, the regions critical for a task might change as a function of learning. At a finer grain, different cell types have different roles in visual processing and may be changed in different ways through learning. Finally, computational and behavioral approaches also emphasize that learning involves multiple learning processes, and understanding their interaction is crucial. Through these examples we will showcase the complexity of the processes involved in visual learning at the behavioral, neural, and computational levels. This symposium should be of broad interest to the VSS community, from students to faculty, providing a multidisciplinary overview of current approaches to visual learning. Visual learning is often studied in specific, limited domains, and the goal of this symposium is to integrate findings across different levels and different scales of visual processing, taking into account the complexity of the neural system.

Presentations

Novel module formation reveals underlying shape bias in primate infero-temporal cortex

Speaker: Krishna Srihasam; Department of Neurobiology, Harvard Medical School, Boston, MA
Authors: Margaret S. Livingstone; Department of Neurobiology, Harvard Medical School, Boston, MA

Primate inferotemporal cortex is divided up into domains specialized for processing specific object categories, such as faces, text, places, and body parts. These domains are in stereotyped locations in most humans and monkeys. What are the contributions of visual experience and innate programs in generating this organization? The reproducible location of different category-selective domains in humans and macaques suggests that some aspects of IT category organization must be innate. However, the existence of a visual word form area, the effects of expertise and our recent finding that novel specializations appear in IT as a consequence of intensive early training indicate that experience must also be important in the formation or refinement of category-selective domains in IT. To ask what determines the locations of such domains, we intensively trained juvenile monkeys to recognize three distinct sets of shapes: alphanumeric symbols, rectilinear shapes and cartoon faces. After training, the monkeys developed regions that were selectively responsive to each trained set. The location of each specialization was similar across monkeys, despite differences in training order. The fact that these domains consistently mapped to characteristic locations suggests that a pre-existing shape organization determines where experience will exert its effects.

Learning to discriminate simple stimuli modifies the response properties of early and late visual cortical areas

Speaker: Rufin Vogels; Laboratorium voor Neuro- en Psychofysiologie, Dpt. Neurowetenschappen, KU Leuven Campus Gasthuisberg, Belgium
Authors: Hamed Zivari Adab; Laboratorium voor Neuro- en Psychofysiologie, Dpt. Neurowetenschappen, KU Leuven Campus Gasthuisberg, Belgium

Practicing simple visual detection and discrimination tasks improves performance, a signature of adult brain plasticity. Current models of learning with simple stimuli such as gratings postulate either changes in early visual cortex or reweighting of stable early sensory responses at the decision stage. We showed that practice in orientation discrimination of noisy gratings (coarse orientation discrimination) increased the ability of single neurons of macaque visual area V4 to discriminate the trained stimuli. We then asked whether practice in the same task also changes the response properties of later visual cortical areas. To identify candidate areas, we used fMRI to map activations to noisy gratings in the trained monkeys, revealing a region in posterior inferior temporal (PIT) cortex. Subsequent single-unit recordings showed that PIT neurons discriminated the trained orientations better than the untrained ones, even when the animals were performing an orthogonal task. Unlike in previous single-unit studies of learning in early visual cortex, more PIT neurons preferred trained than untrained orientations. Thus, practicing a simple discrimination of grating stimuli not only can affect early visual cortex but also changes the response properties of late visual cortical areas. Perturbing activity in PIT reduced coarse orientation discrimination performance in the trained animals, suggesting that this region is indeed part of the network underlying performance of the task. We suggest that visual learning modifies the responses of most if not all areas that are part of the cortical network supporting task execution.

Learning-dependent plasticity of visual encoding in inferior temporal cortex

Speaker: David J. Freedman; Department of Neurobiology, The University of Chicago
Authors: Jillian L. McKee; Department of Neurobiology, The University of Chicago

Our ability to recognize complex visual stimuli depends critically on our past experience. For example, we easily and seemingly automatically recognize visual stimuli such as familiar faces, our bicycle, or the characters on a written page. Visual form recognition depends on neuronal processing along a hierarchy of visual cortical areas that culminates in inferior temporal cortex (ITC), which contains neurons that show exquisite selectivity for complex visual stimuli. Although both passive experience and explicit training can modify or enhance visual selectivity in ITC, the mechanisms underlying this plasticity are not understood. This talk will describe studies aimed at understanding the impact of experience on visual selectivity in ITC. Monkeys were trained to perform a categorization task in which they classified images as novel or familiar. Familiar images had been repeatedly viewed over months of prior training sessions, while novel images had not been viewed prior to that session. Neurophysiological recordings from ITC and prefrontal cortex (PFC) revealed a marked impact of familiarity on neuronal responses in both areas. ITC showed greater stimulus selectivity than PFC, while PFC showed a more abstract encoding of the novel and familiar categories. We also examined familiarity-related changes in ITC encoding within individual sessions, while monkeys viewed initially novel stimuli ~50 times each. This revealed enhanced stimulus selectivity with increasing repetitions, and distinct patterns of effects among putative inhibitory and excitatory neurons. These findings may provide a mechanism for familiarity-related changes in ITC activity, and could help explain how ITC stimulus selectivity is shaped by learning.

Training transfer: from functional mechanisms to cortical circuits

Speaker: Andrew E Welchman; University of Cambridge, UK
Authors: Dorita F Chang; University of Cambridge, UK

While perception improves with practice, the brain faces a Goldilocks challenge in balancing the specificity vs. generality of learning. Learning specificity is classically established (e.g. Karni & Sagi, 1991, PNAS 88, 4966-4970); however, recent work also reveals generalisation that promotes the transfer of training effects (e.g., Xiao et al, 2008, Curr Biol, 18, 1922-26). Here I will discuss how we can understand the neural mechanisms that support these opposing drives for optimising visual processing. I will discuss work that uses perceptual judgments in visual displays where performance is limited by noise added to the stimuli (signal-in-noise tasks) or clearer displays that push observers to make fine differentiations between elements (feature difference tasks). I will review work that suggests different foci of fMRI activity during performance of these types of task (Zhang et al, 2010, J Neurosci, 14127-33), and then describe how we have used psychophysical tests of learning transfer to understand the mechanisms that support learning (Chang et al, 2013, J Neurosci, 10962-71). Finally, I will discuss recent TMS work that implicates a wide high-level network in the generalisation of training between tasks.

Moving beyond a binary view of specificity in perceptual learning

Speaker: Aaron Seitz; Department of Psychology University of California, Riverside

A hallmark of modern perceptual learning research is the degree to which learning effects are specific to the trained stimuli. Such specificity to orientation, spatial location and even the eye of training (Karni and Sagi, 1991) has been used as psychophysical evidence of the neural basis of learning. However, recent research shows that learning effects once thought to be specific depend on subtleties of the training procedure (Hung and Seitz, 2014), and that even within a simple training task there are multiple aspects of the task and stimuli that are learned simultaneously (LeDantec, Melton and Seitz, 2012). Here, I present recent results from my lab and others detailing some of the complexities of specificity and transfer, and suggest that learning on any task involves a broad network of brain regions undergoing changes in representations, readout weights, decision rules, feedback processes, etc. Importantly, the distribution of learning across the neural system depends upon the fine details of the training procedure. I conclude with the suggestion that to advance our understanding of perceptual learning, the field must move toward understanding individual, and procedurally induced, differences in learning and how multiple neural mechanisms may together underlie behavioral learning effects.

< Back to 2015 Symposia

Linking behavior to different measures of cortical activity

Time/Room: Friday, May 15, 2015, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): Justin Gardner1, John Serences2, Franco Pestilli3; 1Stanford University, 2UC San Diego, 3Indiana University
Presenters: Justin Gardner, John Serences, Eyal Seidemann, Aniruddha Das, Farran Briggs, Geoffrey Boynton

< Back to 2015 Symposia

Symposium Description

A plethora of tools are available for visual neuroscientists to study brain activity across different spatiotemporal scales and the BRAIN initiative offers the promise of more. Pipettes and electrodes measure microscopic activity at channels, synapses and single-units. Multi-electrode arrays, calcium imaging, voltage-sensitive dyes and intrinsic imaging measure mesoscale population activity. Human cortical areas can be mapped using fMRI, ECoG and EEG. In principle, the multiplicity of technologies offers unprecedented possibilities to gain information at complementary spatiotemporal scales. Leveraging knowledge across measurement modalities and species is essential for understanding the human brain where the vast majority of what we know comes from non-invasive measurements of brain activity and behavior. Despite the potential for convergence, different methodologies also produce results that appear superficially inconsistent, leading to categorically distinct models of cortical computation subserving vision and cognition. Visual spatial attention provides an excellent case study. A great deal of behavioral work in humans has established that reaction times and discrimination thresholds can be improved with prior spatial information. Measurements of brain activity using very similar protocols have been made using metrics ranging from single-unit responses to functional imaging in both animals and humans. Despite this wealth of potentially complementary data, general consensus has yet to be achieved. 
Effects of attention on basic visual responses, such as contrast-response, have yielded different conclusions in and across measurements from fMRI (Buracas and Boynton, 2007; Murray, 2008; Pestilli et al., 2011), voltage-sensitive dye imaging (Chen and Seidemann, 2012), and single units and EEG (McAdams and Maunsell, 1999; Williford and Maunsell, 2006; Cohen and Maunsell, 2012; Di Russo et al., 2001; Itthipuripat et al., 2014; Kim et al., 2007; Lauritzen et al., 2010; Wang and Wade, 2011). Task-related responses measured with optical imaging (Sirotin and Das, 2009; Cardoso et al., 2012) also suggest some discrepancy across measurements. These disparate results lead to different models relating neural mechanisms of attention to behavior (e.g. Pestilli et al., 2011; Itthipuripat et al., 2014). Moreover, some attention effects, like reductions in neural variance and pairwise correlations (Cohen and Maunsell, 2009; Herrero et al., 2013; Mitchell et al., 2007; 2009; Niebergall et al., 2011), as well as changes in synaptic efficacy (Briggs et al., 2013), cannot even be assessed across all measurements. Rather than considering one specific measurement as privileged, providing ground truth, we propose striving for a synthesis that explains the totality of evidence. Theoretical modeling (e.g. Reynolds and Heeger, 2009) provides frameworks that offer the potential for reconciling across measurements (Hara et al., 2014). This symposium is aimed at bringing together people using different spatiotemporal scales of measurement with an eye towards synthesizing disparate sources of knowledge about neural mechanisms for visual attention and their role in predicting behavior. Speakers are encouraged to present results from a perspective that allows direct comparison with other measurements, and to critically evaluate whether and why there may be discrepancies.
Importantly, the discrepancies observed using these different measures can either lead to very different models of basic neural mechanisms or can be used to mutually constrain models linking neural activity to behavior.
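The normalization framework cited above (Reynolds and Heeger, 2009) illustrates how a single model can reconcile attention effects across measurements. The sketch below is a deliberately minimal one-dimensional illustration of the core idea (attention scales the stimulus drive before divisive normalization); the function name, parameter values, and the crude mean-based normalization pool are our simplifications, not the published model.

```python
import numpy as np

def normalization_model(stim_drive, attn_gain, sigma=0.1):
    """Minimal 1-D sketch of a normalization model of attention:
    attention multiplies the stimulus drive, and the result is
    divisively normalized by a pooled suppressive drive."""
    excitatory = stim_drive * attn_gain        # attention scales the drive
    suppressive = excitatory.mean()            # crude normalization pool
    return excitatory / (suppressive + sigma)  # divisive normalization

# Two equally strong stimuli; attention directed to the first.
stim = np.array([1.0, 1.0])
neutral = normalization_model(stim, np.array([1.0, 1.0]))
attended = normalization_model(stim, np.array([2.0, 1.0]))
```

Even this toy version reproduces the push-pull signature of divisive normalization: the response to the attended stimulus rises while the response to the ignored one falls, which is one reason the framework can accommodate superficially conflicting gain effects across measurement modalities.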

Presentations

Linking brain activity to visual attentional behavior considering multiple spatial-scales of measurement

Speaker: Justin Gardner; Department of Psychology, Stanford University
Authors: Franco Pestilli; Department of Psychological and Brain Sciences, Program in Neuroscience, Indiana University

Understanding the human neural mechanisms that underlie behavioral enhancement due to visual spatial attention requires synthesis of knowledge gained across many different spatial scales of measurement and species. Our lab has focused on the measurement of contrast-response and how it changes with attention in humans. Contrast is a key visual variable in that it controls visibility, and measurements from single units to optical imaging to fMRI find general consistency in that cortical visual areas respond with monotonically increasing functions to increases in contrast. Building on this commonality across multiple spatial scales of measurement, we have implemented computational models that predict behavioral performance enhancement from fMRI measurements of contrast-response, in which we tested various linking hypotheses, from sensory enhancement and noise reduction to efficient selection. Our analysis of the human fMRI data suggested a prominent role for efficient selection in determining behavior. Our work is heavily informed by the physiology literature, particularly because some properties of neural response, such as efficiency of synaptic transmission or correlation of activity, are difficult if not impossible to determine in humans. Nonetheless, discrepancies across measurements suggest potential difficulties in interpreting results from any single measurement modality. We will discuss our efforts to address these potential discrepancies by adapting computational models used to explain disparate effects across different single-unit studies to larger spatial-scale population measures such as fMRI.

EEG and fMRI provide different insights into the link between attention and behavior in human visual cortex

Speaker: John Serences; Neurosciences Graduate Program and Psychology Department, University of California, San Diego
Authors: Sirawaj Itthipuripat1, Thomas Sprague1, Edward F Ester2, Sean Deering2; 1Neurosciences Graduate Program 2Psychology Department, University of California, San Diego

An fMRI study by Pestilli et al. (2011) established a method for modeling links between attention-related changes in BOLD activation in visual cortex and changes in behavior. The study found that models based on sensory gain and noise reduction could not explain the relationship between attention-related changes in behavior and attention-related additive shifts of the BOLD contrast-response function (CRF). However, a model based on efficient post-sensory read-out successfully linked BOLD modulations and behavior. We performed a similar study but used EEG instead of fMRI as a measure of neural activity in visual cortex (Itthipuripat et al., 2014). Instead of additive shifts in the BOLD response, attention induced a temporally early multiplicative gain of visually evoked potentials over occipital electrodes, and a model based on sensory gain sufficiently linked attention-induced changes in EEG responses and behavior, without the need to incorporate efficient read-out. We also observed differences between attention-induced changes in EEG-based CRFs (multiplicative gain) and fMRI-based CRFs (additive shift) within the same group of subjects performing an identical spatial attention task. These results suggest that attentional modulation of EEG responses interacts with the magnitude of sensory-evoked responses, whereas attentional modulation of fMRI signals is largely stimulus-independent. This raises the intriguing possibility that EEG and fMRI signals provide complementary insights into cortical information processing, and that these complementary signals may help to better constrain quantitative models that link neural activity and behavior.
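The CRF modulations contrasted in this abstract, multiplicative gain (EEG) versus an additive shift (fMRI), are conveniently expressed with a Naka-Rushton contrast-response function, a standard parameterization in this literature. The sketch below is illustrative only: the parameter values are arbitrary, not fitted to either dataset, and a third variant (contrast gain, a leftward shift of the semi-saturation contrast) is included for comparison.

```python
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0, baseline=0.0):
    """Naka-Rushton contrast-response function.
    c: stimulus contrast in [0, 1]; r_max: response ceiling;
    c50: semi-saturation contrast; n: slope; baseline: offset."""
    return baseline + r_max * c**n / (c**n + c50**n)

contrasts = np.linspace(0.0, 1.0, 11)

# Three hypothetical ways attention could modulate the CRF
# (illustrative parameter changes, not fitted values):
unattended    = naka_rushton(contrasts)
response_gain = naka_rushton(contrasts, r_max=1.3)     # multiplicative gain
contrast_gain = naka_rushton(contrasts, c50=0.2)       # leftward c50 shift
additive      = naka_rushton(contrasts, baseline=0.2)  # additive shift
```

Plotting these four curves makes the distinction in the abstract concrete: multiplicative gain scales the whole function, the additive shift raises it uniformly regardless of contrast, and contrast gain increases responses most at intermediate contrasts.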

Attentional modulations of sub- and supra-threshold neural population responses in primate V1

Speaker: Eyal Seidemann; Department of Psychology and Center for Perceptual Systems, The University of Texas at Austin

Voltage-sensitive dye imaging (VSDI) measures local changes in pooled membrane potentials, simultaneously from dozens of square millimeters of cortex, with millisecond temporal resolution and spatial resolution sufficient to resolve cortical orientation columns. To better understand the quantitative relationship between the VSDI signal and spiking activity of a local neural population, we compared visual responses measured from V1 of behaving monkeys using VSDI and single-unit electrophysiology. We found large and systematic differences between response properties obtained with these two techniques. We then used these results to develop a simple computational model of the quantitative relationship between the average VSDI signal and local spiking activity. In this talk I will describe the model and demonstrate how it can be used to interpret top-down attentional modulations observed using VSDI in macaque V1.

Task-related Responses in Intrinsic-Signal Optical Imaging

Speaker: Aniruddha Das; Department of Neuroscience, Psychiatry, and Biomedical Engineering, Columbia University
Authors: Cardoso, M.1,2, Lima, B.2, Sirotin, Y.2; 1Champalimaud Neuroscience Program (CNP), Lisbon, Portugal; 2Department of Neuroscience, Columbia University, New York, NY

There is a growing appreciation of the importance of endogenous, task-related processes such as attention and arousal even at the earliest stages of sensory processing. By combining intrinsic-signal optical imaging with simultaneous electrode recordings we earlier demonstrated a particular task-related response – distinct from stimulus-evoked responses – in primary visual cortex (V1) of macaque monkeys engaged in visual tasks. The task-related response powerfully reflects behavioral correlates of the task, independent of visual stimulation; it entrains to task timing, increasing progressively in amplitude and duration with temporal anticipation; and it correlates with both task-related rewards and performance. Notably, however, the effect of the task-related response on stimulus-evoked responses – such as the contrast response function (CRF) – remains an open question. For tasks that are stereotyped and independent of visual stimulation, the task- and stimulus-related responses are linearly separable: the task-related component can be subtracted away, leaving an imaged contrast response function that is robustly linear with stimulus-evoked spiking. When the task-related response is modified – for example, by increasing the reward size – the effect is largely additive: the baseline imaging response increases without, to first order, changing the CRF of the stimulus-evoked component. Thus the important question remains: are there other reliable measures of changes in neural activity, such as changes in signal or noise correlation, rather than local spike rate or LFP magnitude, that can better characterize the task-related response?

Attention and neuronal circuits

Speaker: Farran Briggs; Geisel School of Medicine at Dartmouth College

Visual attention has a profound impact on perception; however, we currently lack a neurobiological definition of attention. In other words, we lack an understanding of the cellular and circuit mechanisms underlying attentional modulation of neuronal activity in the brain. The main objective of my research is to understand how visual spatial attention alters the way in which neurons communicate with one another. Previously, my colleagues and I demonstrated that attention enhances the efficacy of signal transmission in the geniculocortical circuit. Through this work, we suggest that the mechanisms underlying attentional modulation of neuronal activity involve enhancing signal transmission in neuronal circuits and increasing the signal-to-noise ratio of information transmitted in these circuits. Results from my lab indicate that these mechanisms can explain attentional modulations in firing rate observed in primary visual cortical neurons. Our current research focuses on understanding the rules governing attentional modulation of different functional circuits in the visual cortex. Preliminary results suggest that attention differentially regulates the activity of neuronal circuits depending on the types of information conveyed within those circuits. Overall, our results support a mechanistic definition of attention as a process that alters the dynamics of communication in specific neuronal circuits. I believe this circuit-level understanding of how attention alters neuronal activity is required in order to develop more targeted and effective treatments for attention deficits.

A comparison of electrophysiology and fMRI signals in area V1

Speaker: Geoffrey Boynton; University of Washington, Seattle, WA

fMRI measures in area V1 typically show remarkable consistency with what is expected from monkey electrophysiology studies. However, discrepancies between fMRI and electrophysiology appear for non-stimulus-driven factors such as attention and visual awareness. I will discuss possible explanations for these discrepancies, including the role of LFPs in the hemodynamic coupling process, the effects of feedback and timing, and the overall sensitivity of the BOLD signal.


How to break the cortical face perception network

Time/Room: Friday, May 15, 2015, 2:30 – 4:30 pm, Pavilion
Organizer(s): David Pitcher; NIMH
Presenters: Marlene Behrmann, Arash Afraz, Kevin Weiner, David Pitcher


Symposium Description

Faces are a rich source of social information that simultaneously convey an individual’s identity, attentional focus, and emotional state. Primate visual systems are so efficient that processing this wealth of information seems to happen effortlessly. Yet even the simplest functions, like recognizing your mother or judging her mood, require the interaction of multiple specialized brain regions distributed across cortex. Despite many years of study, our understanding of the unique functions performed by each region, and of how these regions interact to facilitate face perception, remains limited. The speakers in this symposium use novel combinations of experimental techniques to study the behavioral effects of damage and disruption in the cortical face perception network in both human and non-human primates. Our aims are to update the fundamental understanding of how faces are cortically represented and to establish common theoretical ground amongst researchers who use different experimental techniques. To achieve this we will present studies using a range of subject populations (healthy humans, brain-damaged patients, pre-operative epileptic patients, and macaques) and experimental methods (optogenetics, fMRI, microstimulation, physiology, TMS, diffusion-weighted imaging, and neuropsychology). We believe this symposium will be of great interest to VSS attendees for two reasons. First, the neural processes underlying face perception have proven to be a testing ground for key disputes concerning anatomical specificity and computational modularity, and therefore attract great interest amongst all cognitive neuroscientists. Second, the face network serves as an excellent proxy for studying the whole brain as a network, and we believe attendees will be eager to apply the experimental techniques discussed to address their own questions.
The symposium will conclude with an open discussion between the speakers and the audience to establish common ground between those who use different experimental methods and who hold different theoretical positions.

Presentations

Reverse engineering the face perception system: insights from congenital prosopagnosia

Speaker: Marlene Behrmann; Department of Psychology, Carnegie Mellon University, USA

Reverse engineering involves disassembling a complex device and analyzing its components and workings in detail, with the goal of understanding how the device works in its intact state. To elucidate the neural components implicated in normal face perception, we investigate the disrupted components in individuals with congenital prosopagnosia (CP), an apparently lifelong impairment in face processing despite normal vision and other cognitive skills. Structural and functional MRI data reveal compromised connectivity between more posterior face-selective cortical patches and more anterior regions that respond to face stimuli. Computational descriptions of the topology of this connectivity, using measures from graph theory that permit the construction of the network at the level of the whole brain, uncover atypical organization of the face network in CP. Moreover, this network disorganization is increasingly pronounced as a function of the severity of the face recognition disorder. Last, we reconstruct the face images viewed by normal and prosopagnosic observers from the neural data and demonstrate the altered underlying representations in key cortical regions in the prosopagnosic individuals. This multipronged approach uncovers in fine-grained detail the alteration in information discrimination in the prosopagnosic individuals, as well as the perturbations in the neural network that gives rise to normal face perception.

The causal role of face-selective neurons in face perception

Speaker: Arash Afraz; Massachusetts Institute of Technology

Many neurons in the inferior temporal cortex (IT) of primates respond more strongly to images of faces than to images of non-face objects. Such so-called ‘face neurons’ are thought to be involved in face recognition behaviors such as face detection and face discrimination. While this view implies a causal role for face neurons in such behaviors, the main body of neurophysiological evidence to support it is only correlational. Here, I bring together evidence from electrical microstimulation, optogenetics, and pharmacological intervention to bridge the gap between the spiking of face-selective IT neurons and face perception.

The human face processing network is resilient after resection of specialized cortical inputs

Speaker: Kevin Weiner; Department of Psychology, Stanford University

Functional hierarchies are a prevalent feature of brain organization. In high-level visual cortex, the ‘occipital face area’ (OFA/IOG-faces) is thought to be the input to a specialized processing hierarchy subserving human face perception. However, evidence supporting or refuting the causal role of IOG-faces as a necessary input to the face network evades researchers because it necessitates a patient with a focal lesion of the right inferior occipital cortex, as well as functional measurements both before and after surgical removal of this region. Here, in a rare patient fulfilling both of these requirements, we show that the face network is surprisingly resilient in two ways following surgical removal of IOG-faces. First, the large-scale cortical layout and selectivity of the face network are stable after removal of IOG-faces. Second, following resection, face-selective responses in ventral temporal cortex surprisingly become more reliable in the resected hemisphere, but not in the intact hemisphere. Further investigations of the anatomical underpinnings of this resiliency using diffusion tensor imaging suggest the existence of additional white matter pathways connecting early visual cortex to downstream face-selective regions independent of IOG-faces. Thus, after resection, neural signals can still reach downstream regions via these pathways that are largely unconsidered by present neurofunctional models of face processing. Altogether, these measurements indicate that IOG-faces is not the key input to the face network. Furthermore, our results pose important constraints on hierarchical models in high-level sensory cortices and provide powerful insight into the resiliency of such networks after damage or cortical trauma.

Transient disruption in the face perception network: combining TMS and fMRI

Speaker: David Pitcher; NIMH

Faces contain structural information for identifying individuals, as well as changeable information that conveys emotion and directs attention. Neuroimaging studies reveal brain regions that exhibit preferential responses to invariant or changeable facial aspects, but the functional connections between these regions are unknown. This issue was addressed by causally disrupting two face-selective regions with theta-burst transcranial magnetic stimulation (TBS) and measuring the effects of this disruption in local and remote face-selective regions with functional magnetic resonance imaging (fMRI). Participants were scanned, over two sessions, while viewing dynamic or static faces and objects. During these sessions, TBS was delivered over the right occipital face area (rOFA) or right posterior superior temporal sulcus (rpSTS). Disruption of the rOFA reduced the neural response to both static and dynamic faces in the downstream face-selective region in the fusiform gyrus. In contrast, the response to dynamic and static faces was doubly dissociated in the rpSTS. Namely, disruption of the rOFA reduced the response to static but not dynamic faces, while disruption of the rpSTS itself reduced the response to dynamic but not static faces. These results suggest that dynamic and static facial aspects are processed via dissociable cortical pathways that begin in early visual cortex, a conclusion inconsistent with current models of face perception.

