2012 Young Investigator – Geoffrey F. Woodman

Geoffrey F. Woodman

Department of Psychology and Vanderbilt Vision Research Center
Vanderbilt University

Dr. Geoffrey F. Woodman is the 2012 winner of the Elsevier/VSS Young Investigator Award. Dr. Woodman is an Assistant Professor in the Department of Psychology and Vanderbilt Vision Research Center at Vanderbilt University, in Nashville, Tennessee. Geoff’s important contributions to vision science range from fundamental insights into human visual cognition to the development of novel electrophysiological techniques. His uniquely integrated approach to comparative electrophysiology has demonstrated homologies between man and monkey in the ERP components underlying attention and early visual processes, enabling new understanding of their neural bases. Geoff has also made key breakthroughs in the understanding of visual working memory, placing it at the center of the interaction between high-level cognition and perception. In the ten years since receiving his PhD, Geoff has been exceptionally productive, moving forward the core disciplines of visual perception, attention, and memory through his many insightful and high-impact papers. His breadth, technical versatility, and innovation, particularly in linking human and non-human-primate studies, represent true excellence in vision sciences research.

Elsevier/Vision Research Article

Dr. Woodman’s presentation:

Attention, memory, and visual cognition viewed through the lens of electrophysiology

Sunday, May 13, 7:00 pm, Royal Palm Ballroom

How do we find our children on a crowded playground, our keys in the kitchen, or hazards in the roadway? This talk will begin by discussing how measurements of electrical potentials from the brain offer a lens through which to observe the processing of such complex scenes unfold. For example, I will discuss our work showing that when humans search for targets in cluttered scenes, we can directly measure the target representations maintained in visual working memory and what information is selected by attention. Moreover, when the searched-for target is the same across a handful of trials, we can watch these attentional templates in working memory being handed off to long-term memory. Next, I will discuss our recent work demonstrating that redundant target representations in working and long-term memory appear to underlie our ability to exert enhanced cognitive control over visual cognition. Finally, I will discuss our work focused on understanding the nature of these electrophysiological tools. In studies with nonhuman primates we have the ability to record event-related potentials from outside the brain, as we do with humans, but also activity inside the brain, revealing the neural network generating these critical indices of attention, memory, and a host of other cognitive processes.

2012 Keynote – Ranulfo Romo

Ranulfo Romo, M.D., D.Sc.

Professor of Neuroscience at the Institute of Cellular Physiology, National Autonomous University of Mexico (UNAM)

Audio and slides from the 2012 Keynote Address are available on the Cambridge Research Systems website.

Conversion of sensory signals into perceptual decisions

Saturday, May 12, 2012, 7:00 pm, Royal Ballroom 4-5

Most perceptual tasks require sequential steps to be carried out. This must be the case, for example, when subjects discriminate the difference in frequency between two mechanical vibrations applied sequentially to their fingertips. This perceptual task can be understood as a chain of neural operations: encoding the two consecutive stimulus frequencies, maintaining the first stimulus in working memory, comparing the second stimulus to the memory trace left by the first stimulus, and communicating the result of the comparison to the motor apparatus. Where and how in the brain are these cognitive operations executed? We addressed this problem by recording single neurons from several cortical areas while trained monkeys executed the vibrotactile discrimination task. We found that primary somatosensory cortex (S1) drives higher cortical areas where past and current sensory information are combined, such that a comparison of the two evolves into a decision. Consistent with this result, direct activation of S1 can trigger quantifiable percepts in this task. These findings provide a fairly complete panorama of the neural dynamics that underlie the transformation of sensory information into an action and emphasize the importance of studying multiple cortical areas during the same behavioral task.
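
The chain of operations described above can be caricatured in a few lines of code. The sketch below is purely illustrative (the frequencies and noise level are invented, and real models of this task are far richer): memory for the first frequency is modeled as a noisy trace, and the decision is a comparison of the second frequency against that trace.

```python
import random

def discriminate(f1_hz, f2_hz, memory_noise_sd, rng):
    """Toy version of the processing chain: encode f1, hold it in
    working memory (modeled as added Gaussian noise), compare f2
    against the noisy trace, and report the outcome."""
    trace = f1_hz + rng.gauss(0.0, memory_noise_sd)  # degraded memory of f1
    return f2_hz > trace  # True -> respond "f2 higher"

rng = random.Random(1)
n = 10_000
# Performance falls as the two frequencies get closer, because the
# comparison is made against a noisy memory trace, not f1 itself.
easy = sum(discriminate(20.0, 26.0, 2.0, rng) for _ in range(n)) / n
hard = sum(discriminate(20.0, 21.0, 2.0, rng) for _ in range(n)) / n
```

As the two frequencies approach one another, the noisy comparison fails more often, mirroring the increasing difficulty of the discrimination.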

Biography

Ranulfo Romo is Professor of Neuroscience at the Institute of Cellular Physiology of the National Autonomous University of Mexico (UNAM). He received his M.D. degree from UNAM and a D.Sc. in the field of neuroscience from the University of Paris in France. His postdoctoral work was done with Wolfram Schultz at the University of Fribourg in Switzerland and Vernon Mountcastle at The Johns Hopkins University in Baltimore. Romo has received the Demuth Prize in Neuroscience from the Demuth Foundation, the National Prize on Sciences and Arts from the Mexican government, and the Prize in Basic Medical Sciences from the Academy of Sciences for the Developing World (TWAS). He is a member of the Mexican Academy of Sciences, a member of the Neurosciences Research Program headed by Nobel laureate Gerald Edelman, and a Foreign Associate of the US National Academy of Sciences. Romo has been a Howard Hughes International Research Scholar since 1991 and was recently elected a member of El Colegio Nacional.

2012 Symposia

Pulvinar and Vision: New insights into circuitry and function

Organizer: Vivien A. Casagrande, Department of Cell & Developmental Biology, Vanderbilt Medical School
Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 1-3

The most mysterious nucleus of the visual thalamus is the pulvinar. In most mammals the pulvinar is the largest thalamic nucleus, and it has progressively enlarged in primate evolution so that it dwarfs the remainder of the thalamus in humans. Despite the large size of the pulvinar, relatively little is known regarding its function, and consequently its potential influence on cortical activity patterns is unappreciated. This symposium will outline new insights regarding the role of the pulvinar nucleus in vision, and should provide the VSS audience with a new appreciation of the interactions between the pulvinar nucleus and cortex. More…

What does fMRI tell us about brain homologies?

Organizer: Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology
Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 4-5

Over the past 20 years, functional magnetic resonance imaging (fMRI) has provided a great deal of knowledge about the functional organization of human visual cortex. In recent years, the development of the fMRI technique in non-human primates has enabled neuroscientists to directly compare visual cortical areas across species. These comparative studies have shown striking similarities (‘homologies’) between human and monkey visual cortex. Comparing cortical structures in human versus monkey provides a framework for generalizing results from invasive neurobiological studies in monkeys to humans. It also provides important clues for understanding the evolution of cerebral cortex in primates. More…

Part-whole relationships in visual cortex

Organizer: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven
Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 6-8

In 1912 Wertheimer launched Gestalt psychology, arguing that the whole is different from the sum of the parts. Wholes were considered primary in perceptual experience, even determining what the parts are. How to reconcile this position with what we now know about the visual brain, in terms of a hierarchy of processing layers from low-level features to integrated object representations at the higher level? What exactly are the relationships between parts and wholes then? A century later, we will take stock and provide an answer from a diversity of approaches, including single-cell recordings, human fMRI, human psychophysics, and computational modeling. More…

Distinguishing perceptual shifts from response biases

Organizer: Joshua Solomon, City University London
Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 1-3

Our general topic will be the measurement of perceptual biases. These are changes in appearance that cannot be attributed to changes in the visual stimulus. One perceptual bias that has received a lot of attention lately is the change in apparent contrast that observers report when they attend (or remove attention from) a visual target. We will discuss how to distinguish reports of truly perceptual changes from changes in response strategies. More…

Human visual cortex: from receptive fields to maps to clusters to perception

Organizer: Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 4-5

This symposium will introduce current concepts of the organization of visual cortex at different spatial scales and their relation to perception. At the smallest scale, the receptive field is a property of individual neurons and summarizes the visual field region where visual stimulation elicits a response. These receptive fields are organized into visual field maps, which in turn are organized in clusters that share a common fovea. We will relate these principles to notions of population receptive fields (pRF), cortico-cortical pRFs, extra-classical contextual effects, detailed foveal organization, visual deprivation, prism adaptation, and plasticity. More…

Neuromodulation of Visual Perception

Organizers: Jutta Billino, Justus-Liebig-University Giessen, and Ulrich Ettinger, Rheinische Friedrich-Wilhelms-Universität Bonn
Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 6-8

Although the neuronal bases of vision have been extensively explored over the last decades, we are just beginning to understand how visual perception is modulated by neurochemical processes in our brain. Recent research provides first insights into the regulation of signal processing by different neurotransmitters. This symposium is devoted to two questions: (1) by which mechanisms do neurotransmitters influence perception, and (2) how might individual differences in neurotransmitter activity explain normal variation and altered visual processing in mental disease and during ageing. Presentations will provide an overview of state-of-the-art methods and findings concerning the complexity of neuromodulation of visual perception. More…

Human visual cortex: from receptive fields to maps to clusters to perception

Friday, May 11, 3:30 – 5:30 pm

Organizer: Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands

Presenters: Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands; Koen V. Haak, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Alex R. Wade, Department of Psychology, University of York, Heslington, UK; Mark M. Schira, Neuroscience Research Australia (NeuRA), Sydney & University of New South Wales, Sydney, Australia; Stelios M. Smirnakis, Departments of Neuroscience and Neurology, Baylor College of Medicine, Houston, TX; Alyssa A. Brewer, Department of Cognitive Sciences, University of California, Irvine

Symposium Description

The organization of the visual system can be described at different spatial scales. At the smallest scale, the receptive field is a property of individual neurons and summarizes the region of the visual field where visual stimulation elicits a response. These receptive fields are organized into visual field maps, where neighboring neurons process neighboring parts of the visual field. Many visual field maps exist, suggesting that every map contains a unique representation of the visual field. This notion relates the visual field maps to the idea of functional specialization, i.e. separate cortical regions are involved in different processes. However, the computational processes within a visual field map do not have to coincide with perceptual qualities. Indeed, most perceptual functions are associated with multiple visual field maps and even multiple cortical regions. Visual field maps are organized in clusters that share a similar eccentricity organization. This has led to the proposal that perceptual specializations correlate with clusters rather than individual maps. This symposium will highlight current concepts of the organization of visual cortex and their relation to perception and plasticity. The speakers have used a variety of neuroimaging techniques, with a focus on conventional functional magnetic resonance imaging (fMRI) approaches but also including high-resolution fMRI, electroencephalography (EEG), subdural electrocorticography (ECoG), and invasive electrophysiology. We will describe data-analysis techniques to reconstruct receptive field properties of neural populations, and extend them to visual field maps and clusters within human and macaque visual cortex. We describe the way these receptive field properties vary within and across different visual field maps. Next, we extend conventional stimulus-referred notions of the receptive field to neural-referred properties, i.e. cortico-cortical receptive fields that capture the information flow between visual field maps. We also demonstrate techniques to reveal extra-classical receptive field interactions similar to those seen in classical psychophysical ‘surround suppression’ in both S-cone and achromatic pathways. Next we will consider the detailed organization within the foveal confluence, and model the unique constraints that are associated with this organization. Furthermore, we will consider how these neural properties change with the state of chronic visual deprivation due to damage to the visual system, and in subjects with severely altered visual input due to prism adaptation. The link between the organization of visual cortex, perception, and plasticity is a fundamental part of vision science. The symposium highlights these links at various spatial scales. In addition, attendees will gain insight into a broad spectrum of state-of-the-art data-acquisition and data-analysis neuroimaging techniques. Therefore, we believe that this symposium will be of interest to a wide range of visual scientists, including students, researchers, and faculty.

Presentations

Reconstructing human population receptive field properties

Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands, B.M. Harvey, Experimental Psychology, Utrecht University, Netherlands

We describe a method that reconstructs population receptive field (pRF) properties in human visual cortex using fMRI. This data-analysis technique is able to reconstruct several properties of the underlying neural population, such as quantitative estimates of pRF position (maps) and size, as well as suppressive surrounds. PRF sizes increase with increasing eccentricity and up the visual hierarchy. In the same human subject, fMRI pRF measurements are comparable to those derived from subdural electrocorticography (ECoG). Furthermore, we describe a close relationship of pRF sizes to the cortical magnification factor (CMF). Within V1, interhemisphere and subject variations in CMF, pRF size, and V1 surface area are correlated. This suggests a constant processing unit shared between humans. PRF sizes increase between visual areas and with eccentricity, but when expressed in V1 cortical surface area (i.e., cortico-cortical pRFs), they are constant across eccentricity in V2 and V3. Thus, V2, V3, and to some degree hV4, sample from a constant extent of V1. This underscores the importance of V1 architecture as a reference frame for subsequent processing stages and ultimately perception.
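
The grid-search logic behind this kind of pRF estimation can be sketched as follows. This is a toy reconstruction on synthetic, noiseless data in the spirit of the Dumoulin and Wandell (2008) approach, not their implementation: the grid resolution, bar width, and hidden pRF parameters are invented for illustration.

```python
import numpy as np

def gaussian_prf(xs, ys, x0, y0, sigma):
    """2D Gaussian receptive field over a visual-field grid (degrees)."""
    return np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

grid = np.linspace(-10, 10, 41)              # degrees of visual angle
xs, ys = np.meshgrid(grid, grid)

# Bar apertures sweeping horizontally and vertically, one per time point.
frames = [np.abs(xs - x) < 1.5 for x in grid] + [np.abs(ys - y) < 1.5 for y in grid]
stim = np.stack(frames).astype(float).reshape(len(frames), -1)

# Synthetic, noiseless "voxel": its response at each time point is the
# overlap between the stimulus aperture and a hidden Gaussian pRF.
true_prf = gaussian_prf(xs, ys, x0=4.0, y0=-2.0, sigma=1.5)
data = stim @ true_prf.ravel()

# Coarse grid search over position and size; the response amplitude is
# fit by least squares, and the best model minimizes residual error.
best, best_err = None, np.inf
for x0 in grid[::4]:
    for y0 in grid[::4]:
        for sigma in (0.5, 1.5, 3.0):
            pred = stim @ gaussian_prf(xs, ys, x0, y0, sigma).ravel()
            beta = pred @ data / max(pred @ pred, 1e-12)
            err = np.sum((data - beta * pred) ** 2)
            if err < best_err:
                best, best_err = (x0, y0, sigma), err
```

With noiseless data and the true parameters on the search grid, the procedure recovers the hidden pRF exactly; real pipelines follow the coarse search with a fine nonlinear optimization.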

Cortico-cortical receptive field modeling using functional magnetic resonance imaging (fMRI)

Koen V. Haak, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands, J. Winawer, Psychology, Stanford University; B.M. Harvey, Experimental Psychology, Utrecht University; R. Renken, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands; S.O. Dumoulin, Experimental Psychology, Utrecht University, Netherlands; B.A. Wandell, Psychology, Stanford University; F.W. Cornelissen, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands

The traditional way to study the properties of cortical visual neurons is to measure responses to visually presented stimuli (stimulus-referred). A second way to understand neuronal computations is to characterize responses in terms of the responses in other parts of the nervous system (neural-referred). A model that describes the relationship between responses in distinct cortical locations is essential to clarify the network of cortical signaling pathways. Just as a stimulus-referred receptive field predicts the neural response as a function of the stimulus contrast, the neural-referred receptive field predicts the neural response as a function of responses elsewhere in the nervous system. When applied to two cortical regions, this function can be called the population cortico-cortical receptive field (CCRF), and it can be used to assess the fine-grained topographic connectivity between early visual areas. Here, we model the CCRF as a Gaussian-weighted region on the cortical surface and apply the model to fMRI data from both stimulus-driven and resting-state experimental conditions in visual cortex to demonstrate that 1) higher-order visual areas such as V2, V3, hV4 and the LOC show an increase in the CCRF size when sampling from the V1 surface, 2) the CCRF size of these higher-order visual areas is constant over the V1 surface, 3) the method traces inherent properties of the visual cortical organization, and 4) it probes the direction of the flow of information.
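
A minimal sketch of the CCRF idea, assuming a one-dimensional "cortical strip" and synthetic time series (the distances, parameter grids, and data below are invented; the published model operates on real cortical surfaces):

```python
import numpy as np

# Sketch of a cortico-cortical (connective-field style) model: a target
# voxel's time series is predicted as a Gaussian-weighted sum of V1
# voxel time series, with the Gaussian defined over cortical distance.
rng = np.random.default_rng(0)
v1_pos = np.linspace(0, 50, 101)            # mm along a V1 "strip"
v1_ts = rng.standard_normal((101, 200))     # 200 time points per V1 voxel

def ccrf_predict(center_mm, sigma_mm):
    """Predicted target time series for a CCRF at center_mm with
    spread sigma_mm on the (1-D) V1 cortical surface."""
    w = np.exp(-((v1_pos - center_mm) ** 2) / (2 * sigma_mm ** 2))
    return (w / w.sum()) @ v1_ts

target_ts = ccrf_predict(20.0, 4.0)         # synthetic target voxel

# Grid search recovers the center and spread of the cortico-cortical pRF.
centers = np.arange(0, 51, 2.0)
sigmas = (2.0, 4.0, 8.0)
best = min(((c, s) for c in centers for s in sigmas),
           key=lambda p: np.sum((target_ts - ccrf_predict(*p)) ** 2))
```

Because the weighting is defined on the cortex rather than on the stimulus, the same fit works for stimulus-driven and resting-state data alike, which is the point made in the abstract.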

Imaging extraclassical receptive fields in early visual cortex

Alex R. Wade, Department of Psychology, University of York, Heslington, UK, B. Xiao, Department of Brain and Cognitive Sciences, MIT; J. Rowland, Department of Art Practice, UC Berkeley

Psychophysically, apparent color and contrast can be modulated by long-range contextual effects. In this talk I will describe a series of neuroimaging experiments that we have performed to examine the effects of spatial context on color and contrast signals in early human visual cortex. Using fMRI, we first show that regions of high contrast in the fovea exert a long-range suppressive effect across visual cortex that is consistent with a contrast gain control mechanism. This suppression is weaker when using stimuli that excite the chromatic pathways and may occur relatively early in the visual processing stream (Wade, Rowland, J Neurosci, 2010). We then used high-resolution source-imaged EEG to examine the effects of context on V1 signals initiated in different chromatic and achromatic precortical pathways (Xiao and Wade, J Vision, 2010). We found that contextual effects similar to those seen in classical psychophysical ‘surround suppression’ were present in both S-cone and achromatic pathways but that there was little contextual interaction between these pathways, either in our behavioral or in our neuroimaging paradigms. Finally, we used fMRI multivariate pattern analysis techniques to examine the presence of chromatic tuning in large extraclassical receptive fields (ECRFs). We found that ECRFs have sufficient chromatic tuning to enable classification based solely on information in suppressed voxels that are not directly excited by the stimulus. In many cases, performance using ECRFs was as accurate as that using voxels driven directly by the stimulus.

The human foveal confluence and high resolution fMRI

Mark M. Schira, Neuroscience Research Australia (NeuRA), Sydney & University of New South Wales, Sydney, Australia

After remaining terra incognita for 40 years, the detailed organization of the foveal confluence has only recently been described in humans. I will present recent high-resolution mapping results in human subjects and introduce current concepts of its organization in human and other primates (Schira et al., J. Neurosci., 2009). I will then introduce a new algebraic retino-cortical projection function that accurately models the V1-V3 complex to the level of our knowledge about the actual organization (Schira et al., PLoS Comput. Biol., 2010). Informed by this model, I will discuss important properties of foveal cortex in primates. These considerations demonstrate that the observed organization, though surprising at first glance, is in fact a good compromise with respect to cortical surface and local isotropy, providing a potential explanation for this organization. Finally, I will discuss recent advances such as multi-channel head coils and parallel imaging, which have greatly improved the quality and possibilities of MRI. Unfortunately, most fMRI research is still essentially performed in the same old 3 by 3 by 3 mm style, which was adequate when using a 1.5T scanner and a birdcage head coil. I will introduce simple high-resolution techniques that allow fairly accurate estimates of the foveal organization in research subjects within a reasonable timeframe of approximately 20 minutes, providing a powerful tool for research on foveal vision.

Population receptive field measurements in macaque visual cortex

Stelios M. Smirnakis, Departments of Neuroscience and Neurology, Baylor College of Medicine, Houston, TX, G.A. Keliris, Max Planck Inst. for Biol. Cybernetics, Tuebingen, Germany; Y. Shao, A. Papanikolaou, Max Planck Inst. for Biol. Cybernetics, Tuebingen, Germany; N.K. Logothetis, Max Planck Inst. for Biol. Cybernetics, Tuebingen, Germany, Div. of Imaging Sci. and Biomed. Engin., Univ. of Manchester, United Kingdom

Visual receptive fields have dynamic properties that may change with the conditions of visual stimulation or with the state of chronic visual deprivation. We used 4.7 Tesla functional magnetic resonance imaging (fMRI) to study the visual cortex of two normal adult macaque monkeys and one macaque with binocular central retinal lesions due to a form of juvenile macular degeneration (MD). FMRI experiments were performed under light remifentanil-induced anesthesia (Logothetis et al., Nat. Neurosci., 1999). Standard moving horizontal/vertical bar stimuli were presented to the subjects, and the population receptive field (pRF) method (Dumoulin and Wandell, Neuroimage, 2008) was used to measure retinotopic maps and pRF sizes in early visual areas. FMRI measurements of normal monkeys agree with published electrophysiological results, with pRF sizes and electrophysiology measurements showing similar trends. For the MD monkey, the size and location of the lesion projection zone (LPZ) was consistent with the retinotopic projection of the retinal lesion in early visual areas. No significant BOLD activity was seen within the V1 LPZ, and the retinotopic organization of the non-deafferented V1 periphery was regular without distortion. Interestingly, area V5/MT of the MD monkey showed more extensive activation than area V5/MT of control monkeys which had part of their visual field obscured (artificial scotoma) to match the scotoma of the MD monkey. V5/MT pRF sizes of the MD monkey were on average smaller than controls. PRF estimation methods allow us to measure and follow in vivo how the properties of visual areas change as a function of cortical reorganization. Finally, if there is time, we will discuss a different method of pRF estimation that yields additional information.

Functional plasticity in human parietal visual field map clusters: Adapting to reversed visual input

Alyssa A. Brewer, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, B. Barton, Department of Cognitive Sciences, University of California, Irvine; L. Lin, AcuFocus, Inc., Irvine

Knowledge of the normal organization of visual field map clusters allows us to study potential reorganization within visual cortex under conditions that lead to a disruption of the normal visual inputs. Here we exploit the dynamic nature of visuomotor regions in posterior parietal cortex to examine cortical functional plasticity induced by a complete reversal of visual input in normal adult humans. We also investigate whether there is a difference in the timing or degree of a second adaptation to the left-right visual field reversal in adult humans after long-term recovery from the initial adaptation period. Subjects wore left-right reversing prism spectacles continuously for 14 days and then returned for a 4-day re-adaptation to the reversed visual field 1-9 months later. For each subject, we used population receptive field modeling fMRI methods to track the receptive field alterations within the occipital and parietal visual field map clusters across time points. The results from the first 14-day experimental period highlight a systematic and gradual shift of visual field coverage from contralateral space into ipsilateral space in parietal cortex throughout the prism adaptation period. After the second, 4-day experimental period, the data demonstrate a faster time course for both behavioral and cortical re-adaptation. These measurements in subjects with severely altered visual input allow us to identify the cortical regions subserving the dynamic remapping of cortical representations in response to altered visual perception and demonstrate that the changes in the maps produced by the initial long prism adaptation period persist over an extended time.


Distinguishing perceptual shifts from response biases

Friday, May 11, 3:30 – 5:30 pm

Organizer: Joshua Solomon, City University London

Presenters: Sam Ling, Vanderbilt; Keith Schneider, York University; Steven Hillyard, UCSD; Donald MacLeod, UCSD; Michael Morgan, City University London, Max Planck Institute for Neurological Research, Cologne; Mark Georgeson, Aston University

Symposium Description

Sensory adaptation was originally considered a low-level phenomenon involving measurable changes in sensitivity, but has been extended to include many cases where a change in sensitivity has yet to be demonstrated. Examples include adaptation to blur, temporal duration, and face identity. It has also been claimed that adaptation can be affected by attention to the adapting stimulus, and even that adaptation can be caused by imagining the adapting stimulus. The typical method of measurement in such studies involves a shift in the mean (p50) point of a psychometric function, obtained by the Method of Single Stimuli. In Signal Detection Theory, the mean is determined by a decision rule, as opposed to the slope, which is set by internal noise. The question that arises is how we can distinguish shifts in mean due to a genuine adaptation process from shifts due to a change in the observer's decision rule. This was a hot topic in the 1960s, for example in the discussion between Restle and Helson over Adaptation Level Theory, but it has become neglected, with the result that any shift in the mean of a psychometric function is now accepted as evidence for a perceptual shift. We think that it is time to revive this issue, given the theoretical importance of claims about adaptation being affected by imagination and attention, and the links that are claimed with functional brain imaging.

Presentations

Attention alters appearance

Sam Ling, Vanderbilt University

Maintaining veridicality seems to be of relatively low priority for the human brain; starting at the retina, our neural representations of the physical world undergo dramatic transformations, often forgoing an accurate depiction of the world in favor of augmented signals that are more optimal for the task at hand. Indeed, visual attention has been suggested to play a key role in this process, boosting the neural representations of attended stimuli, and attenuating responses to ignored stimuli. What, however, are the phenomenological consequences of attentional modulation?  I will discuss a series of studies that we and others have conducted, all converging on the notion that attention can actually change the visual appearance of attended stimuli across a variety of perceptual domains, such as contrast, spatial frequency, and color. These studies reveal that visual attention not only changes our neural representations, but that it can actually affect what we think we see.

Attention increases salience and biases decisions but does not alter appearance

Keith Schneider, York University

Attention enhances our perceptual abilities and increases neural activity. Still debated is whether an attended object, given its higher salience and more robust representation, actually looks any different than an otherwise identical but unattended object. One might expect that this question could be easily answered by an experiment in which an observer is presented two stimuli differing along one dimension, contrast for example, to one of which attention has been directed, and must report which stimulus has the higher apparent contrast. The problem with this sort of comparative judgment is that in the most informative case, that in which the two stimuli are equal, the observer is also maximally uncertain and therefore most susceptible to extraneous influence. An intelligent observer might report, all other things being equal, that the stimulus about which he or she has more information is the one with higher contrast. (And it doesn’t help to ask which stimulus has the lower contrast, because then the observer might just report the less informed stimulus!) In this way, attention can bias the decision mechanism and confound the experiment such that it is not possible for the experimenter to differentiate this bias from an actual change in appearance. It has been over ten years since I proposed a solution to this dilemma: an equality judgment task in which observers report whether the two stimuli are equal in appearance or not. This paradigm has been supported in the literature and has withstood criticisms. Here I will review these findings.

Electrophysiological Studies of the Locus of Perceptual Bias

Steven Hillyard, UCSD

The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century.  Recent psychophysical studies have reported that attention increases the apparent contrast of visual stimuli, but there is still a controversy as to whether this effect is due to the biasing of decisions as opposed to the altering of perceptual representations and changes in subjective appearance.  We obtained converging neurophysiological evidence while observers judged the relative contrast of Gabor patch targets presented simultaneously to the left and right visual fields following a lateralized cue (auditory or visual).  This non-predictive cueing boosted the apparent contrast of the Gabor target on the cued side in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset.  The magnitude of the enhanced neural response in ventral extrastriate visual cortex was positively correlated with perceptual reports of the cued-side target being higher in contrast.  These results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

Adaptive sensitivity regulation in detection and appearance

Donald MacLeod, UCSD

The visual system adapts to changing levels of stimulation with alterations of sensitivity that are expressed both in changes of detectability and in changes of appearance. The connection between these two aspects of sensitivity regulation is often taken for granted but need not be simple. Even the proportionality between ‘thresholds’ obtained by self-setting and thresholds based on reliability of detection (e.g. forced choice) is not generally expected, except under quite restricted conditions and unrealistically simple models of the visual system. I review some of the theoretical possibilities in relation to the available experimental evidence. Relatively simple mechanistic models provide opportunities for deviations from proportionality, especially if noise can enter the neural representation at multiple stages. The extension to suprathreshold appearance is still more precarious; yet remarkably, under some experimental conditions, proportionality with threshold sensitivities holds, in the sense that equal multiples of threshold match.

Observers can voluntarily shift their psychometric functions without losing sensitivity

Michael Morgan, City University London, and Max Planck Institute for Neurological Research, Cologne; Barbara Dillenburger and Sabine Raphael, Max Planck Institute for Neurological Research; Joshua A. Solomon, City University London

Psychometric sensory discrimination functions are usually modeled by cumulative Gaussian functions with just two parameters, their central tendency and their slope. These correspond to Fechner’s ‘constant’ and ‘variable’ errors, respectively. Fechner pointed out that even the constant error could vary over space and time and could masquerade as variable error. We wondered whether observers could deliberately introduce a constant error into their performance without loss of precision. In three-dot vernier and bisection tasks with the method of single stimuli, observers were instructed to favour one of the two responses when unsure of their answer. The slope of the resulting psychometric function was not significantly changed, despite a significant change in central tendency. Similar results were obtained when altered feedback was used to induce bias. We inferred that observers can adopt artificial response criteria without any significant increase in criterion fluctuation. These findings have implications for some studies that have measured perceptual ‘illusions’ by shifts in the psychometric functions of sophisticated observers.
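The two-parameter model described in this abstract can be sketched in a few lines. This is our own illustration, not material from the talk; the function name `psychometric` and the parameter values are hypothetical. It shows the key point: shifting the central tendency (Fechner’s constant error) moves the whole function sideways while leaving its slope (the variable error) untouched.

```python
from math import erf, sqrt

def psychometric(x, mu, sigma):
    """Cumulative Gaussian psychometric function: probability of one of
    the two responses at stimulus offset x, with central tendency mu
    (constant error) and slope parameter sigma (variable error)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Unbiased observer: point of subjective equality at x = 0.
p_unbiased = psychometric(0.0, mu=0.0, sigma=1.0)   # 0.5

# Observer who deliberately favours one response when unsure: the whole
# function shifts (mu changes), but sigma is unchanged.
p_biased = psychometric(0.0, mu=0.5, sigma=1.0)     # below 0.5

# Slope invariance: at matched offsets from each central tendency the
# two functions give identical values.
same_slope = psychometric(1.0, 0.0, 1.0) == psychometric(1.5, 0.5, 1.0)
```

Evaluating both functions at equal distances from their own central tendencies yields identical probabilities, which is the sense in which a deliberately introduced constant error costs no precision.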

Sensory, perceptual and response biases: the criterion concept in perception

Mark Georgeson, Aston University

Signal detection theory (SDT) established in psychophysics a crucial distinction between sensitivity (or discriminability, d′) and bias (or criterion) in the analysis of performance in sensory judgement tasks. SDT itself is agnostic about the origins of the criterion, but there seems to be a broad consensus favouring ‘response bias’ or ‘decision bias’. And yet, perceptual biases exist and are readily induced. The motion aftereffect is undoubtedly perceptual (compelling motion is seen on a stationary pattern), but its signature in psychophysical data is a shift in the psychometric function, indistinguishable from ‘response bias’. How might we tell the difference? I shall discuss these issues in relation to some recent experiments and modelling of adaptation to blur (Elliott, Georgeson & Webster, 2011). A solution might lie in dropping any hard distinction between perceptual shifts and decision biases. Perceptual mechanisms make low-level decisions. Sensory, perceptual and response criteria might be represented neurally in similar ways at different levels of the visual hierarchy, by biasing signals that are set by the task and by the history of stimuli and responses (Treisman & Williams, 1984). The degree of spatial localization over which the bias occurs might reflect its level in the visual hierarchy. Thus, given enough data, the dilemma (are aftereffects perceptual or due to response bias?) might be resolved in favour of such a multi-level model.
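The standard equal-variance SDT quantities this abstract refers to can be computed directly from hit and false-alarm rates. The sketch below is our own illustration (the function name `dprime_criterion` is hypothetical), using Python’s standard library: d′ is the difference of the z-transformed rates, and the criterion c is minus half their sum, so a pure shift in bias changes c while leaving d′ essentially intact.

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' and criterion c from
    hit and false-alarm rates."""
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Symmetric performance: d' near 2, criterion near 0.
d1, c1 = dprime_criterion(0.84, 0.16)

# Same sensitivity, shifted bias: d' stays close to d1, c moves away from 0.
d2, c2 = dprime_criterion(0.69, 0.07)
```

The point of the example is the one made in the abstract: the observed shift (c) is mathematically separable from sensitivity (d′), but the computation alone cannot say whether the shift is perceptual or decisional.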

Part-whole relationships in visual cortex

Friday, May 11, 1:00 – 3:00 pm

Organizer: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Presenters: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven; Charles E. Connor, Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University; Scott O. Murray, Department of Psychology, University of Washington; James R. Pomerantz, Department of Psychology, Rice University; Jacob Feldman, Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick; Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University

Symposium Description

With his famous paper on phi motion, Wertheimer (1912) launched Gestalt psychology, arguing that the whole is different from the sum of the parts. In fact, wholes were considered primary in perceptual experience, even determining what the parts are. Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? Are wholes constructed from combinations of the parts? If so, to what extent are the combinations additive, what does superadditivity really mean, and how does it arise along the visual hierarchy? How much of the combination process occurs in incremental feedforward iterations or horizontal connections, and at what stage does feedback from higher areas kick in? What happens to the representation of the lower-level parts when the higher-level wholes are perceived? Do they become enhanced or suppressed (‘explained away’)? Or do wholes occur before the parts, as argued by the Gestalt psychologists? But what does this global precedence really mean in terms of what happens where in the brain? Does the primacy of the whole only account for consciously perceived figures or objects, and are the more elementary parts still combined somehow during an unconscious step-wise processing stage? A century later, tools are available that were not at the Gestaltists’ disposal to address these questions.
In this symposium, we will take stock and try to provide answers from a diversity of approaches, including single-cell recordings from V4, posterior and anterior IT cortex in awake monkeys (Ed Connor, Johns Hopkins University), human fMRI (Scott Murray, University of Washington), human psychophysics (James Pomerantz, Rice University), and computational modeling (Jacob Feldman, Rutgers University). Johan Wagemans (University of Leuven) will introduce the theme of the symposium with a brief historical overview of the Gestalt tradition and a clarification of the conceptual issues involved. Shaul Hochstein (Hebrew University) will end with a synthesis of the current literature, in the framework of Reverse Hierarchy Theory. The scientific merit of addressing such a central issue, which has been around for over a century, from a diversity of modern perspectives and in light of the latest findings should be obvious. The celebration of the centennial anniversary of Gestalt psychology also provides an excellent opportunity to do so. We believe our line-up of speakers, addressing a set of closely related questions from a wide range of methodological and theoretical perspectives, promises to attract a large crowd, including students and faculty working in psychophysics, neurosciences and modeling. In comparison with other proposals taking this centennial anniversary as a window of opportunity, ours is probably more focused and allows for a more coherent treatment of a central Gestalt issue, one that has occupied vision science for a long time.

Presentations

Part-whole relationships in vision science: A brief historical review and conceptual analysis

Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Exactly 100 years ago, Wertheimer’s paper on phi motion (1912) effectively launched the Berlin school of Gestalt psychology. Arguing against elementalism and associationism, they maintained that experienced objects and relationships are fundamentally different from collections of sensations. Going beyond von Ehrenfels’s notion of Gestalt qualities, which involved one-sided dependence on sense data, true Gestalts are dynamic structures in experience that determine what will be wholes and parts. From the beginning, this two-sided dependence between parts and wholes was believed to have a neural basis. They spoke of continuous ‘whole-processes’ in the brain, and argued that research needed to try to understand these from top (whole) to bottom (parts) rather than the other way around. However, Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? In this talk, I will briefly review the Gestalt position and analyse the different notions of part and whole, and different views on part-whole relationships maintained in a century of vision science since the start of Gestalt psychology. This will provide some necessary background for the remaining talks in this symposium, which will all present contemporary views based on new findings.

Ventral pathway visual cortex: Representation by parts in a whole object reference frame

Charles E. Connor, Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Anitha Pasupathy, Scott L. Brincat, Yukako Yamane, Chia-Chun Hung

Object perception by humans and other primates depends on the ventral pathway of visual cortex, which processes information about object structure, color, texture, and identity.  Object information processing can be studied at the algorithmic, neural coding level using electrode recording in macaque monkeys.  We have studied information processing in three successive stages of the monkey ventral pathway:  area V4, PIT (posterior inferotemporal cortex), and AIT (anterior inferotemporal cortex).  At all three stages, object structure is encoded in terms of parts, including boundary fragments (2D contours, 3D surfaces) and medial axis components (skeletal shape fragments).  Area V4 neurons integrate information about multiple orientations to produce signals for local contour fragments.  PIT neurons integrate multiple V4 inputs to produce representations of multi-fragment configurations.  Even neurons in AIT, the final stage of the monkey ventral pathway, represent configurations of parts (as opposed to holistic object structure).  However, at each processing stage, neural responses are critically dependent on the position of parts within the whole object.  Thus, a given neuron may respond strongly to a specific contour fragment positioned near the right side of an object but not at all when it is positioned near the left.  This kind of object-centered position tuning would serve an essential role by representing spatial arrangement within a distributed, parts-based coding scheme. Object-centered position sensitivity is not imposed by top-down feedback, since it is apparent in the earliest responses at lower stages, before activity begins at higher stages.  Thus, while the brain encodes objects in terms of their constituent parts, the relationship of those parts to the whole object is critical at each stage of ventral pathway processing.

Long-range, pattern-dependent contextual effects in early human visual cortex

Scott O. Murray, Department of Psychology, University of Washington, Sung Jun Joo, Geoffrey M. Boynton

The standard view of neurons in early visual cortex is that they behave like localized feature detectors. We will discuss recent results that demonstrate that neurons in early visual areas go beyond localized feature detection and are sensitive to part-whole relationships in images. We measured neural responses to a grating stimulus (‘target’) embedded in various visual patterns as defined by the relative orientation of flanking stimuli. We varied whether or not the target was part of a predictable sequence by changing the orientation of distant gratings while maintaining the same local stimulus arrangement. For example, a vertically oriented target grating that is flanked locally with horizontal flankers (HVH) can be made to be part of a predictable sequence by adding vertical distant flankers (VHVHV). We found that even when the local configuration (e.g. HVH) around the target was kept the same there was a smaller neural response when the target was part of a predictable sequence (VHVHV). Furthermore, when making an orientation judgment of a ‘noise’ stimulus that contains no specific orientation information, observers were biased to ‘see’ the orientation that deviates from the predictable orientation, consistent with computational models of primate cortical processing that incorporate efficient coding principles. Our results suggest that early visual cortex is sensitive to global patterns in images in a way that is markedly different from the predictions of standard models of cortical visual processing and indicate an important role in coding part-whole relationships in images.

The computational and cortical bases for configural superiority

James R. Pomerantz, Department of Psychology, Rice University, Anna I. Cragin, Department of Psychology, Rice University; Kimberley D. Orsten, Department of Psychology, Rice University; Mary C. Portillo, Department of Social Sciences, University of Houston-Downtown

In the configural superiority effect (CSE; Pomerantz et al., 1977; Pomerantz & Portillo, 2011), people respond more quickly to a whole configuration than to any one of its component parts, even when the parts added to create a whole contribute no information by themselves.  For example, people discriminate an arrow from a triangle more quickly than a positive from a negative diagonal even when those diagonals constitute the only difference between the arrows and triangles.  How can a neural or other computational system be faster at processing information about combinations of parts (wholes) than about parts taken singly?  We consider the results of Kubilius et al. (2011) and discuss three possibilities: (1) direct detection of wholes through smart mechanisms that compute higher-order information without performing seemingly necessary intermediate computations; (2) the ‘sealed channel hypothesis’ (Pomerantz, 1978), which holds that part information is extracted prior to whole information in a feedforward manner but is not available for responses; and (3) a closely related reverse hierarchy model holding that conscious experience begins with higher cortical levels processing wholes, with parts becoming accessible to consciousness only after feedback to lower levels is complete (Hochstein & Ahissar, 2002).  We describe a number of CSEs and elaborate both on these mechanisms that might explain them and how they might be confirmed experimentally.

Computational integration of local and global form

Jacob Feldman, Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick, Manish Singh, Vicky Froyen

A central theme of perceptual theory, from the Gestaltists to the present, has been the integration of local and global image information. While neuroscience has traditionally viewed perceptual processes as beginning with local operators with small receptive fields before proceeding on to more global operators with larger ones, a substantial body of evidence now suggests that supposedly later processes can impose decisive influences on supposedly earlier ones, suggesting a more complicated flow of information. We consider this problem from a computational point of view. Some local processes in perceptual organization, like the organization of visual items into a local contour, can be well understood in terms of simple probabilistic inference models. But for a variety of reasons nonlocal factors such as global ‘form’ resist such simple models. In this talk I’ll discuss constraints on how form- and region-generating probabilistic models can be formulated and integrated with local ones. From a computational point of view, the central challenge is how to embed the corresponding estimation procedure in a locally-connected network-like architecture that can be understood as a model of neural computation.

The rise and fall of the Gestalt gist

Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University, Merav Ahissar

Reviewing the current literature, one finds physiological bases for Gestalt-like perception, but also much that seems to contradict the predictions of this theory. Some resolution may be found in the framework of Reverse Hierarchy Theory, which distinguishes between implicit processes, of which we are unaware, and explicit representations, which enter perceptual consciousness. It is the conscious percepts that appear to match Gestalt predictions: recognizing wholes even before the parts. We now need to study the processing mechanisms at each level and, importantly, the feedback interactions which equally affect and determine the plethora of representations that are formed, and to analyze how they determine conscious perception. Reverse Hierarchy Theory proposes that initial perception of the gist of a scene, including whole objects, categories and concepts, depends on rapid bottom-up implicit processes, which seem to follow (indeed, determine) Gestalt rules. Since lower-level representations are initially unavailable to consciousness, and may become available only with top-down guidance, perception seems to jump immediately to Gestalt conclusions. Nevertheless, vision in the blink of an eye is the result of many layers of processing, though introspection is blind to these steps, failing to see the trees within the forest. Later, slower perception, focusing on specific details, reveals the source of Gestalt processes, and destroys them at the same time. Details of recent results, including micro-genesis analyses, will be reviewed within the framework of Gestalt and Reverse Hierarchy theories.

Friday, May 11, 1:00 – 3:00 pm

Organizer: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Presenters: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven; Charles E. Connor,Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University; Scott O. Murray,Department of Psychology, University of Washington; James R. Pomerantz, Department of Psychology, Rice University; Jacob Feldman,Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick; Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University

Symposium Description

With his famous paper on phi motion, Wertheimer (1912) launched Gestalt psychology, arguing that the whole is different from the sum of the parts. In fact, wholes were considered primary in perceptual experience, even determining what the parts are. Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? Are wholes constructed from combinations of the parts? If so, to what extent are the combinations additive, what does superadditivity really mean, and how does it arise along the visual hierarchy? How much of the combination process occurs in incremental feedforward iterations or horizontal connections and at what stage does feedback from higher areas kick in? What happens to the representation of the lower-level parts when the higher-level wholes are perceived? Do they become enhanced or suppressed (�explained away�)? Or, are wholes occurring before the parts, as argued by Gestalt psychologists? But what does this global precedence really mean in terms of what happens where in the brain? Does the primacy of the whole only account for consciously perceived figures or objects, and are the more elementary parts still combined somehow during an unconscious step-wise processing stage? A century later, tools are available that were not at the Gestaltists� disposal to address these questions. 
In this symposium, we will take stock and try to provide answers from a diversity of approaches, including single-cell recordings from V4, posterior and anterior IT cortex in awake monkeys (Ed Connor, Johns Hopkins University), human fMRI (Scott Murray, University of Washington), human psychophysics (James Pomerantz, Rice University), and computational modeling (Jacob Feldman, Rutgers University). Johan Wagemans (University of Leuven) will introduce the theme of the symposium with a brief historical overview of the Gestalt tradition and a clarification of the conceptual issues involved. Shaul Hochstein (Hebrew University) will end with a synthesis of the current literature, in the framework of Reverse Hierarchy Theory. The scientific merit of addressing such a central issue, which has been around for over a century, from a diversity of modern perspectives and in light of the latest findings should be obvious. The celebration of the centennial anniversary of Gestalt psychology also provides an excellent opportunity to doing so. We believe our line-up of speakers, addressing a set of closely related questions, from a wide range of methodological and theoretical perspectives, promises to be attracting a large crowd, including students and faculty working in psychophysics, neurosciences and modeling. In comparison with other proposals taking this centennial anniversary as a window of opportunity, ours is probably more focused and allows for a more coherent treatment of a central Gestalt issue, which has been bothering vision science for a long time.

Presentations

Part-whole relationships in vision science: A brief historical review and conceptual analysis

Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Exactly 100 years ago, Wertheimer�s paper on phi motion (1912) effectively launched the Berlin school of Gestalt psychology. Arguing against elementalism and associationism, they maintained that experienced objects and relationships are fundamentally different from collections of sensations. Going beyond von Ehrenfels�s notion of Gestalt qualities, which involved one-sided dependence on sense data, true Gestalts are dynamic structures in experience that determine what will be wholes and parts. From the beginning, this two-sided dependence between parts and wholes was believed to have a neural basis. They spoke of continuous �whole-processes� in the brain, and argued that research needed to try to understand these from top (whole) to bottom (parts ) rather than the other way around. However, Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? In this talk, I will briefly review the Gestalt position and analyse the different notions of part and whole, and different views on part-whole relationships maintained in a century of vision science since the start of Gestalt psychology. This will provide some necessary background for the remaining talks in this symposium, which will all present contemporary views based on new findings.

Ventral pathway visual cortex: Representation by parts in a whole object reference frame

Charles E. Connor, Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Anitha Pasupathy, Scott L. Brincat, Yukako Yamane, Chia-Chun Hung

Object perception by humans and other primates depends on the ventral pathway of visual cortex, which processes information about object structure, color, texture, and identity.  Object information processing can be studied at the algorithmic, neural coding level using electrode recording in macaque monkeys.  We have studied information processing in three successive stages of the monkey ventral pathway:  area V4, PIT (posterior inferotemporal cortex), and AIT (anterior inferotemporal cortex).  At all three stages, object structure is encoded in terms of parts, including boundary fragments (2D contours, 3D surfaces) and medial axis components (skeletal shape fragments).  Area V4 neurons integrate information about multiple orientations to produce signals for local contour fragments.  PIT neurons integrate multiple V4 inputs to produce representations of multi-fragment configurations.  Even neurons in AIT, the final stage of the monkey ventral pathway, represent configurations of parts (as opposed to holistic object structure).  However, at each processing stage, neural responses are critically dependent on the position of parts within the whole object.  Thus, a given neuron may respond strongly to a specific contour fragment positioned near the right side of an object but not at all when it is positioned near the left.  This kind of object-centered position tuning would serve an essential role by representing spatial arrangement within a distributed, parts-based coding scheme. Object-centered position sensitivity is not imposed by top-down feedback, since it is apparent in the earliest responses at lower stages, before activity begins at higher stages.  Thus, while the brain encodes objects in terms of their constituent parts, the relationship of those parts to the whole object is critical at each stage of ventral pathway processing.

Long-range, pattern-dependent contextual effects in early human visual cortex

Scott O. Murray, Department of Psychology, University of Washington, Sung Jun Joo, Geoffrey M. Boynton

The standard view of neurons in early visual cortex is that they behave like localized feature detectors. We will discuss recent results that demonstrate that neurons in early visual areas go beyond localized feature detection and are sensitive to part-whole relationships in images. We measured neural responses to a grating stimulus (�target�) embedded in various visual patterns as defined by the relative orientation of flanking stimuli. We varied whether or not the target was part of a predictable sequence by changing the orientation of distant gratings while maintaining the same local stimulus arrangement. For example, a vertically oriented target grating that is flanked locally with horizontal flankers (HVH) can be made to be part of a predictable sequence by adding vertical distant flankers (VHVHV). We found that even when the local configuration (e.g. HVH) around the target was kept the same there was a smaller neural response when the target was part of a predictable sequence (VHVHV). Furthermore, when making an orientation judgment of a �noise� stimulus that contains no specific orientation information, observers were biased to �see� the orientation that deviates from the predictable orientation, consistent with computational models of primate cortical processing that incorporate efficient coding principles. Our results suggest that early visual cortex is sensitive to global patterns in images in a way that is markedly different from the predictions of standard models of cortical visual processing and indicate an important role in coding part-whole relationships in images.

The computational and cortical bases for configural superiority

James R. Pomerantz, Department of Psychology, Rice University, Anna I. Cragin, Department of Psychology, Rice University; Kimberley D. Orsten, Department of Psychology, Rice University; Mary C. Portillo, Department of Social Sciences, University of Houston�Downtown

In the configural superiority effect (CSE; Pomerantz et al., 1977; Pomerantz & Portillo, 2011), people respond more quickly to a whole configuration than to any one of its component parts, even when the parts added to create a whole contribute no information by themselves.  For example, people discriminate an arrow from a triangle more quickly than a positive from a negative diagonal even when those diagonals constitute the only difference between the arrows and triangles.  How can a neural or other computational system be faster at processing information about combinations of parts � wholes � than about parts taken singly?   We consider the results of Kubilius et al. (2011) and discuss three possibilities: (1) Direct detection of wholes through smart mechanisms that compute higher order information without performing seemingly necessary intermediate computations; (2) the �sealed channel hypothesis� (Pomerantz, 1978), which holds that part information is extracted prior to whole information in a feedforward manner but is not available for responses; and (3) a closely related reverse hierarchy model holding that conscious experience begins with higher cortical levels processing wholes, with parts becoming accessible to consciousness only after feedback to lower levels is complete (Hochstein & Ahissar, 2002).  We describe a number of CSEs and elaborate both on these mechanisms that might explain them and how they might be confirmed experimentally.

Computational integration of local and global form

Jacob Feldman, Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick, Manish Singh, Vicky Froyen

A central theme of perceptual theory, from the Gestaltists to the present, has been the integration of local and global image information. While neuroscience has traditionally viewed perceptual processes as beginning with local operators with small receptive fields before proceeding on to more global operators with larger ones, a substantial body of evidence now suggests that supposedly later processes can impose decisive influences on supposedly earlier ones, suggesting a more complicated flow of information. We consider this problem from a computational point of view. Some local processes in perceptual organization, like the organization of visual items into a local contour, can be well understood in terms of simple probabilistic inference models. But for a variety of reasons nonlocal factors such as global �form� resist such simple models. In this talk I’ll discuss constraints on how form- and region-generating probabilistic models can be formulated and integrated with local ones. From a computational point of view, the central challenge is how to embed the corresponding estimation procedure in a locally-connected network-like architecture that can be understood as a model of neural computation.

The rise and fall of the Gestalt gist

Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University, Merav Ahissar

Reviewing the current literature, one finds physiological bases for Gestalt-like perception, but also much that seems to contradict the predictions of this theory. Some resolution may be found in the framework of Reverse Hierarchy Theory, which divides between implicit processes, of which we are unaware, and explicit representations, which enter perceptual consciousness. It is the conscious percepts that appear to match Gestalt predictions, recognizing wholes even before the parts. We now need to study the processing mechanisms at each level, and, importantly, the feedback interactions which equally affect and determine the plethora of representations that are formed, and to analyze how they determine conscious perception. Reverse Hierarchy Theory proposes that initial perception of the gist of a scene (including whole objects, categories and concepts) depends on rapid bottom-up implicit processes, which seem to follow (and determine) Gestalt rules. Since lower-level representations are initially unavailable to consciousness, and may become available only with top-down guidance, perception seems to jump immediately to Gestalt conclusions. Nevertheless, vision at a blink of the eye is the result of many layers of processing, though introspection is blind to these steps, failing to see the trees within the forest. Later, slower perception, focusing on specific details, reveals the source of Gestalt processes, and destroys them at the same time. Details of recent results, including micro-genesis analyses, will be reviewed within the framework of Gestalt and Reverse Hierarchy theories.

 

What does fMRI tell us about brain homologies?

Friday, May 11, 1:00 – 3:00 pm

Organizer: Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology

Presenters: Martin Sereno, Department of Cognitive Science, UC San Diego; David Van Essen, Department of Anatomy and Neurobiology, Washington University School of Medicine; Hauke Kolster, Laboratorium voor Neurofysiologie en Psychofysiologie, Katholieke Universiteit Leuven Medical School; Jonathan Winawer, Psychology Department, Stanford University; Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology

Symposium Description

Over the past 20 years, functional magnetic resonance imaging (fMRI) has provided a great deal of knowledge about the functional organization of human visual cortex. In recent years, the development of fMRI techniques in non-human primates has enabled neuroscientists to directly compare the topographic organization and functional properties of visual cortical areas across species. These comparative studies have shown striking similarities (‘homologies’) between human and monkey visual cortex. Many visual cortical areas in human can be matched to homologous areas in monkey, though detailed cross-species comparisons have also shown specific variations in the visual feature selectivity of cortical areas and in the spatial arrangement of visual areas on the cortical sheet. Comparing cortical structures in human versus monkey provides a framework for generalizing results from invasive neurobiological studies in monkeys to humans. It also provides important clues for understanding the evolution of cerebral cortex in primates. In this symposium, we would like to highlight recent fMRI studies on the organization of visual cortex in human versus monkey. We will have 5 speakers. Each speaker will give a 25-minute talk (including 5 minutes of discussion time). Martin Sereno will introduce the concept of brain homology, elaborate on its importance, and evaluate technical limitations in addressing homology questions. He will then continue with some examples of cross-species comparison for retinotopic cortical areas. David Van Essen will describe recent progress in applying surface-based analysis and visualization methods that provide a powerful approach for comparisons among primate species, including macaque, chimpanzee, and human. Hauke Kolster will test the homology between visual areas in occipital cortex of human and macaque in terms of topological organization, functional characteristics, and population receptive field sizes.
Jonathan Winawer will review different organizational schemes for visual area V4 in human, relative to those in macaque. Reza Rajimehr will compare object-selective cortex (including face and scene areas) in human versus macaque. The symposium will be of interest to visual neuroscientists (faculty and students) and to a general audience, who will benefit from a series of integrated talks on the fundamental yet relatively neglected topic of brain homology.

Presentations

Evolution, taxonomy, homology, and primate visual areas

Martin Sereno, Department of Cognitive Science, UC San Diego

Evolution involves the repeated branching of lineages, some of which become extinct. The problem of determining the relationship between cortical areas within the brains of surviving branches (e.g., humans, macaques, owl monkeys) is difficult because of: (1) missing evolutionary intermediates, (2) different measurement techniques, (3) body size differences, and (4) duplication, fusion, and reorganization of brain areas. Routine invasive experiments are carried out in very few species (one loris, several New and Old World monkeys). The closest to humans are macaque monkeys. However, the last common ancestor of humans and macaques dates to more than 30 million years ago. Since then, New and Old World monkey brains have evolved independently from ape and human brains, resulting in complex mixes of shared and unique features. Evolutionary biologists are often interested in ‘shared derived’ characters, specializations from a basal condition that are peculiar to a species or grouping of species. These are important for classification (e.g., a brain feature unique to macaque-like monkeys). Evolutionary biologists also distinguish similarities due to inheritance (homology, e.g., MT) from similarities due to parallel or convergent evolution (homoplasy, e.g., layer 4A staining in humans and owl monkeys). By contrast with taxonomists, neuroscientists are usually interested in trying to determine which features are conserved across species (whether by inheritance or parallel evolution), indicating that those features may have a basic functional and/or developmental role. The only way to obtain either of these kinds of information is to examine data from multiple species.

Surface-based analyses of human, macaque, and chimpanzee cortical organization

David Van Essen, Department of Anatomy and Neurobiology, Washington University School of Medicine

Human and macaque cortex differ markedly in surface area (nine-fold), in their pattern of convolutions, and in the relationship of cortical areas to these convolutions.  Nonetheless, there are numerous similarities and putative homologies in cortical organization revealed by architectonic and other anatomical methods and more recently by noninvasive functional imaging methods.  There are also differences in functional organization, particularly in regions of rapid evolutionary expansion in the human lineage.  This presentation will highlight recent progress in applying surface-based analysis and visualization methods that provide a powerful general approach for comparisons among primate species, including the macaque, chimpanzee, and human. One major facet involves surface-based atlases that are substrates for increasingly accurate cortical parcellations in each species as well as maps of functional organization revealed using resting-state and task-evoked fMRI. Additional insights into cortical parcellations as well as evolutionary relationships are provided by myelin maps that have been obtained noninvasively in each species.  Together, these multiple modalities provide new insights regarding visual cortical organization in each species.  Surface-based registration provides a key method for making objective interspecies comparisons, using explicit landmarks that represent known or candidate homologies between areas.  Recent algorithmic improvements in landmark-based registration, coupled with refinements in the available set of candidate homologies, provide a fresh perspective on primate cortical evolution and species differences in the pattern of evolutionary expansion.

Comparative mapping of visual areas in the human and macaque occipital cortex

Hauke Kolster, Laboratorium voor Neurofysiologie en Psychofysiologie, Katholieke Universiteit Leuven Medical School

The introduction of functional magnetic resonance imaging (fMRI) as a non-invasive imaging modality has enabled the study of human cortical processes with high spatial specificity and allowed for a direct comparison of the human and the macaque within the same modality. This presentation will focus on the phase-encoded retinotopic mapping technique, which is used to establish parcellations of cortex consisting of distinct visual areas. These parcellations may then be used to test for similarities between the cortical organizations of the two species. Results from ongoing work will be presented with regard to retinotopic organization of the areas as well as their characterizations by functional localizers and population receptive field (pRF) sizes. Recent developments in fMRI methodology, such as improved resolution and stimulus design, as well as analytical pRF methods, have resulted in higher quality of the retinotopic field maps and revealed visual field-map clusters as new organizational principles in the human and macaque occipital cortex. In addition, measurements of population-average neuronal properties have the potential to establish a direct link between fMRI studies in the human and single-cell studies in the monkey. An inter-subject registration algorithm will be presented, which uses a spatial correlation of the retinotopic and the functional test data to directly compare the functional characteristics of a set of putative homologous areas across subjects and species. The results indicate strong similarities between twelve visual areas in occipital cortex of human and macaque in terms of topological organization, functional characteristics and pRF sizes.
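The pRF measurements discussed here rest on a standard forward model: each voxel's response is the overlap of an isotropic 2D Gaussian (whose center and size are the pRF parameters) with the stimulus aperture. A minimal sketch of that spatial stage, with invented grid and parameter values rather than anything from the talk:

```python
import numpy as np

def prf_response(stim, xs, ys, x0, y0, sigma):
    """Overlap of an isotropic 2D Gaussian pRF (center x0, y0; size sigma)
    with a binary stimulus aperture, normalized to [0, 1]. No HRF
    convolution or amplitude scaling; this is only the spatial stage."""
    X, Y = np.meshgrid(xs, ys)
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return float((stim * g).sum() / g.sum())

xs = ys = np.linspace(-10, 10, 101)          # degrees of visual angle
aperture = np.zeros((101, 101))
aperture[:, :50] = 1                         # stimulus covers the left hemifield

inside = prf_response(aperture, xs, ys, -5.0, 0.0, 2.0)   # pRF under the stimulus
outside = prf_response(aperture, xs, ys, 5.0, 0.0, 2.0)   # pRF in the blank hemifield
```

Fitting x0, y0, and sigma to measured time series is what yields the pRF-size estimates compared across species in the talk.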

The fourth visual area: A question of human and macaque homology

Jonathan Winawer, Psychology Department, Stanford University

The fourth visual area, V4, was identified in rhesus macaque and described in a series of anatomical and functional studies (Zeki 1971, 1978). Because of its critical role in seeing color and form, V4 has remained an area of intense study. The identification of a color-sensitive region on the ventral surface of human visual cortex, anterior to V3, suggested a possible homology between this area, labeled ‘Human V4’ or ‘hV4’ (McKeefry, 1997; Wade, 2002), and macaque V4 (mV4). Both areas are retinotopically organized. Homology is not uniformly accepted because of substantial differences in spatial organization, though these differences have been questioned (Hansen, 2007). Macaque V4 is a split hemifield map, with parts adjacent to the ventral and dorsal portions of the V3 map. In contrast, some groups have reported that hV4 falls wholly on ventral occipital cortex. Over the last 20 years, several organizational schemes have been proposed for hV4 and surrounding maps. In this presentation I review evidence for the different schemes, with emphasis on recent findings showing that an artifact of functional MRI caused by the transverse sinus afflicts measurements of the hV4 map in many (but not all) hemispheres. By focusing on subjects in whom the hV4 map is relatively remote from the sinus artifact, we show that hV4 is best described as a single, unbroken map on the ventral surface representing the full contralateral visual hemifield. These results support claims of substantial deviations from homology between human and macaque in the organization of the fourth visual map.

Spatial organization of face and scene areas in human and macaque visual cortex

Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology

The primate visual cortex has a specialized architecture for processing specific object categories such as faces and scenes. For instance, inferior temporal cortex in macaque contains a network of discrete patches for processing face images. Direct comparison between human and macaque category-selective areas shows that some areas in one species have missing homologues in the other species. Using fMRI, we identified a face-selective region in anterior temporal cortex in human and a scene-selective region in posterior temporal cortex in macaque, which correspond to homologous areas in the other species. A surface-based analysis of cortical maps showed a high degree of similarity in the spatial arrangement of face and scene areas between human and macaque. This suggests that neighborhood relations between functionally-defined cortical areas are evolutionarily conserved – though the topographic relation between the areas and their underlying anatomy (gyral/sulcal pattern) may vary from one species to another.

 

Pulvinar and Vision: New insights into circuitry and function

Friday, May 11, 1:00 – 3:00 pm

Organizer: Vivien A. Casagrande, PhD, Department of Cell & Developmental Biology, Vanderbilt Medical School, Nashville, TN

Presenters: Gopathy Purushothaman, Department of Cell & Developmental Biology, Vanderbilt Medical School; Christian Casanova, School of Optometry, Université de Montréal, Montreal, Canada; Heywood M. Petry, Department of Psychological & Brain Sciences, University of Louisville; Robert H. Wurtz, NIH-NEI, Lab of Sensorimotor Research; Sabine Kastner, MD, Department of Psychology, Center for Study of Brain, Mind and Behavior, Green Hall, Princeton; David Whitney, Department of Psychology, The University of California, Berkeley

Symposium Description

The thalamus is considered the gateway to the cortex. Yet even the late Ted Jones, who wrote two huge volumes on the organization of the thalamus, remarked that we know amazingly little about many of its components and their role in cortical function. This is despite the fact that a major two-way highway connects all areas of cortex with the thalamus. The pulvinar is the largest thalamic nucleus in mammals; it progressively enlarged during primate evolution, dwarfing the rest of the thalamus in humans. The pulvinar also remains the most mysterious of thalamic nuclei in terms of its function. This symposium brings together six speakers from quite different perspectives who, using tools from anatomy, neurochemistry, physiology, neuroimaging and behavior, will highlight intriguing recent insights into the structure and function of the pulvinar. The speakers will jointly touch on: 1) the complexity of the architecture, connections and neurochemistry of the pulvinar; 2) potential species similarities and differences in the pulvinar's role in transmitting visual information from subcortical visual areas to cortical areas; 3) the role of the pulvinar in eye movements and in saccadic suppression; 4) the role of the pulvinar in regulating cortico-cortical communication between visual cortical areas; and finally, 5) converging ideas on the mechanisms that might explain the role of the pulvinar under the larger functional umbrella of visual salience and attention. Specifically, the speakers will address the following issues. Purushothaman and Casanova will outline contrasting roles for the pulvinar in influencing visual signals in early visual cortex in primates and non-primates, respectively. Petry and Wurtz will describe the organization and the potential role of retino-tectal inputs to the pulvinar, and that of pulvinar projections to the middle temporal (MT/V5) visual area in primates and its equivalent in non-primates.
Wurtz will also consider the role of the pulvinar in saccadic suppression. Kastner will describe the role of the pulvinar in regulating information transfer between cortical areas in primates trained to perform an attention task. Whitney will examine the role of the pulvinar in human visual attention and perceptual discrimination. This symposium should attract a wide audience of Vision Sciences Society (VSS) participants, as the function of the thalamus is key to understanding cortical organization. Studies of the pulvinar and its role in vision have seen a renaissance given the new technologies available to reveal its function. The goal of this session will be to provide the VSS audience with a new appreciation of the role of the thalamus in vision.

Presentations

Gating of the Primary Visual Cortex by Pulvinar for Controlling Bottom-Up Salience

Gopathy Purushothaman, PhD, Department of Cell & Developmental Biology, Vanderbilt Medical School; Roan Marion, Keji Li and Vivien A. Casagrande, Vanderbilt University

The thalamic nucleus pulvinar has been implicated in the control of visual attention. Its reciprocal connections with both frontal and sensory cortices can coordinate top-down and bottom-up processes for selective visual attention. However, pulvino-cortical neural interactions are little understood. We recently found that the lateral pulvinar (PL) powerfully controls stimulus-driven responses in the primary visual cortex (V1). Reversibly inactivating PL abolished visual responses in supragranular layers of V1. Excitation of PL neurons responsive to one region of visual space increased V1 responses to that region 4-fold and decreased V1 responses to the surrounding region 3-fold. Glutamate agonist injection in LGN increased V1 activity 8-fold and induced an excitotoxic lesion of LGN; subsequently injecting the glutamate agonist into PL increased V1 activity 14-fold. Spontaneous activity in PL and V1 following visual stimulation was strongly coupled and selectively entrained at the stimulation frequency. These results suggest that PL-V1 interactions are well suited to control bottom-up salience within a competitive cortico-pulvino-cortical network for selective attention.

Is The Pulvinar Driving or Modulating Responses in the Visual Cortex?

Christian Casanova, PhD, School of Optometry, Université de Montréal, Montreal, Canada; Matthieu Vanni, Reza F. Abbas & Sébastien Thomas, Visual Neuroscience Laboratory, School of Optometry, Université de Montréal, Montreal, Canada

Signals from lower cortical areas are not only transferred directly to higher-order cortical areas via cortico-cortical connections but also indirectly through cortico-thalamo-cortical projections. One step toward the understanding of the role of transthalamic corticocortical pathways is to determine the nature of the signals transmitted between the cortex and the thalamus. Are they strictly modulatory, i.e. are they modifying the activity in relation to the stimulus context and the analysis being done in the projecting area, or are they used to establish basic functional characteristics of cortical cells?  While the presence of drivers and modulators has been clearly demonstrated along the retino-geniculo-cortical pathway, it is not known whether such distinction can be made functionally in pathways involving the pulvinar. Since drivers and modulators can exhibit a different temporal pattern of response, we measured the spatiotemporal dynamics of voltage sensitive dyes activation in the visual cortex following pulvinar electrical stimulation in cats and tree shrews. Stimulation of pulvinar induced fast and local responses in extrastriate cortex. In contrast, the propagated waves in the primary visual cortex (V1) were weak in amplitude and diffuse. Co-stimulating pulvinar and LGN produced responses in V1 that were weaker than the sum of the responses evoked by the independent stimulation of both nuclei. These findings support the presence of drivers and modulators along pulvinar pathways and suggest that the pulvinar can exert a modulatory influence in cortical processing of LGN inputs in V1 while it mainly provides driver inputs to extrastriate areas, reflecting the different connectivity patterns.

What is the role of the pulvinar nucleus in visual motion processing?

Heywood M. Petry, Department of Psychological & Brain Sciences, University of Louisville; Martha E. Bickford, Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine

To effectively interact with our environment, body movements must be coordinated with the perception of visual movement. We will present evidence that regions of the pulvinar nucleus that receive input from the superior colliculus (tectum) may be involved in this process. We have chosen the tree shrew (Tupaia belangeri, a prototype of early primates) as our animal model because tectopulvinar pathways are particularly enhanced in this species, and our psychophysical experiments have revealed that tree shrews are capable of accurately discriminating small differences in the speed and direction of moving visual displays. Using in vivo electrophysiological recording techniques to test receptive field properties, we found that pulvinar neurons are responsive to moving visual stimuli, and most are direction selective. Using anatomical techniques, we found that tectorecipient pulvinar neurons project to the striatum, amygdala, and temporal cortical areas homologous to the primate middle temporal area, MT/V5. Using in vitro recording techniques, immunohistochemistry and stereology, we found that tectorecipient pulvinar neurons express more calcium channels than neurons in other thalamic nuclei and thus display a higher propensity to fire with bursts of action potentials, potentially providing a mechanism to effectively coordinate the activity of cortical and subcortical pulvinar targets. Collectively, these results suggest that the pulvinar nucleus may relay visual movement signals from the superior colliculus to subcortical brain regions to guide body movements, and simultaneously to the temporal cortex to modify visual perception as we move through our environment.

One message the pulvinar sends to cortex

Robert H. Wurtz, NIH-NEI, Lab of Sensorimotor Research; Rebecca Berman, NIH-NEI, Lab of Sensorimotor Research

The pulvinar has long been recognized as a way station on a second visual pathway to the cerebral cortex. This identification has largely been based on the pulvinar's connections, which are appropriate for providing visual information to multiple regions of visual cortex from subcortical areas. What is little known is what information the pulvinar actually conveys, especially in the intact, functioning visual system. Using the techniques of combined anti- and orthodromic stimulation, we have identified one pathway through the pulvinar that extends from the superficial visual layers of the superior colliculus (SC) through the inferior pulvinar (principally PIm) to cortical area MT. We have now explored what this pathway might convey to cortex, concentrating first on a modulation of visual processing first seen in SC: the suppression of visual responses during saccades. We have been able to replicate the previous observations of the suppression in SC and in MT, and now show that PIm neurons are similarly suppressed. We have then inactivated SC and shown that the suppression in MT is reduced. While we do not know all of the signals conveyed through this pathway to cortex, we do have evidence for one: the suppression of vision during saccades. This signal is neither a visual nor a motor signal but conveys the action of an internal motor signal on visual processing. Furthermore, combining our results in the behaving monkey with recent experiments in mouse brain slices (Phongphanphanee et al. 2011) provides a complete circuit from brainstem to cortex for conveying this suppression.

Role of the pulvinar in regulating information transmission between cortical areas

Sabine Kastner, MD, Department of Psychology, Center for Study of Brain, Mind and Behavior, Green Hall, Princeton; Yuri B. Saalman, Princeton Neuroscience Institute, Princeton University

Recent studies suggest that the degree of neural synchrony between cortical areas can modulate their information transfer according to attentional needs. However, it is not clear how two cortical areas synchronize their activities. Directly connected cortical areas are generally also indirectly connected via a thalamic nucleus, the pulvinar. We hypothesized that the pulvinar helps synchronize activity between cortical areas, and tested this by simultaneously recording from the pulvinar, V4, TEO and LIP of macaque monkeys performing a spatial attention task. Electrodes targeted interconnected sites between these areas, as determined by probabilistic tractography on diffusion tensor imaging data. Spatial attention increased synchrony between the cortical areas in the beta frequency range, in line with an increased causal influence of the pulvinar on the cortex at the same frequencies. These results suggest that the pulvinar coordinates activity between cortical areas to increase the efficacy of cortico-cortical transmission.
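The band-limited synchrony at issue can be illustrated with a toy simulation (not the study's actual LFP analysis; every signal parameter here is invented): two noisy signals sharing a common 20 Hz drive show high spectral coherence in the beta band but only baseline coherence at higher frequencies.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000.0                               # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)              # 10 s of simulated "recording"
shared = np.sin(2 * np.pi * 20 * t)       # common 20 Hz (beta-band) drive
x = shared + rng.standard_normal(t.size)  # e.g., a pulvinar site
y = shared + rng.standard_normal(t.size)  # e.g., a cortical site

# Welch-averaged magnitude-squared coherence between the two signals
f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
beta_coh = Cxy[(f >= 15) & (f <= 25)].max()   # peaks near 20 Hz
gamma_coh = Cxy[(f >= 60) & (f <= 80)].max()  # baseline only
```

Plain coherence is symmetric; the directional (causal) influences reported in the study require asymmetric measures such as Granger causality.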

Visual Attention Gates Spatial Coding in the Human Pulvinar

David Whitney, The University of California, Berkeley; Jason Fischer, The University of California, Berkeley

Based on the pulvinar's widespread connectivity with the visual cortex, as well as with putative attentional source regions in the frontal and parietal lobes, the pulvinar is suspected to play an important role in visual attention. However, there remain many hypotheses about the pulvinar's specific function. One hypothesis is that the pulvinar may play a role in filtering distracting stimuli when they are actively ignored. Because it remains unclear whether this is the case, how this might happen, or what the fate of the ignored objects is, we sought to characterize the spatial representation of visual information in the human pulvinar for equally salient attended and ignored objects presented simultaneously. In an fMRI experiment, we measured the spatial precision with which attended and ignored stimuli were encoded in the pulvinar, and we found that attention completely gated position information: attended objects were encoded with high spatial precision, but there was no measurable spatial encoding of actively ignored objects. This is despite the fact that the attended and ignored objects were identical and present simultaneously, and both attended and ignored objects were represented with great precision throughout the visual cortex. These data support a role for the pulvinar in distractor filtering and reveal a possible mechanism: by modulating the spatial precision of stimulus encoding, signals from competing stimuli can be suppressed in order to isolate behaviorally relevant objects.
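The gating result can be caricatured with a toy encoding model, in which attention acts as a multiplicative gain on position-tuned responses and position is read out by a center-of-mass decoder: with near-zero gain (the "ignored" condition) the decode collapses into noise. All tuning, gain, and noise values are invented for illustration; this is not the authors' fMRI analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(-10, 10, 50)  # preferred positions of 50 tuned units

def decode_error(true_pos, gain, sigma=3.0, noise=0.2, trials=200):
    """Mean absolute error of a center-of-mass position decode when
    Gaussian-tuned responses are scaled by an attentional gain."""
    errs = []
    for _ in range(trials):
        r = gain * np.exp(-(centers - true_pos) ** 2 / (2 * sigma ** 2))
        r = np.clip(r + noise * rng.standard_normal(centers.size), 0, None)
        est = (centers * r).sum() / (r.sum() + 1e-9)
        errs.append(abs(est - true_pos))
    return float(np.mean(errs))

err_attended = decode_error(4.0, gain=1.0)   # full gain: precise decode
err_ignored = decode_error(4.0, gain=0.05)   # gated gain: decode collapses
```

The point of the caricature is that a purely multiplicative gain suffices to abolish readable position information without silencing the units entirely.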

 

 

Vision Sciences Society