10th Annual Dinner and Demo Night

Monday, May 14, 2012, 7:00 – 10:00 pm

Buffet Dinner: 7:00 – 9:00 pm, Vista Ballroom, Sunset & Vista Decks, and Mangrove Pool
Demos: 7:30 – 10:00 pm, Royal Palm 4-5, Acacia Meeting Rooms, Cypress

Please join us Monday evening for the 10th Annual VSS Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year, Gideon Caplovitz, Arthur Shapiro, Dejan Todorovic, and Maryam Vaziri Pashkam are co-curators for Demo Night.

Exciting News: Two prizes, sponsored by the journal Perception, will be awarded to the best demos. Please find Pete Thompson, Tim Meese, or Amye Kenall for a ballot and cast your vote.

A buffet dinner is served in the Vista Ballroom and on the Sunset Deck and Mangrove Pool area. Demos are located upstairs on the ballroom level in the Royal Palm 4-5 and Acacia Meeting Rooms.

Some exhibitors have also prepared special demos for Demo Night.

Demo Night is free for all registered VSS attendees. Meal tickets are not required, but you must wear your VSS badge for entry to the Dinner Buffet. Guests and family members of all ages are welcome to attend the demos but must purchase a ticket for dinner. You can register your guests at any time during the meeting at the VSS Registration Desk, located in the Royal Palm Foyer. A desk will also be set up at the entrance to the dinner in the Vista Ballroom at 6:30 pm.

Guest prices:  Adults: $25, Youth (6-12 years old): $10, Children under 6: free

The Looking Glass Motion Effect

Kenneth Brecher, Boston University
A new subjective motion effect utilizing recently designed fully vectorized color images will be displayed. This effect is based on one of 9 screen prints originally created in 1966 by British artist Peter Sedgley that he called the "Looking Glass Suite".

The phantom spokes illusion

Jeffrey Mulligan, NASA Ames Research Center
When a regular array of small bright dots is rotated in the image plane, dark ephemeral spoke-like bands are seen, radiating from the instantaneous center of rotation. The effect is easily observed with a common plastic diffusing sheet for fluorescent lighting.

Spin the wheel and lose the spatial relationships.

Alex Holcombe, University of Sydney
With arrays of colored discs moving together, at very slow speeds it is easy to see which discs are adjacent. Increase the speed to discover the point at which you can no longer perceive the spatial relationships among the discs. Is this speed the same as your attentional tracking speed limit?

The Money Business Illusion

Anthony S. Barnhart, Arizona State University
The Money Business Illusion demonstrates how time-tested techniques employed in stage entertainment can be combined with standard psychophysical tasks from the laboratory to create ecologically valid stimuli for empirical research.

The Spinning Chair of Motion Perception

Kyle Gagnon, Michael Geuss, Jonathan Butner, Tom Malloy, Jeanine Stefanucci, University of Utah
We present a visual display of a flow of black and white dots. The dots appear to flow like a wave in one direction. After observers spin in a chair to alter their natural eye movements, the dots appear to flow in the opposite direction. We suggest that spinning in the chair changes the natural frequency of eye movements, changing the coupling ratio between the eye movements and the retinal image and ultimately changing the direction and rate of perceived motion.

The Anorthoscope and Kinetic Anamorphosis

Patrick Mor, Gideon Paul Caplovitz, University of Nevada, Reno
Here we bring to life this classic apparatus and perceptual effect developed by Joseph Plateau in the 1830s.

Continuous Transilience Induced Blindness

Seiichiro Naito, Makoto Katsumura & Ryo Shohara, Human and Information Science, Tokai University
We demonstrate the Continuous Transilience Induced Blindness, an enhanced variant of Motion-Induced Blindness (MIB).

Efficiency of motion perception from dynamic stereo cues

Anshul Jain, Qasim Zaidi, Graduate Center for Vision Research, SUNY College of Optometry
Observers will be able to measure how efficient they are (compared to an optimal observer) at discriminating the global rotation direction of a deforming, disparity-defined 3D shape when the local motions are entirely in depth (orthogonal to the rotation), and when the local motions are in the direction opposite to the global shape rotation.

Beuchet Chair

Peter Thompson, Rob Stone, University of York
Make your friends look small – just sit them on the Beuchet chair. The demonstration is akin to the Ames room but much more compact. And our version is portable and ideal for classroom demonstrations.

Eyeglass Reversal

Songjoo Oh, Department of Psychology, Seoul National University
People are familiar with stimuli such as the Necker Cube that lead to perceptual reversals. Unfortunately, constructing physical versions of such stimuli can be challenging. I will show that one's own eyeglasses are a very convenient object for experiencing perceptual reversals. In this demonstration, a pair of ordinary eyeglasses viewed from the inside appears to face outward. Please bring your own eyeglasses and enjoy the fun!

The Magic Wand Illusion

Christopher Tyler, Smith-Kettlewell
In the dynamic wand effect, an image that is the same color as its background is revealed by wiping an object beneath it. It is a strictly dynamic illusion that requires the integration of the revealed contours over time in order to resolve the overall image structure.

A display blank triggers a reversal of KDE

Masahiro Ishii, Sapporo City University
When a set of randomly positioned dots moves on a screen with motion paths that are projections of rigid 3D motion, we perceive an impression of depth. The object appears to reverse in depth at irregular intervals, outside of conscious control. We demonstrate that a blank in the presentation triggers a reversal.

Key object feature dimensions modulate texture filling-in

Chao Chaang Mao, National Yang-Ming University, Institute of Neuroscience and Brain Research Center, Taipei, Taiwan
In this demo, we show that filling-in is faster when the background and target textures share the same features along a key dimension (the 'same' condition) than when they have opposing features (the 'different' condition).

‘Pub Vision’

Peter Thompson, Rob Stone, University of York
Simple hands-on demonstrations that you can do in the pub.

Stereopsis with one eye and a pencil

Dhanraj Vishwanath, University of St. Andrews
The impression of stereopsis is generated by viewing a photograph with one eye while fixating a pencil tip.

Controlling material appearance with spatial frequency manipulations

Martin Giesel, Qasim Zaidi, Graduate Center for Vision Research, SUNY College of Optometry
Observers will be able to interactively manipulate roughness, volume and thickness of fabrics and other materials by changing the energy in bands of image frequencies. They will also see how adaptation to noise filtered into specific spatial frequency bands changes the perception of corresponding material properties.

Carrots or Cheetos: Material appearance under monochromatic light

Bei Xiao, Hanhan Wei, Xiaodan Jia, Edward Adelson, Brain and Cognitive Sciences, Massachusetts Institute of Technology
In this demo, we display translucent objects under a monochromatic light source (low-pressure sodium light) or a broad-band light source. We show that a translucent object, such as a bar of soap, looks more opaque under monochromatic light than under broad-band light. In addition, we explore how material perception of various objects is distorted under monochromatic light.

An Aftereffect Based on Texture Element Ratios

Anna Kosovicheva, Benjamin Wolfe, University of California, Berkeley
We present an aftereffect based on adaptation to the ratio of two different types of texture elements. We show the effect for textures defined by color, luminance, motion, and simple figures.

General object constancy

Yury Petrov, Jiehui Qian, Northeastern University
We will present simultaneous illusions of size, contrast, and depth created by optic flow. The illusions manifest what we call the phenomenon of general object constancy: the brain accounts for the effects of viewing distance in order to create a percept of the object's true appearance, including its size, contrast, and depth profile.

Attentional influences on bi-stable afterimages

Eric Reavis, Peter J. Kohler, Peter U. Tse, Dartmouth College
Attention constantly shapes our perceptual experience. See this for yourself, as you use your attention to modulate your perception of bistable afterimages.

Touching and interpreting hallucinated patterns in dynamic visual noise

Justin Jungé, Jordan Suchow, George Alvarez, Harvard University
We present a display of dynamic colorful noise that reliably produces several illusions. The display appears to interact directly with objects held and moved in front of it, across a range of stimulus properties and viewing distances (MacKay, 1965). Even without partial occlusion, the display triggers multiple interpretations that persist for long durations and which can be influenced by attention and intention.

Lack of volumetric stereo neon spreading and top-down defeating of stereo

Eric Altschuler (New Jersey Medical School), Abigail Huang (NJMS), Elizabeth Seckel (UCSD), Alice Hon (NJMS), Xintong Li (NJMS), VS Ramachandran (UCSD)
Using stereograms defined by illusory contours, we show that there is no volumetric neon spreading in stereo even though stereo illusory contours and surfaces are seen. Furthermore, the stereo can be subjectively destroyed by top-down imagery; a stereo illusory pyramid can be made to lose its apex simply by seeing the whole pyramid through illusory holes ("Swiss cheese").

Motion from Structure in Stereograms

Benjamin Backus, Graduate Center for Vision Research, SUNY College of Optometry
You’ve probably noticed this yourself: in a stereogram, objects with different binocular disparities appear to move when you move your head. Near objects move with your head, as expected from geometry. Come to our talk and then explore details of this phenomenon yourself at the demo.

Diamonds Move Forever

Oliver Flynn, Arthur Shapiro, American University
A stationary diamond appears to move continuously in a single direction. The luminance levels of the stationary background and the stationary edges that surround the diamond modulate in time. The relative phase of modulation creates motion information.

Color wagon wheels

William Kistler, Arthur Shapiro, American University
We show a series of illusions that arise when colors are added to the wagon wheel illusion. The color wagon wheel demonstrates methods for separating different motion responses, and shows how these responses depend on the contrast between objects and between the objects and the background.

Explaining Brightness illusions with Adobe Photoshop’s high pass filter

Erica Dixon, Arthur Shapiro, American University
In brightness phenomena, physically identical patches appear to have different brightnesses depending on their respective backgrounds. Here I will use Adobe Photoshop's high-pass filter to demonstrate that most of the differences observed in brightness illusions correspond to physical properties of the image once low spatial frequency content is removed.
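The same manipulation can be reproduced outside Photoshop. Below is a minimal sketch (our illustration, not the presenter's material), assuming Python with NumPy and SciPy: a Gaussian high-pass filter is applied to a toy simultaneous-contrast image, and two physically identical patches are compared before and after filtering. The image layout, patch coordinates, and filter width are arbitrary placeholders.

# Illustrative sketch: remove low spatial frequencies and compare
# two physically identical gray patches on different backgrounds.
import numpy as np
from scipy.ndimage import gaussian_filter

def high_pass(image, sigma):
    # Subtract a Gaussian-blurred copy to remove low spatial frequencies.
    return image - gaussian_filter(image, sigma)

# Toy simultaneous-contrast display: dark field on the left, light field
# on the right, with an identical mid-gray patch on each.
img = np.zeros((200, 400))
img[:, 200:] = 1.0
img[80:120, 80:120] = 0.5    # patch on the dark surround
img[80:120, 280:320] = 0.5   # patch on the light surround

hp = high_pass(img, sigma=15.0)

# The patches are identical in the original image; in the high-pass image
# their residual values differ in the direction of the perceived illusion.
print(img[80:120, 80:120].mean(), img[80:120, 280:320].mean())   # equal
print(hp[80:120, 80:120].mean(), hp[80:120, 280:320].mean())     # unequal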

Your Mind’s Eye

Al Seckel, Elizabeth Seckel, UC San Diego
Your Mind's Eye is an educational application featuring perceptual illusions for both mobile and tablet platforms. Come control critical parameters, thereby revealing the hidden constraints of the perceptual system in a dramatic and informative way. The application is augmented by movies of perceptual effects, both artistic and scientific. Each illusion is accompanied by explanatory text. Ideal for researchers and teachers.

Consumer Priced Immersive Virtual Reality with Kinect and Sony 3D Goggles

Michael Schaletzki, Matthias Pusch, Paul Elliott, WorldViz
Experience a new high-quality consumer priced immersive standalone VR system. Based on the WorldViz Vizard VR software, the system comes with vivid OLED display technology, 1280×720 resolution per eye, 52 degrees field-of-view, Kinect and inertial body tracking, rapid app development tools, a fun app starter kit, support & training.

VPixx 3D Survivor

Peter April, VPixx
A demonstration of 3D video projection, adapted from our own response-time game from past years. We will be handing out passive 3D glasses as people enter the room, and will be giving away prizes to the players with the fastest reaction times.

A Nomadic HMD Experience Without Carrying a Computer

Yuval Boger, Meredith Zanelotti, Sensics
We will demonstrate a battery-operated, wireless, high-definition HMD with in-band head tracking.

VSS@ARVO 2012

Visual Rehabilitation

Time: Wednesday, May 9, 2012, 12:00 – 1:30 pm, Room 315 (Fort Lauderdale Convention Center)
Chair: Pascal Mamassian, University of Glasgow
Speakers:
Dennis Levi, School of Optometry, University of California, Berkeley
Krystel R. Huxlin, Flaum Eye Institute, University of Rochester
Arash Sahraie, College of Life Sciences and Medicine, University of Aberdeen

Every year, VSS and ARVO collaborate in a symposium – VSS at ARVO or ARVO at VSS – designed to highlight and present work from one society at the annual meeting of the other. This year's symposium is at ARVO.

Experience-dependent plasticity is closely linked with the development of sensory function. However, there is also growing evidence for plasticity in the adult visual system. This symposium re-examines the notions of critical period and sensitive period for a variety of visual functions. One critical issue is the extent to which alternative neural structures are recruited to restore these visual functions. Recent experimental and clinical evidence will be discussed for the rehabilitation of amblyopia and blindsight.

2012 Public Lecture – Terri Lewis

Terri Lewis

McMaster University in Hamilton, Ontario

Terri Lewis is a professor of Psychology, Neuroscience & Behaviour at McMaster University in Hamilton, Ontario, with appointments in Ophthalmology at the University of Toronto and at The Hospital for Sick Children in Toronto. Dr. Lewis is a world-renowned expert in babies’ vision, and is part of an international think tank on new approaches to improving poor vision in adults. She received her BA at the University of Toronto and her PhD at McMaster University, and has been invited to lecture about her work around the world. She has more than 80 publications in peer-reviewed journals and more than 200 presentations at scientific meetings. She is known for her lively and clear presentation style, and is frequently featured in the international media, including The New York Times and PBS television.

What Babies See

Saturday, May 12, 2012, 10:00 am – 12:00 pm, Renaissance Academy of Florida Gulf Coast University

When a newborn baby looks at her mother’s or grandmother’s face for the first time, what does she see? For a long time, people assumed that babies were blind at birth, seeing nothing more than vague shadows. But that assumption was based only on the knowledge that the newborn’s eyes and brain are very immature. In fact, babies can see much more than you might think. This lecture will describe how we can “ask” babies what they see, and how, by creating special “eye charts” for babies, we have discovered the finest detail that they can see, how well they can see color and motion, and even the age at which they might recognize their parents (and grandparents). I will dispel the myths, describe the facts, and uncover the surprises surrounding the amazing visual world of babies.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

Jointly sponsored by VSS and the Renaissance Academy of Florida Gulf Coast University.

2012 Student Workshops

VSS Workshop for PhD Students and Post-docs: Publish or Perish?

Sunday, May 13, 1:00 – 2:00 pm, Banyan 1-2

Chair: Jeremy Wolfe
Discussants: Cathleen Moore, Eli Brenner, and Li Zhaoping

Publications are the key to success in science. How important is it to be the first author? Should I go for one big paper or two separate, smaller publications? What is the importance of bibliometric indices like the h-factor? Are the reviewers the enemy or my best friends in the publication process?

These questions will be addressed in a one-hour session headed by Dr. Jeremy Wolfe. Dr. Wolfe will give a brief introduction, which will be followed by audience questions and discussion. Three panel members, all experienced editors across the fields of vision science, will participate.

Jeremy Wolfe

Jeremy Wolfe is the Editor-in-Chief of Attention, Perception & Psychophysics, one of the leading journals in the field of Vision Sciences. He received his undergraduate degree in Psychology from Princeton (’77) and his PhD on binocular single vision from the Massachusetts Institute of Technology (’81) where his doctoral advisor was Richard Held. He was on the faculty of MIT until 1991 when he moved to Brigham and Women’s Hospital and Harvard Medical School where he is Professor of Ophthalmology. His major areas of current research concern visual attention and its role in visual experience and visual behavior.

Cathleen Moore

Psychology
University of Iowa

Eli Brenner

Human Movement Sciences
Vrije Universiteit Amsterdam

Li Zhaoping

Computer Science
University College London

VSS Career Event for PhD Students and Post-docs: What’s Next!

Sunday, May 13, 1:00 – 2:00 pm, Acacia 4-6

Chairs: Adriane Seiffert and Jason Droll
Discussants: Ione Fine, George Alvarez, and David Burr

What will be the next step in your life? Will you pursue an academic career as a basic scientist at a university? Or do you plan on working in business? Maybe you want to combine both! And how do you combine your ambitions with a partner and a family? Do women have the same opportunities as men?

These burning questions will be addressed in a one-hour session with short introductions by Drs. Adriane Seiffert (Vanderbilt) and Jason Droll (MEA Forensic). After these introductions, there will be a lively discussion with the audience and a small panel consisting of Ione Fine, David Burr, and George Alvarez.

Adriane Seiffert

Adriane Seiffert received her PhD from Harvard (Cavanagh & Nakayama lab). Her research is directed towards understanding how visual information that changes over time is assimilated into mental representations that direct actions. For this special VSS event, she will share candid advice on the issues of entering academia, balancing family and career, and solving the two-body problem.

Jason Droll

Jason Droll received his PhD in Brain and Cognitive Science from the University of Rochester in 2005 and pursued postdoctoral research at UC Santa Barbara through 2008. His research has focused on how task demands influence eye movements and visual attention. Curious to explore alternate career opportunities, he has since worked both at Exponent and MEA Forensic as a scientist in human factors. Often retained as an expert witness for litigation, Jason applies academic principles of vision to answer questions regarding the use of vision during daily tasks such as driving.

Ione Fine

University of Washington

George Alvarez

Harvard University

David Burr

CNR – Institute for Neuroscience
Pisa, Italy

Neuromodulation of Visual Perception

Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 6-8
Organizers: Jutta Billino, Justus-Liebig-University Giessen and Ulrich Ettinger, Rheinische Friedrich-Wilhelms-Universität Bonn
Presenters: Anita A. Disney, Alexander Thiele, Behrad Noudoost, Ariel Rokem, Ulrich Ettinger, Patrick J. Bennett

< Back to 2012 Symposia

Symposium Description

Over the last decades, insights into the neurobiological mechanisms of visual perception have accumulated into an impressive knowledge base. However, only recently has research started to uncover how different neurotransmitters affect visual processing. Advances in this research area expand our understanding of the complex regulation of sensory and sensorimotor processes. They moreover shed light on the mechanisms underlying individual differences in visual perception and oculomotor control that have been repeatedly observed but are still insufficiently understood. The symposium aims to bring together experts in the field who complement each other with regard to different neurotransmitter systems, methods, and the implications of their findings. Thus, the audience will be provided with an up-to-date overview of our knowledge on the neuromodulation of visual perception. The symposium will start with presentations on physiological data showing the complexity of neuromodulation in early visual cortex. Anita Disney (Salk Institute) has worked together with Mike Hawken (New York University) on cholinergic mechanisms in macaque V1. Their findings show that nicotinic receptors for acetylcholine are involved in gain modulation. The effects of nicotine application resemble those of attention in the awake monkey. Thus, it has been suggested that attentional effects on V1 activity might be partly mediated by acetylcholine. The presentation by Alexander Thiele and colleagues (Newcastle University) will tie in with the focus on attention. They have studied the differential contributions of acetylcholine and glutamate to attentional modulation in V1. They were able to show that both neurotransmitters independently influence firing characteristics of V1 neurons associated with enhanced attention. The work of Behrad Noudoost and Tirin Moore (Stanford University) addresses prefrontal control of visual cortical signals mediated by dopamine. Their findings reveal that dopaminergic manipulation in the frontal eye fields not only affects saccadic target selection but also modulates the response characteristics of V4 neurons. In the second part of the symposium, the presentations will bridge the gap between insights from physiology and behavioral data in humans. Ariel Rokem (Stanford University) and Michael Silver (UC Berkeley) pharmacologically enhanced cholinergic transmission in healthy humans and studied perceptual learning. Their results indicate that acetylcholine increases the effects of perceptual learning, which points to its role in the regulation of neural plasticity. Ulrich Ettinger (Ludwig-Maximilians-University Munich) will summarize his work on the modulation of oculomotor control by cholinergic and dopaminergic challenges. He has studied the effects of pharmacological manipulation as well as of functional genetics on saccadic eye movements. His methods also include imaging and clinical neuropsychology. The symposium will conclude with a presentation by Patrick Bennett and Allison Sekuler (McMaster University) on age-related changes in visual perception and how these can be modeled by altered neurotransmitter activity. The symposium on the neuromodulation of visual perception will attract a broad audience because it offers a comprehensive and interdisciplinary overview of recent advances in this innovative research area. Presentations cover fundamental mechanisms of visual processing as well as implications for perception and visuomotor control. Attendees with diverse backgrounds will benefit and will be inspired to apply insights into neuromodulation to their own research field.

Presentations

Modulating visual gain: cholinergic mechanisms in macaque V1

Anita A. Disney, Salk Institute

Michael J. Hawken, Center for Neural Science, New York University

Cholinergic neuromodulation has been suggested to underlie arousal and attention in mammals. Acetylcholine (ACh) is released in cortex by volume transmission, and so specificity in its effects must largely be conferred by selective expression of ACh receptors (AChRs). To dissect the local circuit action of ACh, we have used both quantitative anatomy and in vivo physiology and pharmacology during visual stimulation in macaque primary visual cortex (V1). We have shown that nicotinic AChRs are found presynaptically at thalamocortical synapses arriving at spiny neurons in layer 4c of V1 and that nicotine acts in this layer to enhance the gain of visual neurons. Similar evidence for nicotinic enhancement of thalamocortical transmission has been found in the primary cortices of other species and across sensory systems. In separate experiments we have shown that, amongst intrinsic V1 neurons, a higher proportion of GABAergic – in particular parvalbumin-immunoreactive – neurons express muscarinic AChRs than do excitatory neurons. We have also shown that ACh strongly suppresses visual responses outside layer 4c of macaque V1 and that this suppression can be blocked using a GABAa receptor antagonist. Suppression by ACh has been demonstrated in other cortical model systems but is often found to be mediated by reduced glutamate release rather than enhanced release of GABA. Recent anatomical data on AChR expression in the extrastriate visual cortex of the macaque and in V1 of rats, ferrets, and humans suggest that there may be variation in the targeting of muscarinic mechanisms across neocortical model systems.

Differential contribution of cholinergic and glutamatergic receptors to attentional modulation in V1

Alexander Thiele, Institute of Neuroscience, Newcastle University, Newcastle Upon Tyne, United Kingdom; Jose Herrero, Institute of Neuroscience, Newcastle University, Newcastle Upon Tyne, United Kingdom; Alwin Gieselmann, Institute of Neuroscience, Newcastle University, Newcastle Upon Tyne, United Kingdom

In V1, attentional modulation of firing rates is dependent on cholinergic (muscarinic) mechanisms (Herrero et al., 2008). Modelling suggests that appropriate ACh drive enables top-down feedback from higher cortical areas to exert its influence (Deco & Thiele, 2011). The implementation of such feedback at the transmitter/receptor level is poorly understood, but it is generally assumed that feedback relies on ionotropic glutamatergic (iGluR) mechanisms. We investigated this possibility by combining iontophoretic pharmacological analysis with V1 cell recordings while macaques performed a spatial attention task. Blockade or activation of iGluRs did not alter attention-induced increases in firing rate, when compared to attend-away conditions. However, attention reduced firing rate variance, as previously reported in V4 (Mitchell, Sundberg, & Reynolds, 2007), and this reduction depended on functioning iGluRs. Attention also reduced spike coherence between simultaneously recorded neurons in V1, as previously demonstrated for V4 (Cohen & Maunsell, 2009; Mitchell et al., 2007). Again, this reduction depended on functional iGluRs. Thus overall excitatory drive (probably aided by feedback) increased the signal-to-noise ratio (reduced firing rate variance) and reduced the redundancy of information transmission (noise correlation) in V1. Conversely, attention-induced firing rate differences are enabled by the cholinergic system. These studies identify independent contributions of different neurotransmitter systems to attentional modulation in V1.

Dopamine-mediated prefrontal control of visual cortical signals

Behrad Noudoost, Department of Neurobiology, Stanford University School of Medicine, Tirin Moore, Department of Neurobiology, Stanford University School of Medicine & Howard Hughes Medical Institute, Stanford University School of Medicine

Prefrontal cortex (PFC) is believed to play a crucial role in executive control of cognitive functions. Part of this control is thought to be achieved by control of sensory signals in posterior sensory cortices. Dopamine is known to play a role in modulating the strength of signals within the PFC. We tested whether this neurotransmitter is involved in PFC’s top-down control of signals within posterior sensory areas. We recorded responses of neurons in visual cortex (area V4) before and after infusion of the D1 receptor (D1R)-antagonist SCH23390 into the frontal eye field (FEF) in monkeys performing visual fixation and saccadic target selection tasks. Visual stimuli were presented within the shared response fields of simultaneously studied V4 and FEF sites. We found that modulation of D1R-mediated activity within the FEF enhances the strength of visual signals in V4 and increases the monkeys’ tendency to choose targets presented within the affected part of visual space. Similar to the D1R manipulation, modulation of D2R-mediated activity within the FEF also increased saccadic target selection. However, it failed to alter visual responses within area V4. The observed effects of D1Rs in mediating the control of visual cortical signals and the selection of visual targets, coupled with its known role in working memory, suggest PFC dopamine as a key player in the control of cognitive functions.

Cholinergic enhancement of perceptual learning in the human visual system

Ariel Rokem, Department of Psychology, Stanford University, Michael A. Silver, Helen Wills Neuroscience Institute and School of Optometry, University of California, Berkeley

Learning from experience underlies our ability to adapt to novel tasks and unfamiliar environments. But how does the visual system know when to adapt and change and when to remain stable? The neurotransmitter acetylcholine (ACh) has been shown to play a critical role in cognitive processes such as attention and learning. Previous research in animal models has shown that plasticity in sensory systems often depends on the task relevance of the stimulus, but experimentally increasing ACh in cortex can replace task relevance in inducing experience-dependent plasticity. Perceptual learning (PL) is a specific and persistent improvement in performance of a perceptual task with training. To test the role of ACh in PL of visual discrimination, we pharmacologically enhanced cholinergic transmission in the brains of healthy human participants by administering the cholinesterase inhibitor donepezil (trade name: Aricept), a commonly prescribed treatment for Alzheimer’s disease. To directly evaluate the effect of cholinergic enhancement, we conducted a double-blind, placebo-controlled cross-over study, in which each subject participated in a course of training under placebo and a course of training under donepezil. We found that, relative to placebo, donepezil increased the magnitude and specificity of the improvement in perceptual performance following PL. These results suggest that ACh plays a role in highlighting occasions in which learning should occur. Specifically, ACh may regulate neural plasticity by selectively increasing responses of neurons to behaviorally relevant stimuli.

Pharmacological Influences on Oculomotor Control in Healthy Humans

Ulrich Ettinger, Rheinische Friedrich-Wilhelms-Universität Bonn

Oculomotor control can be studied as an important model system for our understanding of how the brain implements visually informed (reflexive and voluntary) movements. A number of paradigms have been developed to investigate specific aspects of the cognitive and sensorimotor processes underlying this fascinating ability of the brain. For example, saccadic paradigms allow the specific and experimentally controlled study of response inhibition as well as temporo-spatial prediction. In this talk I will present recent data from studies investigating pharmacological influences on saccadic control in healthy humans. Findings from nicotine studies point to improvements of response inhibition and volitional response generation through this cholinergic agonist. Evidence from methylphenidate on the other hand suggests that oculomotor as well as motor response inhibition is unaffected by this dopaminergic manipulation, whereas the generation of saccades to temporally predictive visual targets is improved. These findings will be integrated with our published and ongoing work on the molecular genetic correlates of eye movements as well as their underlying brain activity. I will conclude by (1) summarising the pharmacological mechanisms underlying saccadic control and (2) emphasising the role that such oculomotor tasks may play in the evaluation of potential cognitive enhancing compounds, with implications for neuropsychiatric conditions such as ADHD, schizophrenia and dementia.

The effects of aging on GABAergic mechanisms and their influence on visual perception

Patrick J. Bennett and Allison B. Sekuler, Department of Psychology, Neuroscience & Behaviour, McMaster University

The functional properties of visual mechanisms, such as the tuning properties of visual cortical neurons, are thought to emerge from an interaction among excitatory and inhibitory neural mechanisms. Hence, changing the balance between excitation and inhibition should lead, at least in some cases, to measurable changes in these mechanisms and, presumably, visual perception. Recent evidence suggests that aging is associated with changes in GABAergic signaling (Leventhal et al., 2003; Pinto et al., 2010); however, it remains unclear how these changes manifest themselves in performance on psychophysical tasks. Specifically, some psychophysical studies (Betts et al., 2005; Wilson et al., 2011), but not all, are consistent with the idea that certain aspects of age-related changes in vision are caused by a reduction in the effectiveness of cortical inhibitory circuits. In my talk I will review the evidence showing that aging is related to changes in GABAergic mechanisms and the challenges associated with linking such changes to psychophysical performance.

< Back to 2012 Symposia

Human visual cortex: from receptive fields to maps to clusters to perception

Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 4-5
Organizer: Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands
Presenters: Serge O. Dumoulin, Koen V. Haak, Alex R. Wade, Mark M. Schira, Stelios M. Smirnakis, Alyssa A. Brewer

< Back to 2012 Symposia

Symposium Description

The organization of the visual system can be described at different spatial scales. At the smallest scale, the receptive field is a property of individual neurons and summarizes the region of the visual field where visual stimulation elicits a response. These receptive fields are organized into visual field maps, where neighboring neurons process neighboring parts of the visual field. Many visual field maps exist, suggesting that every map contains a unique representation of the visual field. This notion relates the visual field maps to the idea of functional specialization, i.e., that separate cortical regions are involved in different processes. However, the computational processes within a visual field map do not have to coincide with perceptual qualities. Indeed, most perceptual functions are associated with multiple visual field maps and even multiple cortical regions. Visual field maps are organized in clusters that share a similar eccentricity organization. This has led to the proposal that perceptual specializations correlate with clusters rather than individual maps. This symposium will highlight current concepts of the organization of visual cortex and their relation to perception and plasticity. The speakers have used a variety of neuroimaging techniques, with a focus on conventional functional magnetic resonance imaging (fMRI) approaches but also including high-resolution fMRI, electroencephalography (EEG), subdural electrocorticography (ECoG), and invasive electrophysiology. We will describe data-analysis techniques that reconstruct receptive field properties of neural populations, and extend them to visual field maps and clusters within human and macaque visual cortex. We will describe the way these receptive field properties vary within and across different visual field maps. Next, we will extend conventional stimulus-referred notions of the receptive field to neural-referred properties, i.e., cortico-cortical receptive fields that capture the information flow between visual field maps. We will also demonstrate techniques to reveal extra-classical receptive field interactions similar to those seen in classical psychophysical "surround suppression" in both S-cone and achromatic pathways. Next we will consider the detailed organization within the foveal confluence, and model the unique constraints that are associated with this organization. Furthermore, we will consider how these neural properties change in a state of chronic visual deprivation due to damage to the visual system, and in subjects with severely altered visual input due to prism adaptation. The link between the organization of visual cortex, perception, and plasticity is a fundamental part of vision science. The symposium highlights these links at various spatial scales. In addition, the attendees will gain insight into a broad spectrum of state-of-the-art data-acquisition and data-analysis neuroimaging techniques. Therefore, we believe that this symposium will be of interest to a wide range of visual scientists, including students, researchers, and faculty.

Presentations

Reconstructing human population receptive field properties

Serge O. Dumoulin, Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, Netherlands, B.M. Harvey, Experimental Psychology, Utrecht University, Netherlands

We describe a method that reconstructs population receptive field (pRF) properties in human visual cortex using fMRI. This data-analysis technique is able to reconstruct several properties of the underlying neural population, such as quantitative estimates of pRF position (maps) and size, as well as suppressive surrounds. PRF sizes increase with increasing eccentricity and up the visual hierarchy. In the same human subject, fMRI pRF measurements are comparable to those derived from subdural electrocorticography (ECoG). Furthermore, we describe a close relationship of pRF sizes to the cortical magnification factor (CMF). Within V1, interhemisphere and subject variations in CMF, pRF size, and V1 surface area are correlated. This suggests a constant processing unit shared between humans. PRF sizes increase between visual areas and with eccentricity, but when expressed in V1 cortical surface area (i.e., cortico-cortical pRFs), they are constant across eccentricity in V2 and V3. Thus, V2, V3, and to some degree hV4, sample from a constant extent of V1. This underscores the importance of V1 architecture as a reference frame for subsequent processing stages and ultimately perception.
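As a rough illustration of the forward model behind this technique (a sketch under stated assumptions, not the authors' implementation), each voxel's pRF can be modeled as a circular 2D Gaussian over visual space, and the predicted response at each time point is the overlap of that Gaussian with the stimulus aperture; in the full method this prediction is convolved with a hemodynamic response function and the pRF parameters are fit per voxel. The example below assumes Python with NumPy and uses arbitrary stimulus and parameter values.

# Illustrative pRF forward model: a 2D Gaussian receptive field applied
# to a sequence of binary stimulus apertures (no HRF convolution, no fitting).
import numpy as np

def prf_prediction(stim, x0, y0, sigma, xx, yy):
    # stim: (n_timepoints, n_y, n_x) binary apertures
    # x0, y0, sigma: pRF center and size in degrees; xx, yy: coordinate grids
    prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    return stim.reshape(stim.shape[0], -1) @ prf.ravel()

# Toy stimulus: a vertical bar sweeping left to right across a 10-degree field.
xx, yy = np.meshgrid(np.linspace(-5, 5, 50), np.linspace(-5, 5, 50))
stim = np.zeros((50, 50, 50))
for t in range(50):
    stim[t, :, t] = 1.0
pred = prf_prediction(stim, x0=1.0, y0=0.0, sigma=1.5, xx=xx, yy=yy)
# 'pred' peaks when the bar crosses the modeled pRF center.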

Cortico-cortical receptive field modeling using functional magnetic resonance imaging (fMRI)

Koen V. Haak, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands, J. Winawer, Psychology, Stanford University; B.M. Harvey, Experimental Psychology, Utrecht University; R. Renken, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands; S.O. Dumoulin, Experimental Psychology, Utrecht University, Netherlands; B.A. Wandell, Psychology, Stanford University; F.W. Cornelissen, Laboratory for Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Netherlands

The traditional way to study the properties of cortical visual neurons is to measure responses to visually presented stimuli (stimulus-referred). A second way to understand neuronal computations is to characterize responses in terms of the responses in other parts of the nervous system (neural-referred). A model that describes the relationship between responses in distinct cortical locations is essential to clarify the network of cortical signaling pathways. Just as a stimulus-referred receptive field predicts the neural response as a function of the stimulus contrast, the neural-referred receptive field predicts the neural response as a function of responses elsewhere in the nervous system. When applied to two cortical regions, this function can be called the population cortico-cortical receptive field (CCRF), and it can be used to assess the fine-grained topographic connectivity between early visual areas. Here, we model the CCRF as a Gaussian-weighted region on the cortical surface and apply the model to fMRI data from both stimulus-driven and resting-state experimental conditions in visual cortex to demonstrate that 1) higher order visual areas such as V2, V3, hV4 and the LOC show an increase in the CCRF size when sampling from the V1 surface, 2) the CCRF size of these higher order visual areas is constant over the V1 surface, 3) the method traces inherent properties of the visual cortical organization, and 4) it probes the direction of the flow of information.

Imaging extraclassical receptive fields in early visual cortex

Alex R. Wade, Department of Psychology, University of York, Heslington, UK, B. Xiao, Department of Brain and Cognitive Sciences, MIT; J. Rowland, Department of Art Practice, UC Berkeley

Psychophysically, apparent color and contrast can be modulated by long-range contextual effects. In this talk I will describe a series of neuroimaging experiments that we have performed to examine the effects of spatial context on color and contrast signals in early human visual cortex. Using fMRI we first show that regions of high contrast in the fovea exert a long-range suppressive effect across visual cortex that is consistent with a contrast gain control mechanism. This suppression is weaker when using stimuli that excite the chromatic pathways and may occur relatively early in the visual processing stream (Wade, Rowland, J Neurosci, 2010). We then used high-resolution source imaged EEG to examine the effects of context on V1 signals initiated in different chromatic and achromatic precortical pathways (Xiao and Wade, J Vision, 2010). We found that contextual effects similar to those seen in classical psychophysical "surround suppression" were present in both S-cone and achromatic pathways but that there was little contextual interaction between these pathways – either in our behavioral or in our neuroimaging paradigms. Finally, we used fMRI multivariate pattern analysis techniques to examine the presence of chromatic tuning in large extraclassical receptive fields (ECRFs). We found that ECRFs have sufficient chromatic tuning to enable classification based solely on information in suppressed voxels that are not directly excited by the stimulus. In many cases, performance using ECRFs was as accurate as that using voxels driven directly by the stimulus.

The human foveal confluence and high resolution fMRI

Mark M. Schira, Neuroscience Research Australia (NeuRA), Sydney & University of New South Wales, Sydney, Australia

After remaining terra incognita for 40 years, the detailed organization of the foveal confluence has just recently been described in humans. I will present recent high-resolution mapping results in human subjects and introduce current concepts of its organization in humans and other primates (Schira et al., J. Neurosci., 2009). I will then introduce a new algebraic retino-cortical projection function that accurately models the V1-V3 complex to the level of our knowledge about the actual organization (Schira et al., PLoS Comp. Biol., 2010). Informed by this model, I will discuss important properties of foveal cortex in primates. These considerations demonstrate that the observed organization, though surprising at first, is in fact a good compromise with respect to cortical surface and local isotropy, providing a potential explanation for this organization. Finally, I will discuss recent advances such as multi-channel head coils and parallel imaging, which have greatly improved the quality and possibilities of MRI. Unfortunately, most fMRI research is still essentially performed in the same old 3 by 3 by 3 mm style – which was adequate when using a 1.5T scanner and a birdcage head coil. I will introduce simple high-resolution techniques that allow fairly accurate estimates of the foveal organization in research subjects within a reasonable timeframe of approximately 20 minutes, providing a powerful tool for research on foveal vision.

Population receptive field measurements in macaque visual cortex

Stelios M. Smirnakis, Departments of Neurosci. and Neurol., Baylor Col. of Med., Houston, TX, G.A. Keliris, Max Planck Inst. for Biol. Cybernetics, Tuebingen, Germany; Y. Shao, A. Papanikolaou, Max Planck Inst. for Biol. Cybernetics, Tuebingen, Germany; N.K. Logothetis, Max Planck Inst. for Biol. Cybernetics, Tuebingen, Germany, Div. of Imaging Sci. and Biomed. Engin., Univ. of Manchester, United Kingdom

Visual receptive fields have dynamic properties that may change with the conditions of visual stimulation or with the state of chronic visual deprivation. We used 4.7 Tesla functional magnetic resonance imaging (fMRI) to study the visual cortex of two normal adult macaque monkeys and one macaque with binocular central retinal lesions due to a form of juvenile macular degeneration (MD). FMRI experiments were performed under light remifentanil-induced anesthesia (Logothetis et al., Nat. Neurosci., 1999). Standard moving horizontal/vertical bar stimuli were presented to the subjects, and the population receptive field (pRF) method (Dumoulin and Wandell, Neuroimage 2008) was used to measure retinotopic maps and pRF sizes in early visual areas. FMRI measurements of normal monkeys agree with published electrophysiological results, with pRF sizes and electrophysiology measurements showing similar trends. For the MD monkey, the size and location of the lesion projection zone (LPZ) was consistent with the retinotopic projection of the retinal lesion in early visual areas. No significant BOLD activity was seen within the V1 LPZ, and the retinotopic organization of the non-deafferented V1 periphery was regular without distortion. Interestingly, area V5/MT of the MD monkey showed more extensive activation than area V5/MT of control monkeys which had part of their visual field obscured (artificial scotoma) to match the scotoma of the MD monkey. V5/MT pRF sizes of the MD monkey were on average smaller than those of controls. PRF estimation methods allow us to measure and follow in vivo how the properties of visual areas change as a function of cortical reorganization. Finally, if there is time, we will discuss a different method of pRF estimation that yields additional information.

Functional plasticity in human parietal visual field map clusters: Adapting to reversed visual input

Alyssa A. Brewer, Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, B. Barton, Department of Cognitive Sciences, University of California, Irvine; L. Lin, AcuFocus, Inc., Irvine

Knowledge of the normal organization of visual field map clusters allows us to study potential reorganization within visual cortex under conditions that lead to a disruption of the normal visual inputs. Here we exploit the dynamic nature of visuomotor regions in posterior parietal cortex to examine cortical functional plasticity induced by a complete reversal of visual input in normal adult humans. We also investigate whether there is a difference in the timing or degree of a second adaptation to the left-right visual field reversal in adult humans after long-term recovery from the initial adaptation period. Subjects wore left-right reversing prism spectacles continuously for 14 days and then returned for a 4-day re-adaptation to the reversed visual field 1-9 months later. For each subject, we used population receptive field modeling fMRI methods to track the receptive field alterations within the occipital and parietal visual field map clusters across time points. The results from the first 14-day experimental period highlight a systematic and gradual shift of visual field coverage from contralateral space into ipsilateral space in parietal cortex throughout the prism adaptation period. After the second, 4-day experimental period, the data demonstrate a faster time course for both behavioral and cortical re-adaptation. These measurements in subjects with severely altered visual input allow us to identify the cortical regions subserving the dynamic remapping of cortical representations in response to altered visual perception and demonstrate that the changes in the maps produced by the initial long prism adaptation period persist over an extended time.

< Back to 2012 Symposia

Distinguishing perceptual shifts from response biases

Time/Room: Friday, May 11, 3:30 – 5:30 pm, Royal Ballroom 1-3
Organizer: Joshua Solomon, City University London
Presenters: Sam Ling, Vanderbilt; Keith Schneider, York University; Steven Hillyard, UCSD; Donald MacLeod, UCSD; Michael Morgan, City University London, Max Planck Institute for Neurological Research, Cologne; Mark Georgeson, Aston University

< Back to 2012 Symposia

Symposium Description

Sensory adaptation was originally considered a low-level phenomenon involving measurable changes in sensitivity, but the term has been extended to include many cases where a change in sensitivity has yet to be demonstrated. Examples include adaptation to blur, temporal duration, and face identity. It has also been claimed that adaptation can be affected by attention to the adapting stimulus, and even that adaptation can be caused by imagining the adapting stimulus. The typical method of measurement in such studies involves a shift in the mean (p50) point of a psychometric function obtained by the Method of Single Stimuli. In Signal Detection Theory, the mean is determined by a decision rule, as opposed to the slope, which is set by internal noise. The question that arises is how we can distinguish shifts in the mean due to a genuine adaptation process from shifts due to a change in the observer's decision rule. This was a hot topic in the 1960s, for example in the discussion between Restle and Helson over Adaptation Level Theory, but it has since become neglected, with the result that any shift in the mean of a psychometric function is now accepted as evidence for a perceptual shift. We think that it is time to revive this issue, given the theoretical importance of claims about adaptation being affected by imagination and attention, and the links that are claimed with functional brain imaging.
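To make the identifiability problem concrete (an illustration added here, not part of the original abstract), the psychometric function measured with the Method of Single Stimuli can be written, under a standard equal-variance signal detection model, as

\[ P(\text{``test greater''} \mid x) \;=\; \Phi\!\left(\frac{(x - \mu) - c}{\sigma}\right), \]

where x is the stimulus level, μ the perceptual point of subjective equality, c the decision criterion, σ the internal noise, and Φ the standard normal cumulative distribution function. The fitted p50 equals μ + c, so a genuine perceptual shift (a change in μ) and a change in the decision rule (a change in c) displace the function identically, while only σ affects its slope.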

Presentations

Attention alters appearance

Sam Ling, Vanderbilt University

Maintaining veridicality seems to be of relatively low priority for the human brain; starting at the retina, our neural representations of the physical world undergo dramatic transformations, often forgoing an accurate depiction of the world in favor of augmented signals that are more optimal for the task at hand. Indeed, visual attention has been suggested to play a key role in this process, boosting the neural representations of attended stimuli, and attenuating responses to ignored stimuli. What, however, are the phenomenological consequences of attentional modulation?  I will discuss a series of studies that we and others have conducted, all converging on the notion that attention can actually change the visual appearance of attended stimuli across a variety of perceptual domains, such as contrast, spatial frequency, and color. These studies reveal that visual attention not only changes our neural representations, but that it can actually affect what we think we see.

Attention increases salience and biases decisions but does not alter appearance.

Keith Schneider, York University

Attention enhances our perceptual abilities and increases neural activity.  Still debated is whether an attended object, given its higher salience and more robust representation, actually looks any different than an otherwise identical but unattended object.  One might expect that this question could be easily answered by an experiment in which an observer is presented two stimuli differing along one dimension, contrast for example, to one of which attention has been directed, and must report which stimulus has the higher apparent contrast.  The problem with this sort of comparative judgment is that in the most informative case, that in which the two stimuli are equal, the observer is also maximally uncertain and therefore most susceptible to extraneous influence.  An intelligent observer might report, all other things being equal, that the stimulus about which he or she has more information is the one with higher contrast.  (And it doesn’t help to ask which stimulus has the lower contrast, because then the observer might just report the less informed stimulus!)  In this way, attention can bias the decision mechanism and confound the experiment such that it is not possible for the experimenter to differentiate this bias from an actual change in appearance.  It has been over ten years since I proposed a solution to this dilemma – an equality judgment task in which observers report whether the two stimuli are equal in appearance or not.  This paradigm has been supported in the literature and has withstood criticisms.  Here I will review these findings.

Electrophysiological Studies of the Locus of Perceptual Bias

Steven Hillyard, UCSD

The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century.  Recent psychophysical studies have reported that attention increases the apparent contrast of visual stimuli, but there is still a controversy as to whether this effect is due to the biasing of decisions as opposed to the altering of perceptual representations and changes in subjective appearance.  We obtained converging neurophysiological evidence while observers judged the relative contrast of Gabor patch targets presented simultaneously to the left and right visual fields following a lateralized cue (auditory or visual).  This non-predictive cueing boosted the apparent contrast of the Gabor target on the cued side in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset.  The magnitude of the enhanced neural response in ventral extrastriate visual cortex was positively correlated with perceptual reports of the cued-side target being higher in contrast.  These results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

Adaptive sensitivity regulation in detection and appearance

Donald MacLeod, UCSD

The visual system adapts to changing levels of stimulation with alterations of sensitivity that are expressed both in changes in detectability and in changes of appearance. The connection between these two aspects of sensitivity regulation is often taken for granted but need not be simple. Even the proportionality between 'thresholds' obtained by self-setting and thresholds based on reliability of detection (e.g., forced choice) is not generally expected except under quite restricted conditions and unrealistically simple models of the visual system. I review some of the theoretical possibilities in relation to the available experimental evidence. Relatively simple mechanistic models provide opportunity for deviations from proportionality, especially if noise can enter the neural representation at multiple stages. The extension to suprathreshold appearance is still more precarious; yet remarkably, under some experimental conditions, proportionality with threshold sensitivities holds, in the sense that equal multiples of threshold match.

Observers can voluntarily shift their psychometric functions without losing sensitivity

Michael Morgan, City University London, Max Planck Institute for Neurological Research, Cologne, Barbara Dillenburger, Sabine Raphael, Max Planck; Joshua A. Solomon, City University

Psychometric sensory discrimination functions are usually modeled by cumulative Gaussian functions with just two parameters, their central tendency and their slope. These correspond to Fechner’s “constant” and “variable” errors, respectively. Fechner pointed out that even the constant error could vary over space and time and could masquerade as variable error. We wondered whether observers could deliberately introduce a constant error into their performance without loss of precision. In three-dot vernier and bisection tasks with the method of single stimuli, observers were instructed to favour one of the two responses when unsure of their answer. The slope of the resulting psychometric function was not significantly changed, despite a significant change in central tendency. Similar results were obtained when altered feedback was used to induce bias. We inferred that observers can adopt artificial response criteria without any significant increase in criterion fluctuation. These findings have implications for some studies that have measured perceptual “illusions” by shifts in the psychometric functions of sophisticated observers.

Sensory, perceptual and response biases: the criterion concept in perception

Mark Georgeson, Aston University

Signal detection theory (SDT) established in psychophysics a crucial distinction between sensitivity (or discriminability, d’) and bias (or criterion) in the analysis of performance in sensory judgement tasks. SDT itself is agnostic about the origins of the criterion, but there seems to be a broad consensus favouring “response bias” or “decision bias”. And yet, perceptual biases exist and are readily induced. The motion aftereffect is undoubtedly perceptual – compelling motion is seen on a stationary pattern – but its signature in psychophysical data is a shift in the psychometric function, indistinguishable from “response bias”.  How might we tell the difference? I shall discuss these issues in relation to some recent experiments and modelling of adaptation to blur (Elliott, Georgeson & Webster, 2011).  A solution might lie in dropping any hard distinction between perceptual shifts and decision biases. Perceptual mechanisms make low-level decisions. Sensory, perceptual and response criteria might be represented neurally in similar ways at different levels of the visual hierarchy, by biasing signals that are set by the task and by the history of stimuli and responses (Treisman & Williams, 1984). The degree of spatial localization over which the bias occurs might reflect its level in the visual hierarchy. Thus, given enough data, the dilemma (are aftereffects perceptual or due to response bias?) might be resolved in favour of such a multi-level model.
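
For readers less familiar with the SDT quantities referred to here, a minimal worked example (invented numbers, standard textbook formulas) shows how sensitivity (d') and criterion (c) are separated from hit and false-alarm rates in a yes/no task, and why two observers with identical d' can produce very different response patterns.

from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    # z-transform the hit and false-alarm rates
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # sensitivity (discriminability)
    criterion = -0.5 * (z_hit + z_fa)   # bias; 0 = neutral, > 0 = conservative
    return d_prime, criterion

print(dprime_and_criterion(0.84, 0.16))   # ~(2.0, 0.0): neutral criterion
print(dprime_and_criterion(0.69, 0.07))   # ~(2.0, 0.5): same d', shifted criterion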

Part-whole relationships in visual cortex

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 6-8
Organizer: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven
Presenters: Johan Wagemans, Charles E. Connor, Scott O. Murray, James R. Pomerantz, Jacob Feldman, Shaul Hochstein

Symposium Description

With his famous paper on phi motion, Wertheimer (1912) launched Gestalt psychology, arguing that the whole is different from the sum of the parts. In fact, wholes were considered primary in perceptual experience, even determining what the parts are. Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? Are wholes constructed from combinations of the parts? If so, to what extent are the combinations additive, what does superadditivity really mean, and how does it arise along the visual hierarchy? How much of the combination process occurs in incremental feedforward iterations or horizontal connections and at what stage does feedback from higher areas kick in? What happens to the representation of the lower-level parts when the higher-level wholes are perceived? Do they become enhanced or suppressed (“explained away”)? Or, are wholes occurring before the parts, as argued by Gestalt psychologists? But what does this global precedence really mean in terms of what happens where in the brain? Does the primacy of the whole only account for consciously perceived figures or objects, and are the more elementary parts still combined somehow during an unconscious step-wise processing stage? A century later, tools are available that were not at the Gestaltists’ disposal to address these questions. In this symposium, we will take stock and try to provide answers from a diversity of approaches, including single-cell recordings from V4, posterior and anterior IT cortex in awake monkeys (Ed Connor, Johns Hopkins University), human fMRI (Scott Murray, University of Washington), human psychophysics (James Pomerantz, Rice University), and computational modeling (Jacob Feldman, Rutgers University). Johan Wagemans (University of Leuven) will introduce the theme of the symposium with a brief historical overview of the Gestalt tradition and a clarification of the conceptual issues involved. Shaul Hochstein (Hebrew University) will end with a synthesis of the current literature, in the framework of Reverse Hierarchy Theory. The scientific merit of addressing such a central issue, which has been around for over a century, from a diversity of modern perspectives and in light of the latest findings should be obvious. The celebration of the centennial anniversary of Gestalt psychology also provides an excellent opportunity to do so. We believe our line-up of speakers, addressing a set of closely related questions from a wide range of methodological and theoretical perspectives, promises to attract a large crowd, including students and faculty working in psychophysics, neurosciences and modeling. In comparison with other proposals taking this centennial anniversary as a window of opportunity, ours is probably more focused and allows for a more coherent treatment of a central Gestalt issue, which has been bothering vision science for a long time.

Presentations

Part-whole relationships in vision science: A brief historical review and conceptual analysis

Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Exactly 100 years ago, Wertheimer’s paper on phi motion (1912) effectively launched the Berlin school of Gestalt psychology. Arguing against elementalism and associationism, the Gestaltists maintained that experienced objects and relationships are fundamentally different from collections of sensations. Going beyond von Ehrenfels’s notion of Gestalt qualities, which involved one-sided dependence on sense data, true Gestalts are dynamic structures in experience that determine what will be wholes and parts. From the beginning, this two-sided dependence between parts and wholes was believed to have a neural basis. They spoke of continuous “whole-processes” in the brain, and argued that research needed to try to understand these from top (whole) to bottom (parts) rather than the other way around. However, Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? In this talk, I will briefly review the Gestalt position and analyse the different notions of part and whole, and different views on part-whole relationships maintained in a century of vision science since the start of Gestalt psychology. This will provide some necessary background for the remaining talks in this symposium, which will all present contemporary views based on new findings.

Ventral pathway visual cortex: Representation by parts in a whole object reference frame

Charles E. Connor, Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Anitha Pasupathy, Scott L. Brincat, Yukako Yamane, Chia-Chun Hung

Object perception by humans and other primates depends on the ventral pathway of visual cortex, which processes information about object structure, color, texture, and identity.  Object information processing can be studied at the algorithmic, neural coding level using electrode recording in macaque monkeys.  We have studied information processing in three successive stages of the monkey ventral pathway:  area V4, PIT (posterior inferotemporal cortex), and AIT (anterior inferotemporal cortex).  At all three stages, object structure is encoded in terms of parts, including boundary fragments (2D contours, 3D surfaces) and medial axis components (skeletal shape fragments).  Area V4 neurons integrate information about multiple orientations to produce signals for local contour fragments.  PIT neurons integrate multiple V4 inputs to produce representations of multi-fragment configurations.  Even neurons in AIT, the final stage of the monkey ventral pathway, represent configurations of parts (as opposed to holistic object structure).  However, at each processing stage, neural responses are critically dependent on the position of parts within the whole object.  Thus, a given neuron may respond strongly to a specific contour fragment positioned near the right side of an object but not at all when it is positioned near the left.  This kind of object-centered position tuning would serve an essential role by representing spatial arrangement within a distributed, parts-based coding scheme. Object-centered position sensitivity is not imposed by top-down feedback, since it is apparent in the earliest responses at lower stages, before activity begins at higher stages.  Thus, while the brain encodes objects in terms of their constituent parts, the relationship of those parts to the whole object is critical at each stage of ventral pathway processing.

Long-range, pattern-dependent contextual effects in early human visual cortex

Scott O. Murray, Department of Psychology, University of Washington, Sung Jun Joo, Geoffrey M. Boynton

The standard view of neurons in early visual cortex is that they behave like localized feature detectors. We will discuss recent results that demonstrate that neurons in early visual areas go beyond localized feature detection and are sensitive to part-whole relationships in images. We measured neural responses to a grating stimulus (“target”) embedded in various visual patterns as defined by the relative orientation of flanking stimuli. We varied whether or not the target was part of a predictable sequence by changing the orientation of distant gratings while maintaining the same local stimulus arrangement. For example, a vertically oriented target grating that is flanked locally with horizontal flankers (HVH) can be made to be part of a predictable sequence by adding vertical distant flankers (VHVHV). We found that even when the local configuration (e.g. HVH) around the target was kept the same there was a smaller neural response when the target was part of a predictable sequence (VHVHV). Furthermore, when making an orientation judgment of a “noise” stimulus that contains no specific orientation information, observers were biased to “see” the orientation that deviates from the predictable orientation, consistent with computational models of primate cortical processing that incorporate efficient coding principles. Our results suggest that early visual cortex is sensitive to global patterns in images in a way that is markedly different from the predictions of standard models of cortical visual processing and indicate an important role in coding part-whole relationships in images.

The computational and cortical bases for configural superiority

James R. Pomerantz, Department of Psychology, Rice University, Anna I. Cragin, Department of Psychology, Rice University; Kimberley D. Orsten, Department of Psychology, Rice University; Mary C. Portillo, Department of Social Sciences, University of Houston-Downtown

In the configural superiority effect (CSE; Pomerantz et al., 1977; Pomerantz & Portillo, 2011), people respond more quickly to a whole configuration than to any one of its component parts, even when the parts added to create a whole contribute no information by themselves.  For example, people discriminate an arrow from a triangle more quickly than a positive from a negative diagonal even when those diagonals constitute the only difference between the arrows and triangles.  How can a neural or other computational system be faster at processing information about combinations of parts – wholes – than about parts taken singly?  We consider the results of Kubilius et al. (2011) and discuss three possibilities: (1) Direct detection of wholes through smart mechanisms that compute higher order information without performing seemingly necessary intermediate computations; (2) the “sealed channel hypothesis” (Pomerantz, 1978), which holds that part information is extracted prior to whole information in a feedforward manner but is not available for responses; and (3) a closely related reverse hierarchy model holding that conscious experience begins with higher cortical levels processing wholes, with parts becoming accessible to consciousness only after feedback to lower levels is complete (Hochstein & Ahissar, 2002).  We describe a number of CSEs and elaborate both on the mechanisms that might explain them and on how they might be confirmed experimentally.

Computational integration of local and global form

Jacob Feldman, Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick, Manish Singh, Vicky Froyen

A central theme of perceptual theory, from the Gestaltists to the present, has been the integration of local and global image information. While neuroscience has traditionally viewed perceptual processes as beginning with local operators with small receptive fields before proceeding on to more global operators with larger ones, a substantial body of evidence now suggests that supposedly later processes can impose decisive influences on supposedly earlier ones, suggesting a more complicated flow of information. We consider this problem from a computational point of view. Some local processes in perceptual organization, like the organization of visual items into a local contour, can be well understood in terms of simple probabilistic inference models. But for a variety of reasons nonlocal factors such as global “form” resist such simple models. In this talk I’ll discuss constraints on how form- and region-generating probabilistic models can be formulated and integrated with local ones. From a computational point of view, the central challenge is how to embed the corresponding estimation procedure in a locally-connected network-like architecture that can be understood as a model of neural computation.
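
As one deliberately simplified instance of the "simple probabilistic inference models" for local contour organization mentioned above (a sketch of my own, with an assumed von Mises smoothness prior, not the speaker's model), two neighbouring edge elements can be scored for belonging to the same smooth contour by a likelihood ratio on their turning angle.

import numpy as np
from scipy.stats import vonmises

KAPPA = 4.0  # assumed concentration: smooth contours favour small turning angles

def p_same_contour(turning_angle, prior_same=0.5):
    # Likelihood of the observed turning angle under a "same smooth contour" model...
    like_same = vonmises.pdf(turning_angle, KAPPA, loc=0.0)
    # ...versus a uniform model for unrelated edge elements
    like_diff = 1.0 / (2.0 * np.pi)
    num = like_same * prior_same
    return num / (num + like_diff * (1.0 - prior_same))

for angle_deg in (0, 20, 60, 120):
    print(f"turning angle {angle_deg:3d} deg -> P(same contour) = "
          f"{p_same_contour(np.radians(angle_deg)):.2f}")
# Small turning angles yield a high posterior probability of grouping.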

The rise and fall of the Gestalt gist

Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University, Merav Ahissar

Reviewing the current literature, one finds physiological bases for Gestalt-like perception, but also much that seems to contradict the predictions of this theory. Some resolution may be found in the framework of Reverse Hierarchy Theory, which distinguishes between implicit processes, of which we are unaware, and explicit representations, which enter perceptual consciousness. It is the conscious percepts that appear to match Gestalt predictions – recognizing wholes even before the parts. We now need to study the processing mechanisms at each level, and, importantly, the feedback interactions which equally affect and determine the plethora of representations that are formed, and to analyze how they determine conscious perception. Reverse Hierarchy Theory proposes that initial perception of the gist of a scene – including whole objects, categories and concepts – depends on rapid bottom-up implicit processes, which seem to follow (determine) Gestalt rules. Since lower level representations are initially unavailable to consciousness – and may become available only with top-down guidance – perception seems to immediately jump to Gestalt conclusions. Nevertheless, vision in the blink of an eye is the result of many layers of processing, though introspection is blind to these steps, failing to see the trees within the forest. Later, slower perception, focusing on specific details, reveals the source of Gestalt processes – and destroys them at the same time. Details of recent results, including micro-genesis analyses, will be reviewed within the framework of Gestalt and Reverse Hierarchy theories.

What does fMRI tell us about brain homologies?

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 4-5
Organizer: Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology
Presenters: Martin Sereno, David Van Essen, Hauke Kolster, Jonathan Winawer, Reza Rajimehr

Symposium Description

Over the past 20 years, functional magnetic resonance imaging (fMRI) has provided a great deal of knowledge about the functional organization of human visual cortex. In recent years, the development of the fMRI technique in non-human primates has enabled neuroscientists to directly compare the topographic organization and functional properties of visual cortical areas across species. These comparative studies have shown striking similarities (‘homologies’) between human and monkey visual cortex. Many visual cortical areas in human correspond to homologous areas in monkey – though detailed cross-species comparisons have also shown specific variations in visual feature selectivity of cortical areas and spatial arrangement of visual areas on the cortical sheet. Comparing cortical structures in human versus monkey provides a framework for generalizing results from invasive neurobiological studies in monkeys to humans. It also provides important clues for understanding the evolution of cerebral cortex in primates. In this symposium, we would like to highlight recent fMRI studies on the organization of visual cortex in human versus monkey. We will have 5 speakers. Each speaker will give a 25-minute talk (including 5 minutes of discussion time). Martin Sereno will introduce the concept of brain homology, elaborate on its importance, and evaluate technical limitations in addressing the homology questions. He will then continue with some examples of cross-species comparison for retinotopic cortical areas. David Van Essen will describe recent progress in applying surface-based analysis and visualization methods that provide a powerful approach for comparisons among primate species, including macaque, chimpanzee, and human. Hauke Kolster will test the homology between visual areas in occipital cortex of human and macaque in terms of topological organization, functional characteristics, and population receptive field sizes. Jonathan Winawer will review different organizational schemes for visual area V4 in human, relative to those in macaque. Reza Rajimehr will compare object-selective cortex (including face and scene areas) in human versus macaque. The symposium will be of interest to visual neuroscientists (faculty and students) and a general audience who will benefit from a series of integrated talks on the fundamental yet relatively neglected topic of brain homology.

Presentations

Evolution, taxonomy, homology, and primate visual areas

Martin Sereno, Department of Cognitive Science, UC San Diego

Evolution involves the repeated branching of lineages, some of which become extinct. The problem of determining the relationship between cortical areas within the brains of surviving branches (e.g., humans, macaques, owl monkeys) is difficult because of: (1) missing evolutionary intermediates, (2) different measurement techniques, (3) body size differences, and (4) duplication, fusion, and reorganization of brain areas. Routine invasive experiments are carried out in very few species (one loris, several New and Old World monkeys). The closest to humans are macaque monkeys. However, the last common ancestor of humans and macaques dates to more than 30 million years ago. Since then, New and Old World monkey brains have evolved independently from ape and human brains, resulting in complex mixes of shared and unique features. Evolutionary biologists are often interested in “shared derived” characters — specializations from a basal condition that are peculiar to a species or grouping of species. These are important for classification (e.g., a brain feature unique to macaque-like monkeys). Evolutionary biologists also distinguish similarities due to inheritance (homology — e.g., MT) from similarities due to parallel or convergent evolution (homoplasy — e.g., layer 4A staining in humans and owl monkeys). By contrast with taxonomists, neuroscientists are usually interested in trying to determine which features are conserved across species (whether by inheritance or parallel evolution), indicating that those features may have a basic functional and/or developmental role. The only way to obtain either of these kinds of information is to examine data from multiple species.

Surface-based analyses of human, macaque, and chimpanzee cortical organization

David Van Essen, Department of Anatomy and Neurobiology, Washington University School of Medicine

Human and macaque cortex differ markedly in surface area (nine-fold), in their pattern of convolutions, and in the relationship of cortical areas to these convolutions.  Nonetheless, there are numerous similarities and putative homologies in cortical organization revealed by architectonic and other anatomical methods and more recently by noninvasive functional imaging methods.  There are also differences in functional organization, particularly in regions of rapid evolutionary expansion in the human lineage.  This presentation will highlight recent progress in applying surface-based analysis and visualization methods that provide a powerful general approach for comparisons among primate species, including the macaque, chimpanzee, and human. One major facet involves surface-based atlases that are substrates for increasingly accurate cortical parcellations in each species as well as maps of functional organization revealed using resting-state and task-evoked fMRI. Additional insights into cortical parcellations as well as evolutionary relationships are provided by myelin maps that have been obtained noninvasively in each species.  Together, these multiple modalities provide new insights regarding visual cortical organization in each species.  Surface-based registration provides a key method for making objective interspecies comparisons, using explicit landmarks that represent known or candidate homologies between areas.  Recent algorithmic improvements in landmark-based registration, coupled with refinements in the available set of candidate homologies, provide a fresh perspective on primate cortical evolution and species differences in the pattern of evolutionary expansion.

Comparative mapping of visual areas in the human and macaque occipital cortex

Hauke Kolster, Laboratorium voor Neurofysiologie en Psychofysiologie, Katholieke Universiteit Leuven Medical School

The introduction of functional magnetic resonance imaging (fMRI) as a non-invasive imaging modality has enabled the study of human cortical processes with high spatial specificity and allowed for a direct comparison of the human and the macaque within the same modality. This presentation will focus on the phase-encoded retinotopic mapping technique, which is used to establish parcellations of cortex consisting of distinct visual areas. These parcellations may then be used to test for similarities between the cortical organizations of the two species. Results from ongoing work will be presented with regard to retinotopic organization of the areas as well as their characterizations by functional localizers and population receptive field (pRF) sizes. Recent developments in fMRI methodology, such as improved resolution and stimulus design as well as analytical pRF methods have resulted in higher quality of the retinotopic field maps and revealed visual field-map clusters as new organizational principles in the human and macaque occipital cortex. In addition, measurements of population-average neuronal properties have the potential to establish a direct link between fMRI studies in the human and single cell studies in the monkey. An inter-subject registration algorithm will be presented, which uses a spatial correlation of the retinotopic and the functional test data to directly compare the functional characteristics of a set of putative homologue areas across subjects and species. The results indicate strong similarities between twelve visual areas in occipital cortex of human and macaque in terms of topological organization, functional characteristics and pRF sizes.
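
For orientation, the pRF modelling referred to here is typically built on a simple forward model in which each voxel's population receptive field is a 2D Gaussian in visual space and its predicted response is the overlap of that Gaussian with the stimulus aperture over time. The sketch below is illustrative only (the grid, bar stimulus, and pRF parameters are placeholders, and the hemodynamic convolution is omitted).

import numpy as np

def gaussian_prf(x, y, x0, y0, sigma):
    # 2D Gaussian pRF centred at (x0, y0) with size sigma (deg of visual angle)
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))

xs = np.linspace(-10, 10, 101)                # visual field grid (deg)
X, Y = np.meshgrid(xs, xs)

# Toy stimulus: a 2-deg-wide vertical bar sweeping left to right in 20 steps
bar_positions = np.linspace(-10, 10, 20)
stimulus = np.array([(np.abs(X - bx) < 1.0).astype(float) for bx in bar_positions])

prf = gaussian_prf(X, Y, x0=3.0, y0=-2.0, sigma=1.5)   # hypothetical voxel

# Predicted (neural) time course: stimulus-pRF overlap at each time step
timecourse = stimulus.reshape(len(bar_positions), -1) @ prf.ravel()
print(np.round(timecourse / timecourse.max(), 2))
# The prediction peaks when the bar crosses the pRF centre (x of about 3 deg).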

The fourth visual area: A question of human and macaque homology

Jonathan Winawer, Psychology Department, Stanford University

The fourth visual area, V4, was identified in rhesus macaque and described in a series of anatomical and functional studies (Zeki 1971, 1978). Because of its critical role in seeing color and form, V4 has remained an area of intense study. The identification of a color-sensitive region on the ventral surface of human visual cortex, anterior to V3, suggested the possible homology between this area, labeled ‘Human V4’ or ‘hV4’ (McKeefry, 1997; Wade, 2002), and macaque V4 (mV4). Both areas are retinotopically organized. Homology is not uniformly accepted because of substantial differences in spatial organization, though these differences have been questioned (Hansen, 2007). mV4 is a split hemifield map, with parts adjacent to the ventral and dorsal portions of the V3 map. In contrast, some groups have reported that hV4 falls wholly on ventral occipital cortex. Over the last 20 years, several organizational schemes have been proposed for hV4 and surrounding maps. In this presentation I review evidence for the different schemes, with emphasis on recent findings showing that an artifact of functional MRI caused by the transverse sinus afflicts measurements of the hV4 map in many (but not all) hemispheres. By focusing on subjects where the hV4 map is relatively remote from the sinus artifact, we show that hV4 is best described as a single, unbroken map on the ventral surface representing the full contralateral visual hemifield. These results support claims of substantial deviations from homology between human and macaque in the organization of the 4th visual map.

Spatial organization of face and scene areas in human and macaque visual cortex

Reza Rajimehr, McGovern Institute for Brain Research, Massachusetts Institute of Technology

The primate visual cortex has a specialized architecture for processing specific object categories such as faces and scenes. For instance, inferior temporal cortex in macaque contains a network of discrete patches for processing face images. Direct comparison between human and macaque category-selective areas shows that some areas in one species have missing homologues in the other species. Using fMRI, we identified a face-selective region in anterior temporal cortex in human and a scene-selective region in posterior temporal cortex in macaque, which correspond to homologous areas in the other species. A surface-based analysis of cortical maps showed a high degree of similarity in the spatial arrangement of face and scene areas between human and macaque. This suggests that neighborhood relations between functionally-defined cortical areas are evolutionarily conserved – though the topographic relation between the areas and their underlying anatomy (gyral/sulcal pattern) may vary from one species to another.

Pulvinar and Vision: New insights into circuitry and function

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 1-3
Organizer: Vivien A. Casagrande, PhD, Department of Cell & Developmental Biology, Vanderbilt Medical School, Nashville, TN
Presenters: Gopathy Purushothaman, Christian Casanova, Heywood M. Petry, Robert H. Wurtz, Sabine Kastner, David Whitney

Symposium Description

The thalamus is considered the gateway to the cortex. Yet even the late Ted Jones, who wrote two huge volumes on the organization of the thalamus, remarked that we know amazingly little about many of its components and their role in cortical function. This is despite the fact that a major two-way highway connects all areas of cortex with the thalamus. The pulvinar is the largest thalamic nucleus in mammals; it progressively enlarged during primate evolution, dwarfing the rest of the thalamus in humans. The pulvinar also remains the most mysterious of thalamic nuclei in terms of its function. This symposium brings together six speakers from quite different perspectives who, using tools from anatomy, neurochemistry, physiology, neuroimaging and behavior, will highlight intriguing recent insights into the structure and function of the pulvinar.  The speakers will jointly touch on: 1) the complexity of architecture, connections and neurochemistry of the pulvinar, 2) potential species similarities and differences in pulvinar’s role in transmitting visual information from subcortical visual areas to cortical areas, 3) the role of pulvinar in eye movements and in saccadic suppression, 4) the role of pulvinar in regulating cortico-cortical communication between visual cortical areas, and finally 5) converging ideas on the mechanisms that might explain the role of the pulvinar under the larger functional umbrella of visual salience and attention.  Specifically, the speakers will address the following issues.  Purushothaman and Casanova will outline contrasting roles for pulvinar in influencing visual signals in early visual cortex in primates and non-primates, respectively.  Petry and Wurtz will describe the organization and the potential role of retino-tectal inputs to the pulvinar, and that of pulvinar projections to the middle temporal (MT/V5) visual area in primate and its equivalent in non-primates. Wurtz also will consider the role of pulvinar in saccadic suppression.  Kastner will describe the role of the pulvinar in regulating information transfer between cortical areas in primates trained to perform an attention task. Whitney will examine the role of pulvinar in human visual attention and perceptual discrimination.  This symposium should attract a wide audience of Vision Sciences Society (VSS) participants, as the function of the thalamus is key to understanding cortical organization.  Studies of the pulvinar and its role in vision have seen a renaissance given the new technologies available to reveal its function.  The goal of this session will be to provide the VSS audience with a new appreciation of the role of the thalamus in vision.

Presentations

Gating of the Primary Visual Cortex by Pulvinar for Controlling Bottom-Up Salience

Gopathy Purushothaman, PhD, Department of Cell & Developmental Biology, Vanderbilt University, Roan Marion, Keji Li and Vivien A. Casagrande, Vanderbilt University

The thalamic nucleus pulvinar has been implicated in the control of visual attention.  Its reciprocal connections with both frontal and sensory cortices can coordinate top-down and bottom-up processes for selective visual attention.  However, pulvino-cortical neural interactions are little understood.  We recently found that the lateral pulvinar (PL) powerfully controls stimulus-driven responses in the primary visual cortex (V1).  Reversibly inactivating PL abolished visual responses in supra-granular layers of V1.  Excitation of PL neurons responsive to one region of visual space increased V1 responses to this region 4-fold and decreased V1 responses to the surrounding region 3-fold.  Glutamate agonist injection in LGN increased V1 activity 8-fold and induced an excitotoxic lesion of LGN; subsequently injecting the glutamate agonist into PL increased V1 activity 14-fold.  Spontaneous activity in PL and V1 following visual stimulation was strongly coupled and selectively entrained at the stimulation frequency.  These results suggest that PL-V1 interactions are well-suited to control bottom-up salience within a competitive cortico-pulvino-cortical network for selective attention.

Is The Pulvinar Driving or Modulating Responses in the Visual Cortex?

Christian Casanova, PhD, Univ. Montreal, CP 6128 Succ Centre-Ville, Sch Optometry, Montreal, Canada, Matthieu Vanni & Reza F. Abbas & Sébastien Thomas. Visual Neuroscience Laboratory, School of Optometry, Université de Montréal, Montreal, Canada

Signals from lower cortical areas are not only transferred directly to higher-order cortical areas via cortico-cortical connections but also indirectly through cortico-thalamo-cortical projections. One step toward the understanding of the role of transthalamic corticocortical pathways is to determine the nature of the signals transmitted between the cortex and the thalamus. Are they strictly modulatory, i.e. are they modifying the activity in relation to the stimulus context and the analysis being done in the projecting area, or are they used to establish basic functional characteristics of cortical cells?  While the presence of drivers and modulators has been clearly demonstrated along the retino-geniculo-cortical pathway, it is not known whether such a distinction can be made functionally in pathways involving the pulvinar. Since drivers and modulators can exhibit a different temporal pattern of response, we measured the spatiotemporal dynamics of voltage-sensitive dye activation in the visual cortex following pulvinar electrical stimulation in cats and tree shrews. Stimulation of pulvinar induced fast and local responses in extrastriate cortex. In contrast, the propagated waves in the primary visual cortex (V1) were weak in amplitude and diffuse. Co-stimulating pulvinar and LGN produced responses in V1 that were weaker than the sum of the responses evoked by the independent stimulation of both nuclei. These findings support the presence of drivers and modulators along pulvinar pathways and suggest that the pulvinar can exert a modulatory influence on cortical processing of LGN inputs in V1 while it mainly provides driver inputs to extrastriate areas, reflecting the different connectivity patterns.

What is the role of the pulvinar nucleus in visual motion processing?

Heywood M. Petry, Department of Psychological & Brain Sciences, University of Louisville, Martha E. Bickford, Department of Anatomical Sciences and Neurobiology, University of Louisville School of Medicine

To effectively interact with our environment, body movements must be coordinated with the perception of visual movement. We will present evidence that regions of the pulvinar nucleus that receive input from the superior colliculus (tectum) may be involved in this process. We have chosen the tree shrew (Tupaia belangeri, a prototype of early primates) as our animal model because tectopulvinar pathways are particularly enhanced in this species, and our psychophysical experiments have revealed that tree shrews are capable of accurately discriminating small differences in the speed and direction of moving visual displays. Using in vivo electrophysiological recording techniques to test receptive field properties, we found that pulvinar neurons are responsive to moving visual stimuli, and most are direction selective. Using anatomical techniques, we found that tectorecipient pulvinar neurons project to the striatum, amygdala, and temporal cortical areas homologous to the primate middle temporal area, MT/V5. Using in vitro recording techniques, immunohistochemistry and stereology, we found that tectorecipient pulvinar neurons express more calcium channels than neurons in other thalamic nuclei and thus display a higher propensity to fire with bursts of action potentials, potentially providing a mechanism to effectively coordinate the activity of cortical and subcortical pulvinar targets. Collectively, these results suggest that the pulvinar nucleus may relay visual movement signals from the superior colliculus to subcortical brain regions to guide body movements, and simultaneously to the temporal cortex to modify visual perception as we move through our environment.

One message the pulvinar sends to cortex

Robert H. Wurtz, NIH-NEI, Lab of Sensorimotor Research, Rebecca Berman, NIH-NEI, Lab of Sensorimotor Research

The pulvinar has long been recognized as a way station on a second visual pathway to the cerebral cortex. This identification has largely been based on the pulvinar’s connections, which are appropriate for providing visual information to multiple regions of visual cortex from subcortical areas. What is little known is what information the pulvinar actually conveys, especially in the intact, functioning visual system.  We have identified one pathway through the pulvinar that extends from the superficial visual layers of the superior colliculus through inferior pulvinar (principally PIm) to cortical area MT by using the techniques of combined anti- and orthodromic stimulation. We have now explored what this pathway might convey to cortex and have first concentrated on a modulation of visual processing first seen in SC, the suppression of visual responses during saccades.  We have been able to replicate the previous observations of the suppression in SC and in MT and now show that PIm neurons are also similarly suppressed.  We then inactivated SC and showed that the suppression in MT is reduced. While we do not know all of the signals conveyed through this pathway to cortex, we do have evidence for one: the suppression of vision during saccades. This signal is neither a visual nor a motor signal but conveys the action of an internal motor signal on visual processing.  Furthermore, combining our results in the behaving monkey with recent experiments in mouse brain slices (Phongphanphanee et al. 2011) provides a complete circuit from brainstem to cortex for conveying this suppression.

Role of the pulvinar in regulating information transmission between cortical areas

Sabine Kastner, MD, Department of Psychology, Center for Study of Brain, Mind and Behavior, Green Hall, Princeton, Yuri B. Saalman, Princeton Neuroscience Institute, Princeton University

Recent studies suggest that the degree of neural synchrony between cortical areas can modulate their information transfer according to attentional needs. However, it is not clear how two cortical areas synchronize their activities. Directly connected cortical areas are generally also indirectly connected via the thalamic nucleus, the pulvinar. We hypothesized that the pulvinar helps synchronize activity between cortical areas, and tested this by simultaneously recording from the pulvinar, V4, TEO and LIP of macaque monkeys performing a spatial attention task. Electrodes targeted interconnected sites between these areas, as determined by probabilistic tractography on diffusion tensor imaging data. Spatial attention increased synchrony between the cortical areas in the beta frequency range, in line with increased causal influence of the pulvinar on the cortex at the same frequencies. These results suggest that the pulvinar co-ordinates activity between cortical areas, to increase the efficacy of cortico-cortical transmission.
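
As a rough illustration of how the beta-band synchrony described here is commonly quantified (my own sketch on synthetic signals, not the authors' analysis pipeline; the sampling rate, frequencies, and noise levels are invented), magnitude-squared coherence between two recordings can be computed and averaged over 13-30 Hz.

import numpy as np
from scipy.signal import coherence

fs = 1000.0                                   # sampling rate (Hz), placeholder
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

shared_beta = np.sin(2 * np.pi * 20 * t)      # common 20 Hz (beta) component
pulvinar = shared_beta + rng.normal(0, 1.0, t.size)
cortex = shared_beta + rng.normal(0, 1.0, t.size)

freqs, coh = coherence(pulvinar, cortex, fs=fs, nperseg=1024)
beta_band = (freqs >= 13) & (freqs <= 30)
print(f"mean beta-band coherence: {coh[beta_band].mean():.2f}")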

Visual Attention Gates Spatial Coding in the Human Pulvinar

David Whitney, The University of California, Berkeley, Jason Fischer, The University of California, Berkeley

Based on the pulvinar’s widespread connectivity with the visual cortex, as well as with putative attentional source regions in the frontal and parietal lobes, the pulvinar is suspected to play an important role in visual attention. However, there remain many hypotheses on the pulvinar’s specific function. One hypothesis is that the pulvinar may play a role in filtering distracting stimuli when they are actively ignored. Because it remains unclear whether this is the case, how this might happen, or what the fate of the ignored objects is, we sought to characterize the spatial representation of visual information in the human pulvinar for equally salient attended and ignored objects that were presented simultaneously. In an fMRI experiment, we measured the spatial precision with which attended and ignored stimuli were encoded in the pulvinar, and we found that attention completely gated position information: attended objects were encoded with high spatial precision, but there was no measurable spatial encoding of actively ignored objects. This is despite the fact that the attended and ignored objects were identical and present simultaneously, and both attended and ignored objects were represented with great precision throughout the visual cortex. These data support a role for the pulvinar in distractor filtering and reveal a possible mechanism: by modulating the spatial precision of stimulus encoding, signals from competing stimuli can be suppressed in order to isolate behaviorally relevant objects.
