6th Annual Dinner and Demo Night

Monday, May 12, 2008, 6:30 – 9:30 pm

BBQ 6:30 – 8:30 pm Vista Ballroom, Vista Terrace and Sunset Deck
Demos 7:30 – 9:30 pm Royal Palm foyer, Acacia Meeting Rooms

Please join us Monday night for the 6th Annual VSS Demo Night, a spectacular night of imaginative demos, social interaction and delectable food. This year’s BBQ will be held on the beautiful Vista Terrace and Sunset Deck overlooking the Naples Grande main pool. Demos will be located upstairs on the ballroom level in the Royal Palm foyer and Acacia Meeting Rooms.

Richard O. Brown, Arthur Shapiro and Shin Shimojo have curated 21 demonstrations of visual phenomena by VSS members, highlighting the important roles demonstrations play in vision research and education.

Demo Night is free for all registered VSS attendees. Meal tickets are not required, but you must wear your VSS badge for entry to the BBQ. Guests and family members of all ages are welcome to attend the demos, but must purchase a ticket for the BBQ. You can register your guests at any time during the meeting at the VSS Registration Desk located in the Royal Ballroom foyer. A desk will also be set up at the entrance to the BBQ in the Vista Ballroom beginning at 6:00 pm on Monday night.

Guest prices: Adults: $30, Youth (6-12 years old): $15, Children under 6: free

Wide field of view HMD walking experience in Virtual Reality

Bryce Armstrong and Matthias Pusch; WorldViz LLC
New demo worlds by WorldViz will immerse participants at higher levels with a new high-speed wide area tracking system and new wide FOV HMD setup with improved resolution.

LITE Vision Demonstrations

Kenneth Brecher; Boston University
I will present the most recent Project LITE vision demonstrations (including ones not yet posted on the web) – both computer software and new physical objects.

The Blue Arcs – functional imaging of neural activity in your own retina

Richard O. Brown; The Exploratorium
A simple demonstration of the Blue Arcs of the Retina, a beautiful entoptic phenomenon with a long history (Purkinje 1825, Moreland 1968), which deserves to be more widely known.

An opti-mechanical demonstration of differential chromatic and achromatic flicker fusion

Gideon P. Caplovitz and Howard C. Hughes; Dartmouth College
We will present a classic dynamic demonstration of differential flicker fusion rates for achromatic and chromatic flicker, using birefringent materials and polarized light.

Stereo rotation standstill

Max R. Dürsteler; Zurich University Hospital
A rotating spoked wheel defined only by disparity cues appears stationary when fixating the center of rotation. With peripheral fixation, one can infer the wheel’s rotation by tracking single spokes.

Sal, an embodied robotic platform for real-time visual attention, object recognition and manipulation

Lior Elazary, Laurent Itti, Rob Peters and Kai Chang; USC
An integrated robotic head/arm system, controlled by a pair of laptop computers (“dorsal” and “ventral”), will be able to locate, learn, recognize and grasp visual objects in real time.

“The impossible but possible transparency” and other new illusions

Simone Gori and Daniela Bressanelli; University of Trieste and University of Verona
We will demonstrate new motion illusions, including a new transparency effect that arises in a special condition in which the color combination contradicts the rules of transparency.

A novel method for eye movement detection and fixation training

Parkson Leung, Emmanuel Guzman, Satoru Suzuki, Marcia Grabowecky and Steve Franconeri; Northwestern University
We will demonstrate a rapid contrast-reversing display of random dots that appears uniform during fixation, but in which the random-dot pattern is perceived during eye movements or blinks.

3D shape recovery from a single 2D image

Yunfeng Li, Tadamasa Sawada, Yll Haxhimusa, Stephen Sebastian and Zygmunt Pizlo; Purdue University
We will demonstrate software that can take a single 2D image of a 3D scene and recover 3D shapes of objects in the scene, based on contours of the objects extracted by hand or automatically.

Rolling perception without rolling motion

Songjoo Oh and Maggie Shiffrar; Rutgers-Newark
We will show that contextual cues systematically trigger the perception of illusory rotation in optically ambiguous, moving homogeneous circles, in which visual cues to rotation are absent.

Pip and pop

Chris Olivers, Erik van der Burg, Jan Theeuwes and Adelbert Bronkhorst; Vrije Universiteit Amsterdam
In dynamic, cluttered displays, a spatially non-specific sound (“pip”) dramatically improves detection and causes “pop out” of a visual stimulus that is otherwise very difficult to spot.

The Phantom Pulse Effect Revisited

David Peterzel; UCSD, SDSU, VA Hospital
The “phantom pulse effect”, in which rapid mirror reversals of one’s body can evoke powerful and unusual visual-tactile sensations, has been optimized and will be demonstrated by two distinct methods.

Mega suppression (aka Granny Smith illusion)

Yury Petrov and Olga Meleshkevich; Northeastern University
When presented in the visual periphery, a brief change in an object’s color is completely masked if an object of matching color is simultaneously flashed nearby.

Strong percepts of motion through depth without strong percepts of position in depth

Bas Rokers and Thad Czuba; The University of Texas at Austin
Binocularly anticorrelated random dot displays yield poor or nonexistent percepts of depth, but motion through depth percepts for the same stimuli are relatively unaffected.

Perpetual collision, long-range argyles, and other illusions

Arthur Shapiro and Emily Knight; Bucknell University
We will show novel interactive visual effects. Perpetual collisions illustrate global motion percepts from local changes at boundaries. Long-range argyles show strong lightness/brightness differences over large distances.

Illusions that illustrate fundamental differences between foveal and peripheral vision

Emily Knight, Arthur Shapiro and Zhong-Lin Lu; Bucknell University and USC
We will present a series of new interactive displays designed to test the hypothesis that peripheral vision contains less precise spatial and temporal phase information than foveal vision.

Smile Maze: Real-time Expression Recognition Game

Jim Tanaka, Jeff Cockburn, Matt Pierce, Javier Movellan and Marni Bartlett; University of Victoria
Smile Maze is an interactive face training exercise, incorporating the Computer Expression Recognition Toolbox developed at UCSD, in which players must produce target facial expressions to advance.

The Rubber Pencil Illusion

Lore Thaler; The Ohio State University
I will demonstrate the Rubber Pencil Illusion. When a pencil is held loosely and wiggled up and down in a combination of translatory and rotational motion, it appears to bend.

Edgeless filling-in and paradoxical edge suppression

Christopher Tyler; Smith-Kettlewell Eye Research Institute
I will demonstrate that ‘edgeless’ afterimages (Gaussian blobs) appear much more readily than sharp-edged ones, which exhibit a prolonged appearance delay. This is the reverse of edge-based filling-in.

Perception of depth determines the illusory motion of subjective surfaces within a wire cube

Albert Yonas; University of Minnesota
When 3 sides of a concave wire cube are viewed monocularly in front of a surface with minimal texture, it most often appears convex. When the viewer moves, both the cube and the surface appear to rotate.

2008 Young Investigator – David Whitney

Dr. David Whitney

Department of Psychology and Center for Mind & Brain, University of California, Davis

Dr. David Whitney has been chosen as this year’s recipient of the VSS Young Investigator Award in recognition of the extraordinary breadth and quality of his research. Using behavioral and fMRI measures in human subjects, Dr. Whitney has made significant contributions to the study of motion perception, perceived object location, crowding and the visual control of hand movements. His research is representative of the diversity and creativity associated with the best work presented at VSS.

The YIA award will be presented at the Keynote Address on Saturday, May 10, at 7:00 pm.

 

2008 and Older

2008

Edward Callaway, Ph.D., Systems Neurobiology Laboratories, Salk Institute

Unraveling fine-scale and cell-type specificity of visual cortical circuits

Audio and slides from the 2008 Keynote Address are available on the Cambridge Research Systems website.

2007

Larry Abbott, Co-Director, Center for Theoretical Neuroscience, Columbia University School of Medicine

Enhancement of visual processing by spontaneous neural activity

Audio and slides from the 2007 Keynote Address are available on the Cambridge Research Systems website.

2006

David R. Williams, Ph.D., William G. Allyn Professor of Medical Optics; Director, Center for Visual Science, University of Rochester

The Limits of Human Vision

2005

Irene Pepperberg, Department of Psychology, Brandeis University

Surface material perception

Friday, May 9, 2008, 3:30 – 5:30 pm Royal Palm 6-8

Organizer: Roland W. Fleming (Max Planck Institute for Biological Cybernetics, Tübingen, Germany)

Presenters: Roland W. Fleming (Max Planck Institute for Biological Cybernetics, Tübingen, Germany), Melvyn A. Goodale (The University of Western Ontario), Isamu Motoyoshi (NTT Communication Science Laboratories), Daniel Kersten (University of Minnesota), Laurence T. Maloney (New York University), Edward H. Adelson (MIT)

Symposium Description

When we look at an everyday object we gain information about its location and shape and also about the material it is made of. The apparent color of an orange signals whether it is ripe; its apparent gloss and mesoscale texture inform us whether it is fresh. All of these judgments are visual judgments about the physical chemistry of surfaces, their material properties. In the past few years, researchers have begun to study the visual assessment of surface material properties, notably gloss and mesoscale texture (‘roughness’). Their research has been facilitated by advances in computer graphics, statistical methodology, and experimental methods and also by a growing realization that the visual system is best studied using stimuli that approximate the environment we live in. This symposium concerns recent research in material perception presented by six researchers in computer science, neuroscience and visual perception.

The successive mappings from surface property to retinal image to neural state to material judgments are evidently complex. Coming to understand how each step leads to the next is a fascinating series of challenges that crosses disciplines. An initial challenge is to work out how changes in surface material properties are mirrored in changes in retinal information, to identify the cues that could potentially signal a surface material property such as gloss or roughness.

A second challenge is to determine which cues are actually used by the visual system in assessing material properties. Of particular interest are recent claims that very simple image statistics contain considerable information relevant to assessing surface material properties. A third challenge concerns the neural encoding of surface properties and what we can learn from neuroimaging, a fourth, how variations in one surface material property affect perception of a second.

A final – and fundamental – challenge is to work out how the organism learns to use visual estimates of material properties to guide everyday actions – to decide which oranges to eat and which to avoid.

The symposium is likely to be of interest to a very wide range of researchers in computer vision, visual neuroscience and visual perception, especially the perception of color, lightness and texture.

Abstracts

Perception of materials that transmit light

Roland W. Fleming, Max Planck Institute for Biological Cybernetics, Tübingen, Germany

Many materials that we commonly encounter, such as ice, marmalade and wax, transmit some proportion of incident light. Broadly, these can be separated into transparent and translucent materials. Transparent materials (e.g. gemstones, water) are dominated by specular reflection and refraction, leading to a characteristic glistening, pellucid appearance. Translucent materials (e.g. marble, cheese) exhibit sub-surface light scattering, in which light bleeds diffusely through the object creating a distinctive soft or glowing appearance. Importantly, both types of material are poorly approximated by Metelli’s episcotister or other models of thin neutral density filters that have shaped our understanding of transparency to date. I will present various psychophysical and theoretical studies that we have performed using physically based computer simulations of light transport through solid transmissive objects. One important observation is that these materials do not exhibit many image features traditionally thought to be central to transparency perception (e.g. X-junctions). However, they compensate with a host of novel cues, which I will describe. I will discuss the perceptual scales of refractive index and translucency and report systematic failures of constancy across changes in illumination, 3D shape and context. I will discuss conditions under which various low-level image statistics succeed and fail to predict material appearance. I will also discuss the difficulties posed by transmissive materials for the estimation of 3D shape. Under many conditions, human vision appears to use simple image heuristics rather than correctly inverting the physics. I will show how this can be exploited to create illusions of material appearance.

How we see stuff: fMRI and behavioural studies of visual routes to the material properties of objects

Melvyn A. Goodale

Almost all studies of visual object recognition have focused on the geometric structure of objects rather than their material properties (as revealed by surface-based visual cues such as colour and texture). But recognizing the material from which an object is made can assist in its identification – and can also help specify the forces required to pick up that object. In two recent fMRI studies (Cant & Goodale, 2007; Cant et al., submitted), we demonstrated that the processing of object form engages more lateral regions of the ventral stream such as area LO whereas the processing of an object’s surface properties engages more medial regions in the ventral stream, particularly areas in the lingual, fusiform, and parahippocampal cortex. These neuroimaging data are consistent with observations in neurological patients with visual form agnosia (who can still perceive colour and visual texture) and patients with cerebral achromatopsia (who can still perceive form). The former often have lesions in area LO and the latter in more medial ventral-stream areas. In a behavioural study with healthy observers (Cant et al., in press), we showed that participants were able to ignore form while making surface-property classifications, and to ignore surface properties while making form classifications – even though we could demonstrate mutual interference between different form cues. Taken together, these findings suggest that the perception of material properties depends on medial occipito-temporal areas that are anatomically and functionally distinct from more lateral occipital areas involved in the perception of object shape.

Histogram skewness and glossiness perception

Isamu Motoyoshi

Humans can effortlessly judge the glossiness of natural surfaces with complex mesostructure. The visual system may utilize simple statistics of the image to achieve this ability (Motoyoshi, Sharan, Nishida & Adelson, 2007a; Motoyoshi, Nishizawa & Uchikawa, 2007b). We have shown that the perceived glossiness of various surfaces is highly correlated with the skewness (3rd-order moment) of the luminance histogram, and that this image property can be easily computed by the known early visual mechanisms. Our ‘skewness aftereffect’ demonstrated the existence of such skewness detectors and their link to the perceived glossiness. However, simple skewness detectors are not very sensitive to image spatial structures. They might not be able to distinguish a glossy surface from, say, a matte surface covered with white dust, whereas humans can. These unsolved issues and questions will be discussed together with our latest psychophysical data. Our glossiness study suggests that the perception of material properties may be generally based on simple ‘pictorial cues’ in the 2D image, rather than on complex inverse optics computations. This hypothesis is supported by the finding that simple image manipulation techniques can dramatically alter the apparent surface qualities including translucency and metallicity (Motoyoshi, Nishida & Adelson, 2005).
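
For readers unfamiliar with the statistic, the short sketch below shows what a skewness computation over luminance values looks like; it is a minimal illustration with invented luminance values, not the authors' stimuli or analysis code.

    # Minimal sketch: skewness (standardized 3rd moment) of a luminance distribution.
    # Illustrative only; the luminance values below are invented.
    import numpy as np

    def luminance_skewness(luminance):
        """Return the standardized third moment of a set of luminance values."""
        x = np.asarray(luminance, dtype=float).ravel()
        z = (x - x.mean()) / x.std()
        return np.mean(z ** 3)

    # A surface with sparse bright highlights on a darker body gives a positively
    # skewed histogram, which the abstract links to higher perceived glossiness.
    rng = np.random.default_rng(0)
    matte_like = rng.normal(0.5, 0.1, 10000)                    # roughly symmetric histogram
    glossy_like = np.concatenate([rng.normal(0.4, 0.08, 9500),
                                  rng.uniform(0.8, 1.0, 500)])  # bright specular tail
    print(luminance_skewness(matte_like))    # close to 0
    print(luminance_skewness(glossy_like))   # clearly positive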

Object lightness and shininess

Daniel Kersten

Under everyday viewing conditions, observers can determine material properties at a glance–such as whether an object has light or dark pigmentation, or whether it is shiny or matte. How do we do this? The first problem–lightness perception–has a long history in perception research, yet many puzzles remain, such as the nature of the neural mechanisms for representing and combining contextual information. The second–“shininess”–has a shorter history, and seems to pose even stiffer challenges to our understanding of how vision arrives at determinations of material properties. I will describe results from two approaches to these two problems. For the first problem, I will describe neuroimaging results showing that cortical MR activity in retinotopic areas, including V1, is correlated with context-dependent lightness variations, even when local luminance remains constant. Further, responses to these lightness variations, measured with a dynamic version of the Craik-O’Brien illusion, are resistant to a distracting attentional task. For the second problem, I will describe an analysis of natural constraints that determine human perception of shininess given surface curvature, and given object motion. One set of demonstrations shows that apparent shininess is a function of how statistical patterns of natural illumination interact with surface curvature. A second set of demonstrations illustrates how the visual system is sensitive to the way that specularities slide across a surface.

Multiple surface material properties, multiple visual cues

Laurence T. Maloney

Previous research on visual perception of surface material has typically focused on single material properties and single visual cues, with no consideration of possible interactions. I’ll first describe recent work in which we examined how multiple visual cues contribute to visual perception of a single material property, the roughness of 3D rendered surfaces, viewed binocularly. We found that the visual system made substantial use of visual cues that were in fact useless in estimating roughness under the conditions of our experiments. I’ll discuss what the existence of pseudo-cues implies about surface material perception. In a separate experiment, we used a conjoint measurement design to determine how observers represent perceived 3D texture (‘bumpiness’) and specularity (‘glossiness’) and modeled how each of these two surface material properties affects perception of the other. Observers made judgments of ‘bumpiness’ and ‘glossiness’ of surfaces that varied in both surface texture and specularity. We found that a simple additive model captures visual perception of texture and specularity and their interactions. We quantify how changes in each surface material property affect judgments of the other. Conjoint measurement is potentially a powerful tool for analyzing surface material perception in realistic environments.
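
For a rough picture of the additive model mentioned above, the sketch below fits judged glossiness as a sum of a texture term and a specularity term by least squares; the ratings grid and the fitting code are invented stand-ins, not the authors' data or procedure.

    # Sketch: fit an additive conjoint model  g(t, s) = f_T(t) + f_S(s)
    # to a grid of judgments. The ratings are invented for illustration.
    import numpy as np

    n_t, n_s = 4, 4                                   # texture and specularity levels
    ratings = np.array([[1, 2, 3, 4],
                        [2, 3, 4, 5],
                        [3, 4, 5, 6],
                        [3, 5, 6, 7]], dtype=float)   # judged "glossiness"

    # Design matrix: one indicator column per texture level and per specularity level.
    rows, y = [], []
    for i in range(n_t):
        for j in range(n_s):
            x = np.zeros(n_t + n_s)
            x[i], x[n_t + j] = 1.0, 1.0
            rows.append(x)
            y.append(ratings[i, j])
    X, y = np.array(rows), np.array(y)

    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # minimum-norm least-squares fit
    f_T, f_S = coef[:n_t], coef[n_t:]
    predicted = f_T[:, None] + f_S[None, :]
    print(np.round(predicted, 2))                     # compare with the ratings grid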

What is material perception good for?

Edward H. Adelson

What are the essential ways in which vision helps us interface with the physical world? What is the special role of material perception? One way to approach this question is: 1. Marry a vision scientist. 2. Have children with her. 3. Take videos of your children interacting with the world. 4. Study these videos, taking note of the essential tasks children must master. 5. Make your colleagues watch these videos. For some tasks (e.g., learning the alphabet or recognizing giraffes) material perception is relatively unimportant, but for others (e.g., eating, walking, getting dressed, playing outside, taking a bath) it is critical. The mastery of materials — the way they look, feel, and respond to manipulation — is one of the main tasks of childhood. Why, then, is so little known about material perception, as compared to, say, object recognition? One of the issues seems to be that material perception is embedded in procedural knowledge (knowing how to do), whereas object recognition is embedded in declarative knowledge (knowing how to describe). This suggests that material perception should be approached from multiple modalities including vision, touch, and motor control. It suggests that the brain might contain mechanisms devoted to the joint visual/haptic analysis of stiffness, slipperiness, roughness, and the like. In pursuit of this program, we have recently been showing our home videos to colleagues in other fields.

 

 

The past, present, and future of the written word

Friday, May 9, 2008, 3:30 – 5:30 pm Royal Palm 5

Organizers: Frederic Gosselin (Université de Montréal) and Bosco S. Tjan (University of Southern California)

Presenters: Susana T.L. Chung (University of Houston), Dennis M. Levi (University of California, Berkeley), Denis G. Pelli (New York University), Gordon E. Legge (University of Minnesota), Mark A. Changizi (Rensselaer Polytechnic Institute), Marlene Behrmann (Carnegie Mellon University)

Symposium Description

Gutenberg’s invention has democratized the written word: It is estimated that an average English reader will be exposed to over 100 million printed words before the age of 25. The scientific investigation of reading pioneered by Cattell in the 19th century was largely focused on single word recognition through the study of its cognitive, linguistic, and other high-level determinants (e.g., lexical frequency). Accordingly, in most of the influential theories of reading, the front-end visual processing remains unspecified, except with the assumption that it provides the abstract letter identities. This approach to reading greatly underestimates the complexity and the critical role of vision. Text legibility is strongly determined by the ease with which letters can be identified (Pelli et al., 2003), but it appears that standard fonts (e.g., Arial, Times) may be suboptimal as visual stimuli. For instance, the discriminability of a letter from the remainder of the alphabet, as indexed by identification accuracy with brief presentations, is inversely correlated with letter frequency, such that the letters most frequently encountered in texts are among the least discriminable. There is also a significant mismatch between the diagnostic spatial frequency spectra of letters and the human contrast sensitivity function, such that a large proportion of stimulus information is of poor use for the visual system (Chung et al., 2002; Majaj et al., 2002; Poder, 2003; Solomon & Pelli, 1994). Is there room for improvement? Previous attempts to improve reading speed in individuals with low vision by bandpassing word images in the mid to high spatial frequency range led to equivocal results (Fine & Peli, 1995). However, we have recently witnessed significant advances in our understanding of foveal and peripheral vision and the mechanisms for letter identification and reading. Can this novel knowledge be applied to the development of fonts optimized for normal and impaired visual systems (e.g., developmental, letter-by-letter, or deep dyslexia, macular degeneration, cataract, diabetic retinopathy)? This is the challenge that the organizers of this symposium are submitting to the participants. We hope that this will be the first step toward vision science leading the way to a second Gutenberg-like revolution: Instant speed reading for all!

Abstracts

Enhancing letter recognition and word reading performance

Susana T.L. Chung

This talk will provide an overview of our efforts in enhancing letter recognition and word reading performance in the normal periphery and in patients with central vision loss.

Letter recognition, crowding and reading in amblyopia

Dennis M. Levi, Denis G. Pelli and Shuang Song

Crowding, not letter recognition acuity, limits reading in the amblyopic visual system.

Legibility

Denis G. Pelli and Hannes F. Famira

“Legibility” means different things to visual scientists and type designers, and type design affects the different kinds of legibility in different ways.

The eyes have it: Sensory factors limit reading speed

Gordon E. Legge

Sensory constraints influence reading speed for normally sighted young adults, children, senior citizens, people with low vision and blind Braille readers.

The structures of letters and symbols throughout human history are selected to match those found in objects in natural scenes

Mark A. Changizi

New research supports the hypothesis that human visual signs look like nature, because that is what we have evolved over millions of years to be good at seeing.

Cognitive and neural mechanisms of face and word processing: Common principles

Marlene Behrmann and David Plaut

Through joint empirical studies (with normal and brain-damaged individuals) and computational investigations, we will argue that face and word recognition are mediated by a highly distributed and interactive cortical network whose organization is strongly shaped and modified by experience, rather than by discrete modules, each dedicated to a specific, narrowly defined function.

 

 

Action for perception: functional significance of eye movements for vision

Friday, May 9, 2008, 3:30 – 5:30 pm Orchid 1

Organizers: Anna Montagnini (Institut de Neurosciences Cognitives de la Méditerranée) and Miriam Spering (Justus-Liebig University Giessen, Germany)

Presenters: Maria Concetta Morrone (Facoltà di Psicologia, Università Vita-Salute S. Raffaele, Milano, Italy), Tirin Moore (Stanford University School of Medicine, USA), Michele Rucci (Boston University), Miriam Spering (Justus-Liebig University Giessen, Germany; New York University), Ziad Hafed (Systems Neurobiology Laboratory, Salk Institute), Wilson S. Geisler (University of Texas, Austin)

Symposium Description

When we view the world around us, our eyes are constantly in motion.

Different types of eye movements are used to bring the image of an object of interest onto the fovea, to keep it stable on this high-resolution area of the retina, or to avoid visual fading. Moment by moment, eye movements change the retinal input to the visual system of primates, thereby determining what we see. This critical role of eye movements is now widely acknowledged, and closely related to a research program termed ‘Active Vision’ (Findlay & Gilchrist, 2003).

While eye movements improve vision, they might also come at a cost.

Voluntary eye movements can impair perception of objects, space and time, and affect attentional processing. When using eye movements as a sensitive tool to infer visual and cognitive processing, these constraints have to be taken into account.

The proposed symposium responds to an increasing interest among vision scientists in using eye movements. The aims of the symposium are (i) to review and discuss findings related to perceptual consequences of eye movements, (ii) to introduce new methodological approaches that take into account these consequences, and (iii) to encourage vision scientists to focus on the dynamic interplay between vision and oculomotor behavior.

The symposium spans a wide area of research on visuomotor interaction, and brings to the table junior and senior researchers from different disciplines, studying different types of eye movements and perceptual behaviors. All speakers are at the forefront of research in vision and brain sciences and have made significant contributions to the understanding of the questions at hand, using a variety of methodological approaches.

Concetta Morrone (Università Vita-Salute, Italy) reviews findings on the perisaccadic compression of space and time, and provides a Bayesian model for these perceptual phenomena. Tirin Moore (Stanford University, USA) discusses the neural mechanisms of perisaccadic changes in visual and attentional processing. Michele Rucci (Boston University, USA) argues for an increase in spatial sensitivity due to involuntary miniature eye movements during fixation, which are optimized for the statistics of natural scenes.

Miriam Spering (University of Giessen, Germany) focuses on the relationship between smooth pursuit eye movements and the ability to perceive and predict visual motion. Ziad Hafed (Salk Institute, USA) discusses the effect of eye movements on object perception, pointing out an intriguing role of oculomotor control for visual optimization. Wilson Geisler (University of Texas, USA) uses ideal-observer analysis to model the selection of fixation locations across a visual scene, demonstrating the high degree of efficiency in human visuomotor strategy.

The topic of this symposium is at the same time of general interest and of specific importance. It should attract at least three groups of VSS attendees – those interested in low-level visual perception, in motor behavior, and those using eye movements as a tool. We expect to attract both students, seeking an introduction to the topic, and faculty, looking for up-to-date insights. It will be beneficial for VSS to include a symposium devoted to the dynamic and interactive link between visual perception and oculomotor behavior.

Abstracts

Perception of space and time during saccades: a Bayesian explanation for perisaccadic distortions

Maria Concetta Morrone, Paola Binda and David Burr

During a critical period around the time of saccades, briefly presented stimuli are grossly mislocalized in space and time, and both relative distances and durations appear strongly compressed. We investigated whether the Bayesian hypothesis of optimal sensory fusion could account for some of the mislocalizations, taking advantage of the fact that auditory stimuli are unaffected by saccades. For spatial localization, vision usually dominates over audition during fixation (the ‘ventriloquist effect’); but during perisaccadic presentations, auditory localization becomes relatively more important, so the mislocalized visual stimulus is seen closer to its veridical position. Both the perceived position of the bimodal stimuli and the time-course of spatial localization were well-predicted by assuming optimal Bayesian-like combination of visual and auditory signals. For time localization, acoustic signals always dominate. However, this dominance does not affect the dynamics of saccadic mislocalization, suggesting that audio-visual capture occurs after saccadic remapping. Our model simulates the time-course data, assuming that position in external space is given by the sum of retinal position and a noisy eye-position signal, obtained by integrating the output of two neural populations, one centered at the current point of gaze, the other centered at the future point of gaze. Only later is the output signal fused with the auditory signal, demonstrating that some saccadic distortions take place very early in visual analysis.

This model not only accounts for the bizarre perceptual phenomena caused by saccades, but provides a novel vision-based account of peri-saccadic remapping of space.
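
For readers new to this framework, the sketch below shows the standard reliability-weighted fusion rule for two Gaussian cues to a single location, which is the sense of "optimal Bayesian-like combination" used above; the function and the numbers are a generic illustration, not the authors' model or parameters.

    # Sketch of reliability-weighted fusion of a visual and an auditory position
    # estimate, each modeled as a Gaussian. All numbers are invented.
    def fuse(mu_v, sigma_v, mu_a, sigma_a):
        """Posterior mean and sd for two independent Gaussian cues about one location."""
        w_v = (1 / sigma_v ** 2) / (1 / sigma_v ** 2 + 1 / sigma_a ** 2)
        mu = w_v * mu_v + (1 - w_v) * mu_a
        sigma = (1 / sigma_v ** 2 + 1 / sigma_a ** 2) ** -0.5
        return mu, sigma

    # During fixation vision is precise, so it dominates (ventriloquist-like):
    print(fuse(mu_v=0.0, sigma_v=1.0, mu_a=3.0, sigma_a=5.0))
    # Perisaccadically visual uncertainty grows, so the fused estimate is pulled
    # toward the (veridical) auditory location:
    print(fuse(mu_v=0.0, sigma_v=6.0, mu_a=3.0, sigma_a=5.0))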

Neural mechanisms and correlates of perisaccadic changes in visual perception

Tirin Moore

The changes in visual perception that accompany saccadic eye movements, including shifts of attention and saccadic suppression, are well documented in psychophysical studies. However, the neural basis of these changes is poorly understood. Recent evidence suggests that interactions of oculomotor mechanisms with visual cortical representations may provide a basis for modulations of visual signals and visual perception described during saccades. I will discuss some recent neurophysiological experiments that address the impact of oculomotor mechanisms, and of saccade preparation, on the filtering of visual signals within cortex. Results from these experiments relate directly to the observed enhancement and suppression of visual perception during saccades.

Fixational eye movements, natural image statistics, and fine spatial vision

Michele Rucci

During visual fixation, small eye movements continually displace the stimulus on the retina. It is known that visual percepts tend to fade when retinal image motion is eliminated in the laboratory. However, it has long been debated whether, during natural viewing, fixational eye movements have other functions besides preventing the visual scene from fading. In this talk, I will summarize a theory for the existence of fixational eye movements, which links the physiological instability of visual fixation to the statistics of natural scenes. According to this theory, fixational eye movements contribute to the neural encoding of natural scenes by attenuating input redundancy and emphasizing the elements of the stimulus that cannot be predicted from the statistical properties of natural images. To test some of the predictions of this theory, we developed a new method of retinal image stabilization, which enables selective elimination of the motion of the retinal image during natural intersaccadic fixation. We show that fixational eye movements facilitate the discrimination of high spatial frequency patterns masked by low spatial frequency noise, as predicted by our theory.

These results suggest a contribution of fixational eye movements to the processing of spatial detail, a proposal originally put forward by Hering in 1899.

Motion perception and prediction during smooth pursuit eye movements

Miriam Spering, Alexander C. Schütz and Karl R. Gegenfurtner

Smooth pursuit eye movements are slow, voluntary movements of the eyes that serve to hold the retinal image of a moving object close to the fovea. Most research on the interaction of visual perception and oculomotor action has focused on the question of what visual input drives the eye best, and what this tells us about visual processing for eye movement control. Here we take a different route and discuss findings on perceptual consequences of pursuit eye movements. Our recent research has particularly focused on the interaction between pursuit eye movements and motion sensitivity in different tasks and visual contexts. (i) We report findings from a situation that particularly requires the dissociation between retinal image motion due to eye movements and retinal object motion. A moving object has to be tracked across a dynamically changing moving visual context, and object motion has to be estimated. (ii) The ability to predict the trajectory of a briefly presented moving object is compared during pursuit and fixation for different target presentation durations. (iii) We compare the sensitivity to motion perturbations in the peripheral visual context during pursuit and fixation. Results imply that pursuit consequences are optimally adapted to contextual requirements.

Looking at visual objects

Ziad Hafed

Much of our understanding about the brain mechanisms for controlling how and where we look derives from minimalist behavioral tasks relying on simple spots of light as the potential targets. However, visual targets in natural settings are rarely individual, point-like sources of light. Instead, they are typically larger visual objects that may or may not contain explicit features to look at. In this presentation, I will argue that the use of more complex, and arguably more “natural”, visual stimuli than is commonly used in oculomotor research is important for learning the extent to which eye movements can serve visual perception. I will provide an example of this by describing a behavioral phenomenon in which the visual system consistently fails in interpreting a retinal stimulus as containing coherent objects when this stimulus is not accompanied by an ongoing eye movement. I will then shed light on an important node in the brain circuitry involved in the process of looking at visual objects. Specifically, I will show that the superior colliculus (SC), best known for its motor control of saccades, provides a neural “pointer” for the location of a visual object, independent of the object’s individual features and distinct from the motor commands associated with this brain structure. Such a pointer allows the oculomotor system to precisely direct gaze, even in the face of large extended objects.

More importantly, because the SC also provides ascending signals to sensory areas, such a pointer may also be involved in modulating object-based attention and perception.

Mechanisms of fixation selection evaluated using ideal observer analysis

Wilson S. Geisler

The primate visual system combines a wide field of view with a high resolution fovea and uses saccadic eye movements to direct the fovea at potentially relevant locations in visual scenes. This is a sensible design for a visual system with limited neural resources. However, to be effective this design requires sophisticated task-dependent mechanisms for selecting fixation locations. I will argue that in studying the brain mechanisms that control saccadic eye movements in specific tasks, it can be very useful to consider how fixations would be selected by an ideal observer. Such an ideal-observer analysis provides: (i) insight into the information processing demands of the task, (ii) a benchmark against which to evaluate the actual eye movements of the organism, (iii) a starting point for formulating hypotheses about the underlying brain mechanisms, and (iv) a benchmark against which to evaluate the efficiency of hypothesized brain mechanisms. In making the case, I will describe recent examples from our lab concerning naturalistic visual-search tasks and scene-encoding tasks.
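
As a cartoon of the ideal-observer logic (a simplified stand-in of my own, not the actual ideal-searcher model), one can score each candidate fixation by how much of the current posterior over target locations it would see well, given an assumed visibility falloff with eccentricity:

    # Cartoon of ideal fixation selection: choose the fixation that maximizes the
    # expected probability of detecting the target, given a posterior over target
    # locations and an assumed visibility falloff. Numbers are illustrative only.
    import numpy as np

    locations = np.arange(0, 20)                   # candidate target locations (1-D for simplicity)
    posterior = np.ones_like(locations, dtype=float)
    posterior[12:16] = 5.0                         # prior evidence favors this region
    posterior /= posterior.sum()

    def visibility(fixation, loc, halfwidth=3.0):
        """Assumed detectability of a target at loc while fixating at fixation."""
        return np.exp(-0.5 * ((loc - fixation) / halfwidth) ** 2)

    def expected_detection(fixation):
        return np.sum(posterior * visibility(fixation, locations))

    best_fixation = max(locations, key=expected_detection)
    print(best_fixation)                           # lands near the high-posterior region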

 

 

Bayesian models applied to perceptual behavior

Friday, May 9, 2008, 3:30 – 5:30 pm Royal Palm 4

Organizer: Peter Battaglia (University of Minnesota)

Presenters: Alan Yuille (University of California Los Angeles), David Knill (University of Rochester), Paul Schrater (University of Minnesota), Tom Griffiths (University of California, Berkeley), Konrad Koerding (Northwestern University), Peter Battaglia (University of Minnesota)

Symposium Description

This symposium will provide information and methodological tools for researchers who are interested in modeling perception as probabilistic inference, but are unfamiliar with the practice of such techniques. In the last 20 years, scientists characterizing perception as Bayesian inference have produced a number of robust models that explain observed perceptual behaviors and predict new, unobserved behaviors. Such successes are due to the formal, universal language of Bayesian models and the powerful hypothesis-evaluation tools they allow. Yet many researchers who attempt to build and test Bayesian models feel overwhelmed by the potentially steep learning curve and abandon their attempts after stumbling over unintuitive obstacles. It is important that those scientists who recognize the explanatory power of Bayesian methods and wish to implement the framework in their own research have the tools, and know-how to use them, at their disposal. This symposium will provide a gentle introduction to the most important elements of Bayesian models of perception, while avoiding the nuances and subtleties that are not critical. The symposium will be geared toward senior faculty and students alike, and will require no technical prerequisites to understand the major concepts, and only knowledge of basic probability theory and experimental statistics to apply the methods. Those comfortable with Bayesian modeling may find the symposium interesting, but the target audience will be the uninitiated.

The formalism of Bayesian models allows a principled description of the processes that allow organisms to recover scene properties from sensory measurements, thereby enabling a clear statement of experimental hypotheses and their connections with related theories. Many people believe Bayesian modeling is primarily for fitting unpleasant data using a prior: this is a misconception that will be dealt with! In previous attempts to correct such notions, most instruction about probabilistic models of perception falls into one of two categories: qualitative, abstract description, or quantitative, technical application. This symposium constitutes a hybrid of these categories by phrasing qualitative descriptions in quantitative formalism. Intuitive and familiar examples will be used so the connection between abstract and practical issues remains clear.

The goals of this symposium are two-fold: to present the most current and important ideas involving probabilistic perceptual models, and provide hands-on experience working with them. To accomplish these goals, our speakers will address topics such as the history and motivation for probabilistic models of perception, the relation between sensory uncertainty and probability-theoretic representations of variability, the brain’s assumptions about how the world causes sensory measurements, how to investigate the brain’s internal knowledge of probability, framing psychophysical tasks as perceptually-guided decisions, and hands-on modeling tutorials presented as Matlab scripts that will be made available for download beforehand so those with laptops can follow along. Each talk will link the conceptual material to the scientific interests of the audience by presenting primary research and suggesting perceptual problems that are ripe for the application of Bayesian methods.

Abstracts

Modeling Vision as Bayesian Inference: Is it Worth the Effort?

Alan Yuille

The idea of perception as statistical inference grew out of work in the 1950s in the context of a general theory of auditory and visual signal detectability. Signal detection theory from the start used concepts and tools from Bayesian Statistical Decision theory that are with us today: 1) a generative model that specifies the probability of sensory data conditioned on signal states; 2) prior probabilities of those states; 3) the utility of decisions or actions as they depend on those states. By the 1990s, statistical inference models were being extended to an increasingly wider set of problems, including object and motion perception, perceptual organization, attention, reading, learning, and motor control. These applications have relied in part on the development of new concepts and computational methods to analyze and model more realistic visual tasks. I will provide an overview of current work, describing some of the success stories. I will try to identify future challenges for testing and modeling theories of visual behavior–research that will require learning, and computing probabilities on more complex, structured representations.
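
The three ingredients listed above fit in a few lines; the toy sketch below (my own illustration with invented numbers, not material from the talk) combines a likelihood, a prior, and a utility table into a single decision over two possible signal states:

    # Toy Bayesian decision: prior over states, likelihood of the observed data
    # given each state, and a utility table over (action, state). Numbers invented.
    import numpy as np

    states = ["signal absent", "signal present"]
    actions = ["respond absent", "respond present"]
    prior = np.array([0.8, 0.2])            # 2) prior probabilities of the states
    likelihood = np.array([0.1, 0.7])       # 1) p(observed data | state)
    utility = np.array([[ 1.0, -5.0],       # 3) utility of "respond absent" per state
                        [-1.0,  2.0]])      #    utility of "respond present" per state

    posterior = prior * likelihood
    posterior /= posterior.sum()            # p(state | data)
    expected_utility = utility @ posterior  # one value per action
    print(posterior, expected_utility, actions[int(np.argmax(expected_utility))])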

Bayesian modeling in the context of robust cue integration

David Knill

Building Bayesian models of visual perception is becoming increasingly popular in our field. Those of us who make a living constructing and testing Bayesian models are often asked the question, "What good are models that can be fit to almost any behavioral data?" I will address this question in two ways: first by acknowledging the ways in which Bayesian modeling can be misused, and second by outlining how Bayesian modeling, when properly applied, can enhance our understanding of perceptual processing. I will use robust cue integration as an example to illustrate some ways in which Bayesian modeling helps organize our understanding of the factors that determine perceptual performance, makes predictions about performance, and generates new and interesting questions about perceptual processes. Robust cue integration characterizes the problem of how the brain integrates information from different sensory cues that have unnaturally large conflicts. To build a Bayesian model of cue integration, one must explicitly model the world processes that give rise to such conflicting cues. When combined with models of internal sensory noise, such models predict behaviors that are consistent with human performance. While we can "retro-fit" the models to the data, the real test of our models is whether they agree with what we know about sensory processing and the structure of the environment (though mismatches may invite questions ripe for future research). At their best, such models help explain how perceptual behavior relates to the computational structure of the problems observers face and the constraints imposed by sensory mechanisms.

Bayesian models for sequential decisions

Paul Schrater

Performing common perceptually-guided actions, like saccades and reaches, requires our brains to overcome uncertainty about the objects and geometry relevant to our actions (world state), potential consequences of our actions, and individual rewards attached to these consequences. A principled approach to such problems is termed "stochastic-optimal control", and uses Bayesian inference to simultaneously update beliefs about the world state, action consequences, and individual rewards. Rational agents seek rewards, and since rewards depend on the consequences of actions, and those consequences depend on the world state, updating beliefs about all three is necessary to acquire the most reward possible.

Consider the example of reaching to grasp your computer mouse while viewing your monitor. Some strategies and outcomes for guiding your reach include: 1.) keeping your eyes fixed, moving quickly, and probably missing the mouse, 2.) keeping your eyes fixed, moving slowly, and wasting time reaching, 3.) turning your head, staring at the mouse, wasting time moving your head, or 4.) quickly saccading toward the mouse, giving you enough positional information to make a fast reach without wasting much time. This example highlights the kind of balance perceptually-guided actions strike thousands of times a day: scheduling information-gathering and action-execution when there are costs (i.e. time, missing the target) attached. Using the language of stochastic-optimal control, tradeoffs like these can be formally characterized and explain otherwise opaque behavioral decisions. My presentation will introduce stochastic-optimal control theory, and show how applying the basic principles offers a powerful framework for describing and evaluating perceptually-guided action.
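
The mouse-reaching trade-off can be made concrete with a tiny expected-utility comparison; the probabilities, durations, and costs below are invented purely to illustrate the bookkeeping, not values from the talk:

    # Sketch: score each strategy by expected reward minus a time cost.
    # All probabilities, times, and cost parameters are invented.
    strategies = {
        "eyes fixed, fast reach":         {"p_success": 0.40, "time_s": 0.6},
        "eyes fixed, slow reach":         {"p_success": 0.90, "time_s": 2.0},
        "turn head, stare, then reach":   {"p_success": 0.95, "time_s": 2.5},
        "quick saccade, then fast reach": {"p_success": 0.90, "time_s": 0.9},
    }
    REWARD, TIME_COST_PER_S = 1.0, 0.3

    def expected_utility(p_success, time_s):
        return p_success * REWARD - TIME_COST_PER_S * time_s

    for name, s in strategies.items():
        print(f"{name:32s} EU = {expected_utility(**s):.2f}")
    # With these numbers the saccade-then-reach strategy wins, mirroring option 4 above.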

Exploring subjective probability distributions using Bayesian statistics

Tom Griffiths

Bayesian models of cognition and perception express the expectations of learners and observers in terms of subjective probability distributions – priors and likelihoods. This raises an interesting psychological question: if human inferences adhere to the principles of Bayesian statistics, how can we identify the subjective probability distributions that guide these inferences? I will discuss two methods for exploring subjective probability distributions. The first method is based on evaluating human judgments against distributions provided by the world. The second substitutes people for elements in randomized algorithms that are commonly used to generate samples from probability distributions in Bayesian statistics. I will show how these methods can be used to gather information about the priors and likelihoods that seem to characterize human judgments.
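
One way to picture the second method is a Markov chain in which a human two-alternative choice plays the role of the acceptance step; the sketch below simulates the idea with a stand-in observer whose choices follow an assumed internal Gaussian prior, so the chain's samples come to reflect that prior. It illustrates the general approach only, not the speaker's specific procedure.

    # Sketch: a Metropolis-style chain in which the "accept" step is a two-alternative
    # choice, made here by a simulated observer with an assumed Gaussian subjective prior.
    import math, random

    def subjective_probability(x, mu=10.0, sigma=2.0):      # assumed internal distribution
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

    def observer_picks_proposal(current, proposal):
        """Choose each option with probability proportional to its subjective
        probability (Barker-style rule), standing in for a person's 2AFC response."""
        p = subjective_probability(proposal)
        q = subjective_probability(current)
        return random.random() < p / (p + q)

    random.seed(1)
    x, samples = 0.0, []
    for _ in range(20000):
        proposal = x + random.gauss(0.0, 1.0)                # symmetric proposal
        if observer_picks_proposal(x, proposal):
            x = proposal
        samples.append(x)
    kept = samples[2000:]                                    # discard burn-in
    print(sum(kept) / len(kept))                             # approaches the assumed prior mean (10)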

Causal inference in multisensory perception

Konrad Koerding

Perceptual events derive their significance to an animal from their meaning about the world, that is, from the information they carry about their causes. The brain should thus be able to efficiently infer the causes underlying our sensory events. Here we use multisensory cue combination to study causal inference in perception. We formulate an ideal-observer model that infers whether two sensory cues originate from the same location and that also estimates their location(s). This model accurately predicts the nonlinear integration of cues by human subjects in two auditory-visual localization tasks. The results show that indeed humans can efficiently infer the causal structure as well as the location of causes. By combining insights from the study of causal inference with the ideal-observer approach to sensory cue combination, we show that the capacity to infer causal structure is not limited to conscious, high-level cognition; it is also performed continually and effortlessly in perception.
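
For orientation, here is a bare-bones version of this kind of common-cause inference, with the source locations integrated out numerically; the Gaussian likelihoods, prior width, and parameter values are assumptions chosen for illustration, not the published model's settings.

    # Sketch: posterior probability that a visual and an auditory cue share one cause.
    # Gaussian noise and a Gaussian prior over locations are assumed; values invented.
    import numpy as np

    def gauss(x, mu, var):
        return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

    def p_common(x_v, x_a, sig_v=1.0, sig_a=2.0, sig_p=10.0, prior_common=0.5):
        grid = np.linspace(-40.0, 40.0, 4001)          # possible source locations
        ds = grid[1] - grid[0]
        prior_loc = gauss(grid, 0.0, sig_p ** 2)
        # C = 1: a single source generates both cues (integrate the source out).
        like_c1 = np.sum(gauss(x_v, grid, sig_v ** 2) * gauss(x_a, grid, sig_a ** 2) * prior_loc) * ds
        # C = 2: each cue comes from its own independently drawn source.
        like_v = np.sum(gauss(x_v, grid, sig_v ** 2) * prior_loc) * ds
        like_a = np.sum(gauss(x_a, grid, sig_a ** 2) * prior_loc) * ds
        like_c2 = like_v * like_a
        return prior_common * like_c1 / (prior_common * like_c1 + (1 - prior_common) * like_c2)

    print(p_common(x_v=0.5, x_a=1.0))   # nearby cues: a common cause is likely
    print(p_common(x_v=0.5, x_a=8.0))   # discrepant cues: a common cause is unlikely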

How to: Applying a Bayesian model to a perceptual question

Peter Battaglia

Bayesian models provide a powerful language for describing and evaluating hypotheses about perceptual behaviors. When implemented properly they allow strong conclusions about the brain’s perceptual solutions in determining what caused incoming sensory information. Unfortunately, constructing a Bayesian model may seem challenging and perhaps ‘not worth the trouble’ to those who are not intimately familiar with the practice. Even with a clear Bayesian model, it is not always obvious how experimental data should be used to evaluate the model’s parameters. This presentation will demystify the process by walking through the modeling and analysis using a simple, relevant example of a perceptual behavior.

First I will introduce a familiar perceptual problem and describe the choices involved in formalizing it as a Bayesian model. Next, I will explain how standard experimental data can be exploited to reveal model parameter values and how the results of multiple experiments may be unified to fully evaluate the model. The presentation will be structured as a tutorial that will use Matlab scripts to simulate the generation of sensory data, the brain’s hypothetical inference procedure, and the quantitative analysis of this hypothesis. The scripts will be made available beforehand so the audience has the option of downloading and following along to enhance the hands-on theme. My goal is that interested audience members will be able to explore the scripts at a later time to familiarize themselves more thoroughly with a tractable modeling and analysis process.
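
In the same spirit as that simulate-infer-analyze loop (a minimal stand-in of my own, not the tutorial's actual Matlab scripts), the sketch below simulates noisy measurements, has a hypothetical observer report posterior means, and then recovers the observer's sensory noise level from the simulated responses by maximum likelihood:

    # Minimal stand-in for a "simulate data / model the inference / fit parameters" loop.
    # The task, the prior, and all parameter values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    SIGMA_TRUE, PRIOR_MU, PRIOR_SIGMA = 2.0, 0.0, 5.0

    # 1) Generative model: the world draws stimuli, the observer gets noisy measurements.
    stimuli = rng.normal(PRIOR_MU, PRIOR_SIGMA, 500)
    measurements = stimuli + rng.normal(0.0, SIGMA_TRUE, 500)

    # 2) Hypothesized inference: the observer reports the posterior mean, so noisy
    #    measurements are pulled toward the prior mean.
    def posterior_mean(m, sigma):
        w = (1 / sigma ** 2) / (1 / sigma ** 2 + 1 / PRIOR_SIGMA ** 2)
        return w * m + (1 - w) * PRIOR_MU

    responses = posterior_mean(measurements, SIGMA_TRUE)

    # 3) Analysis: treat the responses as data and recover the sensory noise level
    #    that best explains them, by scanning candidate values of sigma.
    def neg_log_likelihood(sigma):
        w = (1 / sigma ** 2) / (1 / sigma ** 2 + 1 / PRIOR_SIGMA ** 2)
        pred_mean = w * stimuli + (1 - w) * PRIOR_MU     # model prediction per trial
        pred_sd = w * sigma                              # predicted response variability
        return np.sum(0.5 * ((responses - pred_mean) / pred_sd) ** 2 + np.log(pred_sd))

    candidates = np.linspace(0.5, 5.0, 91)
    best = candidates[np.argmin([neg_log_likelihood(s) for s in candidates])]
    print(best)                                          # should land close to SIGMA_TRUE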

 

 

Visual Memory and the Brain

Friday, May 9, 2008, 1:00 – 3:00 pm Orchid 1

Organizer: Marian Berryhill (University of Pennsylvania)

Presenters: Lynn C. Robertson (University of CA, Berkeley & VA), Yaoda Xu (Yale University), Yuhong Jiang (University of Minnesota), Vincent Walsh (University College London), Marian Berryhill (University of Pennsylvania)

Symposium Description

Focus:

Visual memory describes the relationship between perceptual processing and the storage and retrieval of the resulting neural representations. Visual memory operates over a broad time range – from retaining scenes across eye movements to holding, for years, the information needed to visually navigate to a previously visited location or to recognize an old friend. How does the brain encode, store, and retrieve these representations? What neural mechanism limits the capacity and resolution of visual memory? Do the same neural areas participate in short-term and long-term visual memory? Do particular neural regions, such as the intraparietal sulcus, participate only in visual memory, or do they have a more general role in attentionally demanding tasks such as binding and multi-object tracking? Are different brain areas critically involved in storing different visual materials, such as simple colors or complex scenes? These topics have only begun to be studied; the purpose of this symposium is to discuss the latest research and current problems facing our understanding of visual memory. Investigators in this area of research employ a variety of techniques such as the lesion method (neuropsychology and TMS), neuroimaging (fMRI, ERP), and behavioral studies.

Timeliness:

The finding that the intraparietal sulcus may limit the capacity of visual short-term memory is an example of a topic that has been published in prominent journals, thereby fueling new studies and generating broad interest. Moreover, this general topic of the neural basis of visual memory relates to several other timely topics in the visual cognition literature including: neural areas involved in multi-object tracking, attention, scene perception, navigation, and long-term memory.

Audience:

This symposium would be accessible to a broad VSS audience, as it includes both perceptual and cognitive processing. Furthermore, the speakers come from a variety of methodological backgrounds, including neuropsychology and neuroimaging. Both students and seasoned researchers will find it of interest. The audience will gain a better understanding of visual cognition and of current methodological techniques being used to understand brain-behavior relationships.

Abstracts

Forms of visual representation in unattended space: neuropsychological evidence

Lynn C. Robertson, Thomas Van Vleet, UC Berkeley, VA

Although there is a great deal of evidence that undetected information can affect subsequent performance (e.g., priming), the nature of the memory representation that produces this effect is not well understood. In a series of studies with patients who suffer from left sided neglect and/or extinction from right hemisphere damage, we show that feature displays prime a subsequent central target equally well whether the features were more or less likely to be detected. Conversely, conjunction displays prime more when they are more likely to be detected. These results will be discussed as they relate to visual storage of undetected stimuli and how memory representations differ with attention.

Dissociable parietal mechanisms supporting visual short-term memory for objects

Yaoda Xu, Yale University

In this talk, I will show that visual short-term memory (VSTM) storage is mediated by distinctive posterior brain mechanisms, such that VSTM capacity is determined both by a fixed number of objects and by object complexity. These findings not only advance our understanding of the neural mechanisms underlying VSTM, but also have interesting implications for theories of visual object perception.

Speaker 3

Yuhong Jiang, U. Minnesota

Dr. Jiang will discuss behavioral and fMRI data on visual short-term memory, with an emphasis on synthesis of findings.

Migrating Memories: Remembering what comes next

Vincent Walsh, UCL

Memory, along with attention, imagery, learning, getting grants and awareness, is sometimes assumed to be a high-level function. There is, however, an increasing "migration" of functions from higher to lower areas as we ask more difficult questions of the sensory cortex. For example, what were once considered "cognitive" contours with neural correlates in IT can be inferred from the responses of V1 or V2 neurons, and visual imagery and visual awareness require V1. It is becoming increasingly clear that a similar migration of complexity is occurring in memory and we can now rightly speak about sensory memory in visual cortex. I will discuss experiments which explore the role of visual areas in short-term memory and visual priming. Specifically I will discuss the effects of interfering with memory processes by applying TMS over visual area V5, the frontal eye fields and the parietal cortex.

When was I Where?

Marian E. Berryhill & Ingrid R. Olson, U. Pennsylvania, Temple University

The perceptual deficits following dorsal stream damage (i.e., hemispatial neglect and Balint's syndrome) are well known. However, accumulating evidence suggests that these same cortical regions are involved in processing 'when' as well as 'where'. In a series of studies examining unilateral and bilateral parietal patients, we have observed impairments in visual and spatial working memory, as well as in autobiographical and constructive memory. These data suggest that these patients have cognitive deficits that parallel their perceptual deficits. In this talk, we will discuss the effects of dorsal stream damage on visual perception as well as the effects on stored representations in short-term and long-term memory.

 

 

Crowding

Friday, May 9, 2008, 1:00 – 3:00 pm Royal Palm 5

Organizer: Denis G. Pelli (New York University)

Presenters: Patrick Cavanagh (Harvard University and LPP, Université Paris Descartes), Brad C. Motter (Veterans Affairs Medical Center and SUNY Upstate Medical University), Yury Petrov (Northeastern University), Joshua A. Solomon (City University, London), Katharine A. Tillman (New York University)

Symposium Description

Crowding is a breakdown of object recognition. It happens when the visual system inappropriately integrates features over too large an area, coming up with an indecipherable jumble instead of an object. An explosion of new experiments exploits crowding to study object recognition by breaking it. The five speakers will review past work, providing a tutorial introduction to crowding, and will describe the latest experiments seeking to define the limits of crowding and object recognition. The general question, which goes by names including “integration”, “binding”, “segmentation”, “grouping”, “contour integration”, and “selective attention”, is a burning issue for most members of VSS.

Abstracts

Crowding: When grouping goes wrong

Patrick Cavanagh

Early visual processes work busily to construct accurate representations of edges, colors and other features that appear within their receptive fields, dutifully posting their details across the retinotopic landscape of early cortices. Then the fat hand of attention makes a grab at a target and comes up with an indecipherable stew of everything in the region. Well, that’s one model of crowding. There are others. Whatever the model of crowding, it is clear that the phenomenon provides a rare window onto the mid-level process of feature integration. I will present results on nonretinotopic crowding and anticrowding that broaden the range of phenomena we include in the category of crowding.

Correlations between visual search and crowding

Brad C. Motter

Visual search through simple stimulus arrays can be described as a linear function of the angular separation between the target and surrounding items after scaling for cortical magnification. Maximum reading speeds as a function of eccentricity also appear to be bound by a cortical magnification factor. If crowding can explain these visual behaviors, what is the role of focal attention in these findings?
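To make the scaling explicit (an illustrative formulation; the particular approximation and its parameters are not taken from the abstract), cortical magnification at eccentricity E is often approximated as

    M(E) ≈ M0 / (1 + E / E2)

where M0 is the foveal magnification and E2 is the eccentricity at which magnification falls to half its foveal value. An angular target-flanker separation Δ at eccentricity E then corresponds to a cortical separation of roughly Δ · M(E), and the claim above is that search performance is a linear function of that cortically scaled separation.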

Locus of spatial attention determines inward-outward anisotropy in crowding

Yury Petrov

I show that the locus of spatial attention strongly affects crowding, inducing inward-outward anisotropy in some conditions, removing or reversing it in others. It appears that under normal viewing conditions attention is mislocalized outward of the target, which may explain stronger crowding by an outward mask.

Context-induced acuity loss for tilt: If it is not crowding, what is it?

Joshua A. Solomon and Michael J. Morgan

When other objects are nearby, it becomes more difficult to determine whether a particular object is tilted, for example, clockwise or anti-clockwise of vertical. “Crowding” is similar: when other letters are nearby, it becomes more difficult to determine the identity of a particular letter or whether it is, for example, upside down or mirror-reversed. There is one major difference between these two phenomena. The former occurs with big objects in the centre of the visual field; the latter does not. We call the former phenomenon “squishing.” Two mechanisms have been proposed to explain it: lateral inhibition and stochastic re-calibration. Simple models based on lateral inhibition cannot explain why nearby objects do not impair contrast discrimination as well as tilt acuity, but a new comparison of acuities measured with the Method of Single Stimuli and 2-Alternative Forced-Choice does not support models based on stochastic re-calibration. Lateral inhibition deserves re-consideration. Network simulations suggest that many neurones capable of contrast discrimination have little to contribute towards tilt identification and vice versa.

The uncrowded window for object recognition

Katharine A. Tillman and Denis G. Pelli

It has been known throughout history that we cannot see things that are too small. However, it is now emerging that vision is usually not limited by object size, but by spacing. The visual system recognizes an object by detecting and then combining its features. When objects are too close together, the visual system combines features from them all, producing a jumbled percept. This phenomenon is called crowding. Critical spacing is the smallest distance between objects that avoids crowding. We review the explosion of studies of crowding (in grating discrimination, letter and face recognition, visual search, and reading) to reveal a universal law, the Bouma law: Critical spacing is proportional to distance from fixation, depending only on where (not what) the object is. Observers can identify objects only in the uncrowded window within which object spacing exceeds critical spacing. The uncrowded window limits reading rate and explains why we can recognize a face only if we look directly at it. Visual demonstrations allow the audience to verify key experimental results.
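Stated as an equation (a restatement of the proportionality above for clarity; the value of the constant is not given in the abstract, though values near 0.5 are commonly reported in the crowding literature):

    s_critical ≈ b · φ

where φ is the target’s distance from fixation (its eccentricity), s_critical is the smallest center-to-center spacing between the target and its neighbors at which crowding is avoided, and b is the Bouma constant.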

 

 

Perceptual expectations and the neural processing of complex images

Friday, May 9, 2008, 1:00 – 3:00 pm Royal Palm 6-8

Organizer: Bharathi Jagadeesh (University of Washington)

Presenters: Moshe Bar (Harvard Medical School), Bharathi Jagadeesh (University of Washington), Nicholas Furl (University College London), Valentina Daelli (SISSA), Robert Shapley (New York University)

Symposium Description

The processing of complex images occurs within the context of prior expectations and of current knowledge about the world. A clue about an image, “think of an elephant”, for example, can cause an otherwise nonsensical image to transform into a meaningful percept. The informative clue presumably activates the neural substrate of an expectation about the scene that allows the visual stimulus representation to be more readily interpreted. In this symposium we aim to discuss the neural mechanisms that underlie the use of clues and context to assist in the interpretation of ambiguous stimuli. The work of five laboratories, using imaging, single-unit recording, MEG, psychophysics, and network models of visual processes, all shows evidence of the impact of prior knowledge on the processing of visual stimuli.

In the work of Bar, we see evidence that a short-latency neural response may be induced in higher-level cortical areas by complex signals traveling through a fast visual pathway. This pathway may provide the neural mechanism that modifies the processing of visual stimuli as they stream through the brain. In the work of Jagadeesh, we see a potential effect of that modified processing: neural selectivity in inferotemporal cortex is sufficient to explain performance in a classification task with difficult-to-classify complex images, but only when the images are evaluated in a particular framed context: Is the image A or B (where A and B are photographs, for example a horse and a giraffe)? In the work of Furl, human subjects were asked to classify individual exemplars of faces along a particular dimension (emotion), and had prior experience with the images in the form of an adapting stimulus. In this context, classification is shifted away from the adapting stimulus. Simultaneously recorded MEG activity shows evidence of a reentrant signal, induced by the prior experience of the prime, that could explain the shift in classification. In the work of Treves, we see examples of networks that reproduce the observed late convergence of neural activity onto the response to an image stored in memory, and that can simulate mechanisms possibly underlying predictive behavior. Finally, in the work of Shapley, we see that simple cells in layer 2/3 of V1 (a major input layer for intra-cortical connections) paradoxically show dynamic nonlinearities.

The presence of a dynamic nonlinearity in the responses of V1 simple cells indicates that first-order analyses often capture only a fraction of neuronal behavior, a consideration with wide-ranging implications for the analysis of visual responses in more advanced cortical areas. Signals provided by expectation might influence processing throughout the visual system, biasing the perception and neural processing of the visual stimulus in the context of that expectation.

The work to be described is of significant scientific merit and reflects recent advances in the field; it is original, forcing a re-examination of the traditional view of vision as a method of extracting information from the visual scene in the absence of contextual knowledge, a topic of broad interest to those studying visual perception.

Abstracts

The proactive brain: using analogies and associations to generate predictions

Moshe Bar

Rather than passively ‘waiting’ to be activated by sensations, it is proposed that the human brain is continuously busy generating predictions that approximate the relevant future. Building on previous work, this proposal posits that rudimentary information is extracted rapidly from the input to derive analogies linking that input with representations in memory.

The linked stored representations then activate the associations that are relevant in the specific context, which provides focused predictions. These predictions facilitate perception and cognition by pre-sensitizing relevant representations. Predictions regarding complex information, such as those required in social interactions, integrate multiple analogies. This cognitive neuroscience framework can help explain a variety of phenomena, ranging from recognition to first impressions, and from the brain’s ‘default mode’ to a host of mental disorders.

Neural selectivity in inferotemporal cortex during active classification of photographic images

Bharathi Jagadeesh

Images in the real world are not classified or categorized in the absence of expectations about what we are likely to see. For example, giraffes are quite unlikely to appear in one’s environment except in Africa. Thus, when an image is viewed, it is viewed within the context of possibilities about what is likely to appear. Classification occurs within limited expectations about what has been asked about the images. We have trained monkeys to answer questions about ambiguous images in a constrained context (is the image A or B, where A and B are pictures from the visual world, such as a giraffe or a horse) and have recorded responses in inferotemporal cortex while the task is performed, and while the same images are merely viewed. When we record neural responses to these images while the monkey is required to ask (and answer) a simple question, neural selectivity in IT is sufficient to explain behavior. When the monkey views the same stimuli, in the absence of this framing context, the neural responses are insufficiently selective to explain the separately collected behavior. These data suggest that when the monkey is asked a very specific and limited question about a complex image, IT cortex is selective in exactly the right way to perform the task well. We propose that this match between the needs of the task and the responses in IT results from predictions, generated in other brain areas, which enhance the relevant IT representations.

Experience-based coding in categorical face perception

Nicholas Furl

One fundamental question in vision science concerns how neural activity produces everyday perceptions. We explore the relationship between neural codes capturing deviations from experience and the perception of visual categories. An intriguing paradigm for studying the role of short-term experience in categorical perception is face adaptation aftereffects – where perception of ambiguous faces morphed between two category prototypes (e.g., two facial identities or expressions) depends on which category was experienced during a recent adaptation period. One might view this phenomenon as a perceptual bias towards novel categories – i.e., those mismatching recent experience. Using fMRI, we present evidence consistent with this viewpoint, where perception of nonadapted categories is associated with medial temporal activity, a region known to subserve novelty processing. This raises a possibility, consistent with models of face perception, that face categories are coded with reference to a representation of experience, such as a norm or top-down prediction. We investigated this idea using MEG by manipulating the deviation in emotional expression between the adapted and morph stimuli. We found signals coding for these deviations arising in the right superior temporal sulcus – a region known to contribute to observation of actions and, notably, face expressions. Moreover, adaptation in the right superior temporal sulcus was also predictive of the magnitude of behavioral aftereffects. The relatively late onset of these effects is suggestive of a role for backwards connections or top-down signaling. Overall, these data are consistent with the idea that face perception depends on a neural representation of deviations from short-term experience.

Categorical perception may reveal cortical adaptive dynamics

Valentina Daelli, Athena Akrami, Nicola J van Rijsbergen and Alessandro Treves, SISSA

The perception of faces and of the social signals they display is an ecologically important process, which may shed light on generic mechanisms of cortically mediated plasticity. The possibility that facial expressions may be processed also along a sub-cortical pathway, leading to the amygdala, offers the potential to single out uniquely cortical contributions to adaptive perception. With this aim, we have studied adaptation aftereffects, psychophysically, using faces morphed between two expressions. These are perceptual changes induced by adaptation to a priming stimulus, which biases subjects to see the non-primed expression in the morphs. We find aftereffects even with primes presented for very short periods, or with faces low-pass filtered to favor sub-cortical processing, but full cortical aftereffects are much larger, suggesting a process involving conscious comparisons, perhaps mediated by cortical memory attractors, superimposed on a more automatic process, perhaps also expressed subcortically. In a modeling project, a simple network model storing discrete memories can in fact explain such short-term plasticity effects in terms of neuronal firing-rate adaptation, acting against the rigidity of the boundaries between long-term memory attractors. The very same model can be used, in the long-term memory domain, to account for the convergence of neuronal responses, observed by the Jagadeesh lab in monkey inferior temporal cortex.
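As a rough illustration only (a toy sketch in Python, not the authors' model), the following code implements a Hopfield-style attractor network with two stored patterns standing in for two facial expressions, plus a slow adaptation term standing in for firing-rate adaptation. After prolonged exposure to one pattern, an ambiguous 50/50 morph settles into the non-primed attractor, mimicking the aftereffect described above. The network size, update and adaptation rules, and all parameter values are assumptions chosen only for the demonstration.

import numpy as np

rng = np.random.default_rng(0)
N = 200                                         # number of units
patterns = rng.choice([-1, 1], size=(2, N))     # two stored "expressions"
W = (patterns.T @ patterns) / N                 # Hebbian weight matrix
np.fill_diagonal(W, 0)

def overlap(s, p):
    # similarity of the network state s to a stored pattern p (1 = identical)
    return float(s @ p) / N

def step(s, a, adapt_gain=0.8, tau_a=10.0):
    # recurrent input, reduced by an adaptation term for recently active units
    h = W @ s - adapt_gain * a
    s_new = np.where(h >= 0, 1.0, -1.0)
    a_new = a + (s_new - a) / tau_a             # adaptation slowly tracks activity
    return s_new, a_new

# 1) adapt: hold the network in the "primed" pattern while adaptation builds up
s, a = patterns[0].astype(float), np.zeros(N)
for _ in range(40):
    s, a = step(s, a)

# 2) test: present a 50/50 morph of the two patterns, keeping the adaptation state
s = np.where(rng.random(N) < 0.5, patterns[0], patterns[1]).astype(float)
for _ in range(40):
    s, a = step(s, a)

print("overlap with primed pattern    :", round(overlap(s, patterns[0]), 2))
print("overlap with non-primed pattern:", round(overlap(s, patterns[1]), 2))  # larger

If adapt_gain is set to 0, the prior exposure has no influence and the morph's fate is decided by the random details of the mixture; with adaptation switched on, the state is pushed away from the primed attractor, which is the sense in which adaptation acts against the rigidity of the attractor boundaries.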

Contrast-sign specificity built into the primary visual cortex, V1

Williams and Shapley

We (Williams & Shapley 2007) found that in different cell layers in the macaque primary visual cortex, V1, simple cells have qualitatively different responses to spatial patterns. In response to a stationary grating presented for 100 ms at the optimal spatial phase (position), V1 neurons produce responses that rise quickly and then decay before stimulus offset. For many simple cells in layer 4, it was possible to use this decay and the assumption of linearity to predict the amplitude of the response to the offset of a stimulus of the opposite-to-optimal spatial phase. However, the linear prediction was not accurate for neurons in layer 2/3 of V1, the main cortico-cortical output from V1. Opposite-phase responses from simple cells in layer 2/3 were always near zero. Even when a layer 2/3 neuron’s optimal-phase response was very transient, which would predict a large response to the offset of the opposite spatial phase, opposite-phase responses were small or zero. The suppression of opposite-phase responses could be an important building block in the visual perception of surfaces.

Simple cells like those found in layer 4 respond to both contrast polarities of a given stimulus (both brighter and darker than background, or opposite spatial phases). But unlike layer 4 neurons, layer 2/3 simple cells code unambiguously for a single contrast polarity. With such polarity sensitivity, a neuron can represent “dark-left – bright-right” instead of just an unsigned boundary.

 

 

Vision Sciences Society