7th Annual Dinner and Demo Night

Monday, May 11, 2009, 6:00 – 9:00 pm

Dinner: 6:00 – 8:00 pm, Vista Terrace and Sunset Deck
Demos: 7:00 – 9:00 pm, Royal Palm Ballroom 4-5 and Acacia Meeting Rooms

Please join us Monday evening for the 7th Annual VSS Demo Night, a spectacular night of imaginative demos solicited from VSS members, delectable food, and social interaction. This year’s dinner theme is Caribbean Night!

The demos highlight the important role of visual displays in vision research and education. This year, Arthur Shapiro and Bart Anderson are co-curators for Demo Night, and Gideon Caplovitz is assistant curator.

The Caribbean-themed buffet dinner will be held on the Vista Terrace and Sunset Deck overlooking the Naples Grande main pool. Demos will be located upstairs on the ballroom level in the Royal Palm Ballroom 4-5 and Acacia Meeting Rooms.

Demo Night is free for all registered VSS attendees. Meal tickets are not required, but you must wear your VSS badge for entry to the Dinner Buffet. Guests and family members of all ages are welcome to attend the demos, but must purchase a ticket for dinner. You can register your guests at any time during the meeting at the VSS Registration Desk located in the Royal Ballroom foyer. At 6:00 pm Monday, a desk will also be set up at the entrance to the dinner in the Vista Ballroom.

Guest prices: Adults: $25, Youth (6-12 years old): $10, Children under 6: free

Immersive Virtual Reality

Bryce Armstrong, Edzard Ulrichs and Matthias Pusch; WorldViz
We will use a 6DOF tracked environment to immerse users in virtual environments. Our goal is to show some VSS members’ experiments to demonstrate the relevance of using VR for vision science research.

Unbound Rivalry

Derek Arnold, Holly Erskine, Warrick Roseboom and Tom Wallis; The University of Queensland
We will demonstrate that exposure to a coherent moving stimulus can induce a dynamic competition for perceptual dominance involving illusory forms signaled by motion streaks and direction-sensitive mechanisms.

LITE Vision Demonstrations

Kenneth Brecher; Boston University
I will present the most recent Project LITE vision demonstrations (including ones not yet posted on the web) – both computer software and new physical objects.

The Bar Cross Ellipse Illusion

Gideon Caplovitz and Peter Tse; Princeton University and Dartmouth College
A quad-stable stimulus that leads to drastically different percepts depending on differential figure-ground segmentation and on the assignment and integration of motion sources.

Bypassing V1: Motion through depth from monocular pattern motions

Thaddeus B. Czuba, Bas Rokers, Lawrence K. Cormack and Alex C. Huk; The University of Texas at Austin
We show that percepts of motion through depth are supported by stimuli that effectively bypass significant binocular processing in primary visual cortex (V1).

Helmholtz/Zanforlin illusion

Peter Thompson and Rob Stone; University of York
Asked to make a pile of coins as high as it is wide, subjects make it up to 30% too low. Simple demo with no computer! Interactive for subject. Cheap.

Perceptual Conduits for Attentional Flow: Contour Interpolation Exerts Automatic Effects on Multiple Object Tracking

Brian P. Keane, Everett Mettler, Vicky Tsoi and Phil J. Kellman; UCLA
We explore multiple object tracking in which moving items do or do not form interpolated connections with one another. Our demonstrations show that the ability to track clearly depends on interpolation.

Subjective disappearance of targets induced by flickering illumination

Sung-Ho Kim; Rutgers University
Under flickering illumination, peripherally presented target lines or dots disappear.

Failure of slope constancy

Zhi Li and Frank Durgin; Swarthmore College
Viewed from the top, the downward slope of a hill or ramp appears shallower when standing at the edge and steeper when standing back from the edge. The surface can appear to rotate upward as the observer approaches it.

Growing and Shrinking: The Body-Based Rescaling of Apparent Size

Sally Linkenauger and Jessica Witt; University of Virginia
We will demonstrate that apparent size is judged relative to one’s body. Using magnification and minification goggles, we will show this with a newly discovered visual illusion that disrupts the relationship between physical object size and body size.

Marilyn-go-round: the moving hybrid-image

Takao Sato and Kenchi Hosokawa; University of Tokyo
Hybrid-images combine high and low spatial frequency components from two separate images. We remove the low spatial frequency content from hybrid images by spinning them along a curved orbit. The demo is interactive and amusing.

Motion induces overestimation (MIO)

Maryam Vaziri Pashkam and Arash Afraz; Harvard University
We will demonstrate the motion-induced overestimation illusion. On a rotating spoked disk, as the rotation speed increases, the perceived number of spokes increases.

Binocular shape, unlike binocular space, is perceived veridically

Tadamasa Sawada, Yunfeng Li, Zygmunt Pizlo and Robert M. Steinman; Purdue University
It is widely believed that binocular space perception is inaccurate and unreliable. We will show that this applies only to depth perception, not to the perception of complex 3D shapes. The geometry responsible for this useful accomplishment will be explained.

Dynamic Object Formation: Perceptual Reality Combines the Visible and Recently Visible

Tandra Ghose, Evan Palmer, Brian P. Keane and Phil J. Kellman; UCLA
We demonstrate perceptual completion in dynamically occluded and illusory stimuli. We explore the conditions favoring spatiotemporal completion and demonstrate the effects of component processes leading to object formation, including illusions resulting from non-veridical updating of occluded object position.

The break of the curveball, rolling rolls, and other illusions

Arthur Shapiro; American University
I will demonstrate new visual effects involving “rotation from shading,” differences between peripheral and foveal processing, and a variant of hybrid images.

Smooth pursuit suppresses motion processing

Peter Tse; Dartmouth College
When smoothly pursuing a moving fixation spot, real motion in the background is suppressed.

Slant stereomotion from modulation of interocular spatial frequency difference

Christopher Tyler and Lora Likova; Smith-Kettlewell Eye Research Institute
If gratings are presented with an interocular spatial-frequency difference (ISFD), modulating the ISFD over time generates strong percepts of slant stereomotion, even when orientation or velocity differences exclude the use of conventional binocular disparity cues.

2009 Young Investigator – Frank Tong

Dr. Frank Tong

Vanderbilt University, Department of Psychology

This year’s winner of the VSS Young Investigator Award is Frank Tong, Associate Professor of Psychology at Vanderbilt University. In the nine years since receiving his PhD from Harvard, Frank has established himself as one of the most creative, productive young vision scientists in our field. His research artfully blends psychophysics and brain imaging to address important questions about the neural bases of awareness and object recognition. He has published highly influential papers that have been instrumental in shaping current thinking about the neural bases of multistable perception, including binocular rivalry. Moreover, Frank has played a central role in the development and refinement of powerful analytic techniques for deriving reliable population signals from fMRI data, signals that can predict perceptual states currently being experienced by an individual. Using these pattern classification techniques, Frank and his students have identified brain areas that contain patterns of neural responses sufficient to support orientation perception, motion perception and working memory.

The YIA award will be presented at the Keynote Address on Saturday, May 9, at 7:30 pm.

2009 Keynote – Robert H. Wurtz

Robert H. Wurtz

Laboratory of Sensorimotor Research, National Eye Institute, NIH, Bethesda, MD
NIH Distinguished Scientist and Chief of the Section on Visuomotor Integration at the National Eye Institute

Audio and slides from the 2009 Keynote Address are available on the Cambridge Research Systems website.

Brain Circuits for Stable Visual Perception

Saturday, May 9, 2009, 7:30 pm, Royal Palm Ballroom

In the 19th century von Helmholtz detailed the need for signals in the brain that provide information about each impending eye movement.  He argued that such signals could interact with the visual input from the eye to preserve stable visual perception in spite of the incessant saccadic eye movements that continually displace the image of the visual world on the retina.  In the 20th century, Sperry as well as von Holst and Mittelstaedt provided experimental evidence in fish and flies for such signals for the internal monitoring of movement, signals they termed corollary discharge or efference copy, respectively.  Experiments in the last decade (reviewed by Sommer and Wurtz, 2008) have established a corollary discharge pathway in the monkey brain that accompanies saccadic eye movements.  This corollary activity originates in the superior colliculus and is transmitted to frontal cortex through the major thalamic nucleus related to frontal cortex, the medial dorsal nucleus.  The corollary discharge has been demonstrated to contribute to the programming of saccades when visual guidance is not available. It might also provide the internal movement signal invoked by Helmholtz to produce stable visual perception.  A specific neuronal mechanism for such stability was proposed by Duhamel, Colby, and Goldberg (1992) based upon their observation that neurons in monkey frontal cortex shifted the location of their maximal sensitivity with each impending saccade.  Such shifting receptive fields must depend on input from a corollary discharge, and this is just the input to frontal cortex recently identified.  Inactivating the corollary discharge to frontal cortex at its thalamic relay produced a reduction in the shift.  This dependence of the shifting receptive fields on an identified corollary discharge provides direct experimental evidence for modulation of visual processing by a signal within the brain related to the generation of movement – an interaction proposed by Helmholtz for maintaining stable visual perception.

Biography

Robert H. Wurtz is a NIH Distinguished Scientist and Chief of the Section on Visuomotor Integration at the National Eye Institute. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and has received many awards. His work is centered on the visual and oculomotor system of the primate brain that controls the generation of rapid or saccadic eye movements, and the use of the monkey as a model of human visual perception and the control of movement. His recent work has concentrated on the inputs to the cerebral cortex that underlie visual attention and the stability of visual perception.

Modern Approaches to Modeling Visual Data


Friday, May 8, 3:30 – 5:30 pm
Royal Ballroom 6-8

Organizer: Kenneth Knoblauch (Inserm, U846, Stem Cell and Brain Research Institute, Bron, France)

Presenters: Kenneth Knoblauch (Inserm, U846, Bron, France), David H. Foster (University of Manchester, UK), Jakob H. Macke (Max-Planck-Institut für biologische Kybernetik, Tübingen), Felix A. Wichmann (Technische Universität Berlin & Bernstein Center for Computational Neuroscience Berlin, Germany), Laurence T. Maloney (NYU)

Symposium Description

A key step in vision research is the comparison of experimental data to models intended to predict the data. Until recently, limitations on computer power and the lack of appropriate software meant that the researcher’s tool kit was limited to a few generic techniques, such as fitting individual psychometric functions. Use of these models entails assumptions, such as the exact form of the psychometric function, that are rarely tested. It is not always obvious how to compare competing models, to show that one describes the data better than another, or to estimate what percentage of ‘variability’ in the responses of the observers is really captured by the model. Limitations on the models that researchers are able to fit translate into limitations on the questions they can ask and, ultimately, the perceptual phenomena that can be understood.

Because of recent advances in statistical algorithms and the increased computer power available to all researchers, it is now possible to make use of a wide range of computer-intensive parametric and nonparametric approaches based on modern statistical methods. These approaches allow the experimenter to make more efficient use of perceptual data, to fit a wider range of perceptual data, to avoid unwarranted assumptions, and potentially to consider more complex experimental designs with the assurance that the resulting data can be analyzed. Researchers are likely familiar with nonparametric resampling methods such as bootstrapping (Efron, 1979; Efron & Tibshirani, 1993). We review a wider range of developments in statistics from the past twenty years, including results from the machine learning and model selection literatures.

Knoblauch introduces the symposium and describes how a wide range of psychophysical procedures (including fitting psychometric functions, estimating classification images, and estimating the parameters of signal detection theory) share a common mathematical structure that can be readily addressed by modern statistical approaches. He also shows how to extend these methods to model more complex experimental designs and discusses modern approaches to smoothing data. Foster describes how to relax the typical assumptions made in fitting psychometric functions and instead let the data themselves guide the fitting. Macke describes a technique—decision-images—for extracting critical stimulus features based on logistic regression, and how to use the extracted critical features to generate optimized stimuli for subsequent psychophysical experiments. Wichmann describes how to use “inverse” machine learning techniques to model visual saliency given eye movement data. Maloney discusses the measurement and modeling of super-threshold differences to model appearance, with several recent applications to surface material perception, surface lightness perception, and image quality.

The presentations will outline how these approaches have been adapted to specific psychophysical tasks, including psychometric-function fitting, classification, visual saliency, difference scaling, and conjoint measurement. They show how these modern methods allow experimenters to extract more insight from their data about the operation of the visual system than was hitherto possible.

Abstracts

Generalized linear and additive models for psychophysical data

Kenneth Knoblauch

What do such diverse paradigms as classification images, difference scaling and additive conjoint measurement have in common? We introduce a general framework that permits modeling and evaluating experiments covering a broad range of psychophysical tasks. Psychophysical data are considered within a signal detection model in which a decision variable d, which is some function f of the stimulus conditions S, is related to the expected probability of response, E[P], through a psychometric function G: E[P] = G(d) = G(f(S)). In many cases, the function f is linear, in which case the model reduces to E[P] = G(Xb), where X is a design matrix describing the stimulus configuration and b a vector of weights indicating how the observer combines stimulus information in the decision variable. By inverting the psychometric function, we obtain a Generalized Linear Model (GLM). We demonstrate how this model, which has previously been applied to calculating signal detection theory parameters and fitting the psychometric function, is extended to provide maximum likelihood solutions for three tasks: classification image estimation, difference scaling and additive conjoint measurement. Within the GLM framework, nested hypotheses are easily set up in a manner resembling classical analysis of variance. In addition, the GLM is easily extended to fitting and evaluating more flexible (nonparametric) models involving arbitrary smooth functions of the stimulus. In particular, this approach permits a principled approach to fitting smooth classification images.
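As a minimal sketch of how such a fit looks in practice (my illustration with made-up data, not code from the talk), the psychometric function becomes a binomial GLM with a probit link, so that the inverse link G is the cumulative normal:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical data: stimulus levels and "yes" counts out of 20 trials per level
    level = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    n_yes = np.array([3, 6, 11, 17, 19])
    n_trials = np.full(level.size, 20)

    X = sm.add_constant(np.log(level))              # design matrix: intercept + log level
    y = np.column_stack([n_yes, n_trials - n_yes])  # (successes, failures) per level

    # E[P] = G(Xb): binomial GLM with probit link; fit.params is the weight vector b
    fit = sm.GLM(y, X, family=sm.families.Binomial(sm.families.links.Probit())).fit()
    print(fit.params)

Changing only the design matrix X turns the same machinery toward classification-image estimation, difference scaling or conjoint measurement, which is the point of the framework.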

Model-free estimation of the psychometric function

David H. Foster, K. Zychaluk

The psychometric function is central to the theory and practice of psychophysics. It describes the relationship between stimulus level and a subject’s response, usually represented by the probability of success in a certain number of trials at that stimulus level. The psychometric function itself is, of course, not directly accessible to the experimenter and must be estimated from observations. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not, and in practice the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data except that the function is smooth. The critical role of the bandwidth is explained, and a method described for estimating its optimum value by cross-validation. A wide range of data sets was fitted by the local linear method and, for comparison, by several parametric models. The local linear method usually performed better and never worse than the parametric ones. As a matter of principle, a correct parametric model will always do better than a nonparametric model, simply because the parametric model assumes more about the data; but given an experimenter’s ignorance of the correct model, the local linear method provides an impartial and consistent way of addressing this uncertainty.
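To convey the flavor of the approach, here is a simplified sketch (my own construction; the actual method works on the link scale with binomial weights, which this toy version omits) of local linear estimation with a leave-one-out cross-validated bandwidth:

    import numpy as np

    def local_linear(x0, x, p, h):
        # Weighted least squares at x0 with Gaussian kernel weights of bandwidth h
        sw = np.exp(-0.25 * ((x - x0) / h) ** 2)   # square roots of the kernel weights
        X = np.column_stack([np.ones_like(x), x - x0])
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * p, rcond=None)
        return beta[0]  # the intercept is the smoothed value at x0

    def loo_cv(x, p, h):
        # Leave-one-out cross-validation error for bandwidth h
        idx = np.arange(x.size)
        preds = [local_linear(x[i], x[idx != i], p[idx != i], h) for i in idx]
        return np.mean((p - np.array(preds)) ** 2)

    # Hypothetical proportion-correct data; choose the bandwidth minimizing the CV error
    x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    p = np.array([0.15, 0.30, 0.55, 0.85, 0.95])
    best_h = min((0.5, 1.0, 2.0, 4.0), key=lambda h: loo_cv(x, p, h))

Nothing about the shape of the function enters except smoothness, which is exactly the only assumption the method is designed to rely on.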

Estimating Critical Stimulus Features from Psychophysical Data: The Decision-Image Technique Applied to Human Faces

Jakob H. Macke, Felix A. Wichmann

One of the main challenges in the sensory sciences is to identify the stimulus features on which sensory systems base their computations: such features are a prerequisite for computational models of perception. We describe a technique—decision-images—for extracting critical stimulus features based on logistic regression. Rather than embedding the stimuli in noise, as is done in classification image analysis, we infer the important features directly from physically heterogeneous stimuli. A decision-image not only defines the critical region-of-interest within a stimulus but is a quantitative template that defines a direction in stimulus space. Decision-images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face discrimination experiment. We show that decision-images are able to predict human responses not only in terms of overall percent correct but also, for individual observers, the probabilities with which individual faces are (mis-)classified. We then test the predictions of the models using optimized stimuli. Finally, we discuss possible generalizations of the approach and its relationships with other models.
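In spirit (a schematic sketch with invented data, not the authors’ actual pipeline), the decision-image is the weight vector of a regularized logistic regression from stimulus pixels to the observer’s binary responses:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    stimuli = rng.normal(size=(500, 32 * 32))  # hypothetical stimuli, flattened to pixel vectors
    responses = rng.integers(0, 2, size=500)   # the observer's binary classifications

    clf = LogisticRegression(C=1.0, max_iter=1000).fit(stimuli, responses)
    decision_image = clf.coef_.reshape(32, 32)  # quantitative template: a direction in stimulus space
    p_resp = clf.predict_proba(stimuli)[:, 1]   # predicted per-stimulus response probabilities

Because the template is a direction in stimulus space, new stimuli can be shifted along or against it to produce the kind of optimized probes described in the abstract.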

Non-linear System Identification: Visual Saliency Inferred from Eye-Movement Data

Felix A. Wichmann, Wolf Kienzle, Bernhard Schölkopf, Matthias Franz

For simple visual patterns under the experimenter’s control, we impose which information, or features, an observer can use to solve a given perceptual task. For natural vision tasks, however, there are typically a multitude of potential features in a given visual scene that the visual system may be exploiting when analyzing it: edges, corners, contours, etc. Here we describe a novel non-linear system identification technique based on modern machine learning methods that allows the critical features an observer uses to be inferred directly from the observer’s data. The method neither requires stimuli to be embedded in noise nor is it limited to linear perceptive fields (classification images). We demonstrate our technique by deriving the critical image features observers fixate in natural scenes (bottom-up visual saliency). Unlike previous studies, where the relevant structure is determined manually—e.g. by selecting Gabors as visual filters—we make no assumptions in this regard, but numerically infer the number and properties of the relevant features from the eye-movement data. We show that center-surround patterns emerge as the optimal solution for predicting saccade targets from local image structure. The resulting model, a one-layer feed-forward network with contrast gain-control, is surprisingly simple compared to previously suggested saliency models, yet it is equally predictive. Furthermore, our findings are consistent with neurophysiological hardware in the superior colliculus. Bottom-up visual saliency may thus not be computed cortically, as has been thought previously.
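One way to picture the setup (a deliberately loose sketch with invented data, substituting a plain linear classifier for the nonlinear machinery the talk describes):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    # Hypothetical 13x13 image patches: label 1 = centered on a fixated location,
    # label 0 = centered on a random control location in the same images
    patches = rng.normal(size=(1000, 13 * 13))
    fixated = rng.integers(0, 2, size=1000)

    svm = LinearSVC(C=0.1, max_iter=10_000).fit(patches, fixated)
    learned_filter = svm.coef_.reshape(13, 13)  # with real data, such weights are inspected
                                                # for structure (e.g. center-surround)

The actual method is nonlinear precisely so that the critical features need not be assumed to be linear templates; the sketch conveys only the fixated-versus-control framing of the problem.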

Measuring and modeling visual appearance of surfaces

Laurence T. Maloney

Researchers studying visual perception have developed numerous experimental methods for probing the perceptual system. The range of techniques available to study performance near visual threshold is impressive and rapidly growing, and we have a good understanding of what physical differences in visual stimuli are perceptually discriminable. A key remaining challenge for visual science is to develop models and psychophysical methods that allow us to evaluate how the visual system estimates visual appearance. Using traditional methods, for example, it is easy to determine how large a change in the parameters describing a surface is needed to produce a visually discriminable surface. It is less obvious how to evaluate the contributions of these same parameters to the perception of visual qualities such as color, gloss or roughness. In this presentation, I’ll describe methods for measuring judgments of visual appearance that go beyond simple rating methods, how to model such judgments, and how to evaluate the resulting models experimentally. I’ll describe three applications. The first concerns how illumination and surface albedo contribute to the rated dissimilarity of illuminated surfaces in three-dimensional scenes. The second concerns the modeling of super-threshold differences in image quality using difference scaling, and the third concerns the application of additive conjoint measurement to evaluating how observers perceive gloss and meso-scale surface texture (‘bumpiness’) when both are varied.
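To make the difference-scaling idea concrete, here is a small sketch (my construction, with simulated rather than real responses) of maximum-likelihood difference scaling cast as the GLM described in the opening talk: the observer reports which of two stimulus pairs differs more, and the perceptual scale values emerge as fitted coefficients.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_levels, n_trials = 6, 400
    true_psi = np.array([0.0, 0.1, 0.3, 0.55, 0.8, 1.0])  # hypothetical internal scale

    # Each trial presents four distinct stimulus levels as pairs (a, b) versus (c, d)
    quads = np.array([rng.choice(n_levels, 4, replace=False) for _ in range(n_trials)])
    a, b, c, d = quads.T
    dv = (true_psi[b] - true_psi[a]) - (true_psi[d] - true_psi[c])
    resp = (dv + rng.normal(0, 0.15, n_trials) > 0).astype(int)  # 1 = "(a, b) differs more"

    # The decision variable is -psi[a] + psi[b] + psi[c] - psi[d]; encode it as a design matrix
    X = np.zeros((n_trials, n_levels))
    for col, sign in zip((a, b, c, d), (-1, 1, 1, -1)):
        X[np.arange(n_trials), col] += sign
    fit = sm.GLM(resp, X[:, 1:],  # dropping the first column fixes psi[0] = 0
                 family=sm.families.Binomial(sm.families.links.Probit())).fit()
    psi_hat = np.concatenate([[0.0], fit.params])  # recovered scale, up to its overall unit

Conjoint measurement admits the same treatment, with a design matrix spanning two stimulus dimensions.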

 

Retinotopic and Non-retinotopic Information Representation and Processing in Human Vision


Friday, May 8, 3:30 – 5:30 pm
Royal Ballroom 1-3

Organizers: Haluk Ogmen (University of Houston) and Michael H. Herzog (Laboratory of Psychophysics, BMI, EPFL, Switzerland)

Presenters: Doug Crawford (Centre for Vision Research, York University, Toronto, Ontario, Canada), David Melcher (Center for Mind/Brain Sciences and Department of Cognitive Sciences University of Trento, Italy), Patrick Cavanagh (LPP, Université Paris Descartes, Paris, France), Shin’ya Nishida (NTT Communication Science Labs, Atsugi, Japan), Michael H. Herzog (Laboratory of Psychophysics, BMI, EPFL, Switzerland)

Symposium Description

Due to the movements of the eyes and of objects in the environment, natural vision is highly dynamic. Understanding how the visual system copes with such complex inputs requires an understanding of the reference frames used in the computations of various stimulus attributes. It is well known that the early visual system has a retinotopic organization, but it is generally thought that this retinotopic organization is insufficient to support the fusion of visual images viewed at different eye positions. Moreover, metacontrast masking and anorthoscopic perception show that a retinotopic image is neither sufficient nor necessary for the perception of spatially extended form. How retinotopic representations are transformed into more complex non-retinotopic representations has been a long-standing and often controversial question.

The classical paradigm for studying this question has been the study of memory across eye movements. As we shift our gaze from one fixation to another, the retinotopic representation of the environment undergoes drastic shifts, yet phenomenally our environment appears stable. How is this phenomenal stability achieved? Does the visual system integrate information across eye movements and, if so, how? A variety of theories have been proposed, ranging from purely retinotopic representations without information integration to detailed spatiotopic representations with point-by-point information integration. Talks in this symposium (Crawford, Melcher, Cavanagh) will address the nature of trans-saccadic memory, the role of extra-retinal signals, and retinotopic, spatiotopic, and objectopic representations for information processing and integration during and across eye movements.

In addition to the challenge posed by eye movements to purely retinotopic representations, recent studies suggest that, even under steady fixation, the computation of moving form requires non-retinotopic representations. This is because objects in the environment often move with complex trajectories and do not stimulate retinotopically anchored receptive fields for sufficient durations. Moreover, occlusions can “blank out” retinotopic information for significant periods. These failures to activate retinotopically anchored neurons sufficiently suggest, in turn, that some form of non-retinotopic information analysis and integration must take place. Talks in this symposium (Nishida, Herzog) will present recent findings that show how shape and color information for moving objects can be integrated according to non-retinotopic reference frames.

Taken together, the talks aim to provide a current perspective on the fundamental problem of the reference frames utilized by the visual system and to present techniques for studying these representations during both eye movements and fixation. The recent convergence of a variety of techniques and stimulus paradigms in elucidating the roles of non-retinotopic representations makes the symposium timely. Since non-retinotopic representations have implications for a broad range of visual functions, we expect the symposium to be of interest to the general VSS audience, including students and faculty.

Abstracts

Cortical Mechanisms for Trans-Saccadic Memory of Multiple Objects

Doug Crawford, Steven Prime

Humans can retain the location and appearance of 3-4 objects in visual working memory, independent of whether a saccade occurs during the memory interval. Psychophysical experiments show that, in the absence of retinal cues, extra-retinal signals are sufficient to update trans-saccadic memory, but where and how do these signals enter the visual system? It is known that ‘dorsal stream’ areas like the parietal eye fields update motor plans by remapping them in gaze-centered coordinates, but the equivalent neural mechanisms for updating object features across saccades are less understood. We investigated the possible role of extra-retinal signals from the cortical gaze control system by applying transcranial magnetic stimulation (TMS) to either the human parietal eye fields or the frontal eye fields during the interval between viewing several objects and testing their remembered orientation and location. Parietal TMS affected baseline memory for a single feature and reduced memory capacity from approximately three features down to one, but only when applied to the right hemisphere near the time of a saccade. The effects of frontal cortex TMS on trans-saccadic memory capacity were similar, but more symmetric, and did not affect baseline feature memory. In our task, the latter pattern would occur if spatial memory were disrupted without affecting feature memory. These experiments show that cortical gaze control centers usually associated with the ‘dorsal’ stream of vision are also involved in visual processing and memory of object features during saccades, possibly influencing ‘ventral stream’ processing through re-entrant pathways.

Trans-Saccadic Perception: “Object-otopy” across Space and Time

David Melcher

Real-world perception is typically trans-saccadic: we see the same object across multiple fixations. Yet saccadic eye movements can dramatically change the location at which an object is projected onto the retina. In a series of experiments using eye tracking, psychophysics, neuroimaging and TMS, we have investigated how information from a previous fixation can influence perception in the subsequent fixation. Specifically, we have tested the idea that the “remapping” of receptive fields around the time of saccadic eye movements might play a role in trans-saccadic perception. Our results suggest that two mechanisms interact to produce “object-otopic” perception across saccades. First, a limited number of objects that are individuated in a scene (treated as unique objects potentially subject to action, as opposed to being part of the background gist) are represented and updated across saccades in a sensorimotor “saliency map” (possibly in posterior parietal cortex). Second, the updating of these “pointers” in the map leads to the remapping of receptive fields in intermediate visual areas. We have found that perception can be retinotopic or spatiotopic, or, in the case of moving objects, can even involve the combination of information for the same object that is neither retinally nor spatially matched. At the same time, however, the visual system must give priority to the retinal information, which tends to be most reliable during fixation of stable objects.

Spatiotopic Apparent Motion

Patrick Cavanagh, Martin Szinte

When our eyes move, stationary objects move over our retina. Our visual system cleverly discounts this retinal motion so that we do not see the objects moving when they are not. What happens if an object does move at the time of the eye movement? There is a question of whether we will see the displacement at all, but if we do see it, is the motion determined by the displacement on the retina or the displacement in space? To address this, we asked subjects to make horizontal saccades of 10°. Two dots were presented, one before and one after the saccade, the second displaced vertically on the screen by 3° from the first. Each dot was presented for 400 msec; the first turned off about 100 msec before the saccade and the second turned on about 100 msec after the saccade. In this basic condition, the retinal locations of the two dots were in opposite hemifields, separated horizontally by 10°. Nevertheless, subjects reported that the dots appeared to be in motion vertically – the spatiotopic direction – although with a noticeable deviation from true vertical. This spatiotopic apparent motion was originally reported by Rock and Ebenholtz (1962), but for displacements along the direction of the saccade. In our experiments, we use the deviation from spatiotopic motion to estimate errors in the remapping of pre-saccadic locations that underlies this spatiotopic motion phenomenon.

Trajectory Integration of Shape and Color of Moving Object

Shin’ya Nishida, Masahiko Terao, Junji Watanabe

Integration of visual input signals along the motion trajectory is widely recognized as a basic mechanism of motion detection. It is not widely recognized, however, that the same computation is potentially useful for shape and color perception of moving objects, because trajectory integration can improve the signal-to-noise ratio of moving-feature extraction without introducing motion blur. Indeed, trajectory integration of shape information is indicated by several phenomena, including multiple-slit view (e.g., Nishida, 2004). Trajectory integration of color information is indicated by two phenomena: motion-induced color mixing (Nishida et al., 2007) and motion-induced color segregation (Watanabe & Nishida, 2007). In motion-induced color segregation, for instance, temporal alternations of two colors on the retina are perceptually segregated more veridically when they are presented as moving patterns than as stationary alternations at the same rate. This improvement in temporal resolution can be explained by a difference in the motion trajectory along which color signals are integrated. Furthermore, we recently found that the improvement in temporal resolution is enhanced when an observer views a stationary object while making a pursuit eye movement, compared with viewing a moving object without moving the eyes (Terao et al., 2008, VSS). This finding further strengthens the connection between motion-induced color segregation and subjective motion deblurring.

A Litmus Test for Retino- vs. Non-retinotopic Processing

Michael Herzog, Marc Boi, Thomas Otto, Haluk Ogmen

Most visual cortical areas are retinotopically organized, and accordingly most visual processing is assumed to occur within a retinotopic coordinate frame. However, in a series of psychophysical experiments, we have shown that features of elements are often integrated non-retinotopically when the corresponding elements are grouped by motion. When this grouping is blocked, feature integration occurs within retinotopic coordinates (even though the basic stimulus paradigm is identical in both conditions and grouping is modulated only by spatial or temporal contextual cues). Hence, there is strong evidence for both retinotopic and non-retinotopic processing, but it is not always easy to determine which of the two coordinate systems prevails in a given stimulus paradigm. Here, we present a simple psychophysical test to answer this question. We presented three squares in a first frame, followed by an ISI, the same squares shifted one position to the right, the same ISI, and then the squares shifted back to their original position. When this cycle is repeated with ISIs longer than 100 ms, three squares are perceived in apparent motion. With this specific set-up, features integrate between the central squares if and only if integration takes place non-retinotopically. With this litmus test we showed, for example, that motion processing is non-retinotopic whereas motion adaptation is retinotopic. In general, by adding the feature of interest to the central square, one can easily test whether a given stimulus paradigm is processed retinotopically or non-retinotopically.

 

Dynamic Processes in Vision


Friday, May 8, 3:30 – 5:30 pm
Royal Ballroom 4-5

Organizer: Jonathan D. Victor (Weill Medical College of Cornell University)

Presenters: Sheila Nirenberg (Dept. of Physiology and Biophysics, Weill Medical College of Cornell University), Diego Contreras (Dept. of Neuroscience, University of Pennsylvania School of Medicine), Charles E. Connor (Dept. of Neuroscience, The Johns Hopkins University School of Medicine), Jeffrey D. Schall (Department of Psychology, Vanderbilt University)

Symposium Description

The theme of the symposium is the importance of analyzing the time course of neural activity for understanding behavior. Given the very obviously spatial nature of vision, it is often tempting to ignore dynamics, and to focus on spatial processing and maps. As the speakers in this symposium will show, dynamics are in fact crucial: even for processes that appear to be intrinsically spatial, the underlying mechanism often resides in the time course of neural activity. The symposium brings together prominent scientists who will present recent studies that exemplify this unifying theme. Their topics will cover the spectrum of VSS, both anatomically and functionally (retinal ganglion cell population coding, striate cortical mechanisms of contrast sensitivity regulation, extrastriate cortical analysis of shape, and frontal and collicular gaze control mechanisms). Their work utilizes sophisticated physiological techniques, ranging from large-scale multineuronal ex-vivo recording to intracellular in vivo recording, and employs a breadth of analytical approaches, ranging from information theory to dynamical systems.

Because of the mechanistic importance of dynamics and the broad range of the specific topics and approaches, it is anticipated that the symposium will be of interest to physiologists and non-physiologists alike, and that many VSS members will find specific relevance to their own research.

Abstracts

How neural systems adjust to different environments: an intriguing role for gap junction coupling

Sheila Nirenberg

The nervous system has an impressive ability to self-adjust – that is, as it moves from one environment to another, it can adjust itself to accommodate the new conditions. For example, as it moves into an environment with new stimuli, it can shift its attention; if the stimuli are low contrast, it can adjust its contrast sensitivity; if the signal-to-noise ratio is low, it can change its spatial and temporal integration properties. How the nervous system makes these shifts isn’t clear. Here we show a case where it was possible to obtain an answer. It’s a simple case, but one of the best-known examples of a behavioral shift – the shift in visual integration time that accompanies the switch from day to night vision. Our results show that the shift is produced by a mechanism in the retina – an increase in coupling among horizontal cells. Since coupling produces a shunt, the increase causes a substantial shunting of horizontal cell current, which effectively inactivates the cells. Since the cells play a critical role in shaping integration time (they provide feedback to photoreceptors that keeps integration time short), inactivating them causes integration time to become longer. Thus, a change in the coupling of horizontal cells serves as a mechanism to shift the visual system from short to long integration times.  The results raise a new, and possibly generalizable idea: that a neural system can be shifted from one state to another by changing the coupling of one of its cell classes.

Cortical network dynamics and response gain

Diego Contreras

The transformation of synaptic input into spike output by single neurons is a key process underlying the representation of information in sensory cortex. The slope, or gain, of this input-output function determines neuronal sensitivity to stimulus parameters and provides a measure of the contribution of single neurons to the local network. Neuronal gain is not constant and may be modulated by changes in multiple stimulus parameters. Gain modulation is a common neuronal phenomenon that modifies response amplitude without changing selectivity.  Computational and in vitro studies have proposed cellular mechanisms of gain modulation based on the postsynaptic effects of background synaptic activation, but these mechanisms have not been studied in vivo.  Here we used intracellular recordings from cat primary visual cortex to measure neuronal gain while changing background synaptic activity with visual stimulation.  We found that increases in the membrane fluctuations associated with increases in synaptic input do not obligatorily result in gain modulation in vivo.  However, visual stimuli that evoked sustained changes in resting membrane potential, input resistance, and membrane fluctuations robustly modulated neuronal gain.  The magnitude of gain modulation depended critically on the spatiotemporal properties of the visual stimulus.  Gain modulation in vivo may thus be determined on a moment-to-moment basis by sensory context and the consequent dynamics of synaptic activation.

Dynamic integration of object structure information in primate visual cortex

Charles E. Connor

Object perception depends on extensive processing of visual information through multiple stages in the ventral pathway of visual cortex.  We use neural recording to study how information about object structure is processed in intermediate and higher-level ventral pathway cortex of macaque monkeys.  We find that neurons in area V4 (an intermediate stage) represent object boundary fragments by means of basis function tuning for position, orientation, and curvature.  At subsequent stages in posterior, central, and anterior inferotemporal cortex (PIT/CIT/AIT), we find that neurons integrate information about multiple object fragments and their relative spatial configurations.  The dynamic nature of this integration process can be observed in the evolution of neural activity patterns across time following stimulus onset.  At early time points, neurons are responsive to individual object fragments, and their responses to combined fragments are linearly additive.  Over the course of approximately 60 ms, responses to individual object fragments decline and responses to specific fragment combinations increase.  This evolution toward nonlinear selectivity for multi-fragment configurations involves both shifts in response properties within neurons and shifts in population activity levels between primarily linear and primarily nonlinear neurons.  This pattern is consistent with a simple network model in which the strength of feedforward and recurrent inputs varies continuously across neurons.

Timing of selection for the guidance of gaze

Jeffrey D. Schall

Time is of the essence in the execution of visually guided behavior in dynamic environments.  We have been investigating how the visual system responds to unexpected changes of the image when a saccade is being planned.  Performance of stop signal or double-step tasks can be explained as the outcome of a race between a process that produces the saccade and a process that interrupts the preparation.  Neural correlates of dynamic target selection and these race processes have been identified in the frontal eye field and superior colliculus.  The timecourse of these processes can provide useful leverage for understanding how early visual processing occurs.
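The race logic itself is compact enough to simulate in a few lines (an illustrative toy with invented finish-time parameters, not the laboratory’s fitted model):

    import numpy as np

    rng = np.random.default_rng(0)

    def saccade_escapes(ssd, go_mu=0.25, go_sd=0.05, stop_mu=0.10, stop_sd=0.02):
        # One stop-signal trial: the saccade is produced only if GO finishes before STOP
        go_finish = rng.normal(go_mu, go_sd)              # GO finish time from target onset (s)
        stop_finish = ssd + rng.normal(stop_mu, stop_sd)  # STOP starts at the stop-signal delay
        return go_finish < stop_finish

    # Inhibition function: the probability of a saccade rises with stop-signal delay
    for ssd in (0.05, 0.10, 0.15, 0.20):
        p = np.mean([saccade_escapes(ssd) for _ in range(10_000)])
        print(f"SSD {1000 * ssd:.0f} ms: P(saccade) = {p:.2f}")

In the stop-signal literature, fitting such a race to observed inhibition functions is what yields behavioral estimates of the time needed to cancel a movement, against which neural activity can then be compared.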

 

Is number visual? Is vision numerical? Investigating the relationship between visual representations and the property of magnitude


Friday, May 8, 1:00 – 3:00 pm
Royal Ballroom 6-8

Organizer: Michael C. Frank (Massachusetts Institute of Technology)

Presenters: David Burr (Dipartimento di Psicologia, Università Degli Studi di Firenze and Department of Psychology, University of Western Australia), Michael C. Frank (Massachusetts Institute of Technology), Steven Franconeri (Northwestern University), David Barner (University of California, San Diego), Justin Halberda (Johns Hopkins University)

Symposium Description

The ability to manipulate exact numbers is a signature human achievement, supporting activities like building bridges, designing computers, and conducting economic transactions. Underlying this ability and supporting its acquisition is an evolutionarily conserved mechanism for the manipulation of approximate quantity: the analog magnitude system. The behavioral and neural signatures of magnitude representations have been extensively characterized, but how these representations interact with other aspects of cognitive and visual processing is still largely unknown. Do magnitude features attach to objects, scenes, or surfaces? Is approximate magnitude representation maintained even for sets for which exact quantity is known? Is magnitude estimation ability altered by experience?

The goal of our symposium is to look for answers to these questions by asking both how number is integrated into visual processing and how visual processing in turn forms a basis for the acquisition and processing of exact number. We address these questions through talks on three issues: 1) the basic psychophysical properties of numerical representations (Halberda, Burr), 2) how visual mechanisms integrate representations of number (Franconeri & Alvarez), and 3) how these representations support exact computation, both in standard linguistic representations (Frank) and via alternative representations (Barner).

The issues addressed by our symposium have been a focus of intense recent interest. Within the last four years there have been a wide variety of high-profile reports from developmental, neuroscientific, comparative, and cross-linguistic/cross-cultural studies of number. Research on number is one of the fastest moving fields in cognitive science, due both to the well-defined questions that motivate research in this field and to the wide variety of methods that can be brought to bear on these questions.

The target audience of our symposium is a broad group of vision scientists, both students and faculty, who are interested in connecting serious vision science with cognitive issues of broad relevance to a wide range of communities in psychology, neuroscience, and education. In addition, the study of number provides an opportunity to link innovations in vision research methods—including psychophysical-style experimental designs, precise neuroimaging methods, and detailed computational data analysis—with deep cognitive questions about the nature of human knowledge. We anticipate that attendees of our symposium will come away with a good grasp of the current state of the art and the outstanding issues in the interface of visual and numerical processing.

Abstracts

A visual sense of number

David Burr

Evidence exists for a non-verbal capacity to apprehend number in humans (including infants) and in other primates. We investigated numerosity perception in adult humans by measuring Weber fractions with a series of techniques, and by adaptation. The Weber fraction measurements suggest that number estimation and “subitizing” share common mechanisms. Adapting to large numbers of dots decreased apparent numerosity (by a factor of 2-3), and adapting to small numbers increased it. The magnitude of adaptation depended primarily on the numerosity of the adapter, not on the size, orientation or contrast of test or adapter, and occurred with very low adapter contrasts. Varying pixel density had no effect on adaptation, showing that it depended solely on numerosity, not on related visual properties like texture density. We propose that just as we have a direct visual sense of the reddishness of half a dozen ripe cherries, so we do of their sixishness. In other words, there are distinct qualia for numerosity, as there are for colour, brightness and contrast, not reducible to spatial frequency or density of texture.

Language as a link between exact number and approximate magnitude

Michael C. Frank

Is exact number a human universal? Cross-cultural fieldwork has given strong evidence that language for exact number is an invention which is not present in all societies. This result suggests a range of questions about how learning an exact number system may interact with pre-existing analog magnitude representations. More generally, number presents a tractable case of the Whorfian question of whether speakers of different languages differ in their cognition. We addressed these questions by studying the performance of the Pirahã, an Amazonian group in Brazil, on a range of simple quantity matching tasks (first used by Gordon, 2004). We compared the performance of this group to the performance of English-speakers who were unable to use exact numerical representations due to a concurrent verbal interference task. We found that both groups were able to complete simple one-to-one matching tasks even without words for numbers and both groups relied on analog magnitude representations when faced with a more difficult task in which items in the set to be estimated were presented one at a time. However, performance between the two groups diverged on tasks in which other strategies could be used. We conclude that language for number is a “cognitive technology” which allows the manipulation of exact quantities across time, space, and changes in modality, but does not eliminate or substantially alter users’ underlying numerical abilities.

Rapid enumeration is based on a segmented visual scene

Steve Franconeri, George Alvarez

How do we estimate the number of objects in a set? One primary question is whether our estimates are based on an unbroken visual image or on a segmented collection of discrete objects. We manipulated whether individual objects were isolated from each other or grouped into pairs by irrelevant lines. If number estimation operates over an unbroken image, this manipulation should not affect estimates; but if it relies on a segmented image, then grouping pairs of objects into single units should lead to lower estimates. In Experiment 1, participants underestimated the number of grouped squares relative to when the connecting lines were ‘broken’. Experiment 2 presents evidence that this segmentation process occurred broadly across the entire set of objects. In Experiment 3, a staircase procedure provides a quantitative measure of the underestimation effect. Experiment 4 shows that the grouping effect was equally strong for a single thin line, and that it can be eliminated by a tiny break in the line. These results provide the first direct evidence that number estimation relies on a segmented input.

Constructing exact number approximately: a case study of mental abacus representations

David Barner

Exact numerical representation is usually accomplished through linguistic representations. However, an alternative route for accomplishing this task is through the use of a “mental abacus”—a mental image of an abacus (a device used in some cultures for keeping track of exact quantities and doing arithmetic via the positions of beads on a rigid frame). We investigated the nature of mental abacus representations by studying children ages 7-15 who were trained in this technique. We compared their ability to read the cardinality of “abacus flashcards” (briefly presented images of abacuses in different configurations) with their ability to enumerate sets of dots after similarly brief, masked presentation. We conducted five studies comparing abacus flashcards to: (1) random dot enumeration, (2) spatially proximate dot enumeration, (3) enumeration of dots arranged in an abacus configuration without the abacus frame, (4) enumeration of dots on a rotated abacus, (5) enumeration of dots arranged on an abacus. In all conditions, participants were faster and more accurate in identifying the cardinality of an abacus than they were in enumerating the same number of beads, even when the display was physically identical. Analysis of errors suggested that children in our studies viewed the abacus as a set of objects with each separate row of beads being a single object, each with its own independent magnitude feature. Thus, the “mental abacus” draws on pre-existing approximate and exact visual abilities to construct a highly accurate system for representing large exact number.

An interface between vision and numerical cognition

Justin Halberda

While the similarity of numerical processing across different modalities (e.g., visual objects, auditory objects, extended visual events) suggests that number concepts are domain-general even at the earliest ages (4-month-old infants), visual processing is constrained in ways that may have shaped the numerical concepts humans have developed. In this talk I discuss how online processing of numerical content is shaped by the constraints of both object-based and ensemble-based visual processing, and how numerical content and vision engage one another.

 

ARVO@VSS 2009

Advances in Understanding the Structure and Function of the Retina

Time/Room: Friday, May 8, 2009, 1:00 – 3:00 pm, Royal Ballroom 4-5
Organizer: Donald Hood, Columbia University
Presenters: Dennis Dacey, Paul R Martin, Austin Roorda, Donald C Hood

This symposium was designed in conjunction with Steve Shevell to bring the latest advances presented at ARVO to the VSS audience. There will be four talks covering the following topics. I will moderate the session and speak last, on “Advances in structural imaging of the human retina.” Before me, the speakers and topics will be: D. Dacey (Advances in retinal anatomy); P. Martin (Advances in retinal physiology); and A. Roorda (Advances in optical imaging of the human retina). The speakers are all experienced researchers and lecturers used to speaking to diverse audiences, so the level should be appropriate for all VSS attendees, from students to experts in vision or cognition.

Advances and challenges in understanding the normal retina

Speaker: Dennis Dacey, University of Washington

The vertebrate retina is one of the most accessible parts of the central nervous system for clarifying the links between neural circuits and visual coding. Advanced imaging methods are already revealing fundamental features of retinal organization and function previously inaccessible to study. As a background for considering future directions, I will review our current understanding of the cellular architecture of the primate retina. On the one hand, the retina is an elegantly simple structure at the periphery of the visual system, where mosaics of receptor cells transmit signals to interneurons and ganglion cells whose axons project a representation of the visual world to the brain. On the other hand, the retina is an amazingly complex neural machine that contains at least 80 anatomically and physiologically distinct cell populations. The interactions among most of these cell types are precisely arranged in a microlaminated sheet that provides the scaffold for ~20 separate visual pathways. In the primate, much attention has been focused on the so-called ‘midget pathway’, yet these cells, despite their numerosity, account for only two anatomically distinct visual pathways. By contrast, the great majority of visual pathways exist at relatively low density and subserve diverse functions ranging from color vision and motion detection to the pupil reflex and the setting of biological rhythms. Microdissecting the structure and function of each of these diverse low-density pathways remains a key challenge for retinal neurobiology.

Advances in understanding circuits serving colour vision.

Speaker: Paul R Martin, National Vision Research Institute of Australia & Department of Optometry and Vision Sciences, University of Melbourne, Australia

The theory of trichromatic human colour vision was proposed over 200 years ago and the existence of three types of cone photoreceptors was confirmed in the 1980s. I will summarise current views of how the signals from cone photoreceptors are organised into “blue-yellow” and “red-green” pathways in the subcortical visual system. These pathways can be distinguished at the first synapse in the visual pathway, between cone photoreceptors and cone-contacting bipolar cells, and remain segregated in the subcortical afferent visual pathway.  I will review evidence from molecular biology, anatomy, and physiology showing that the blue-yellow pathway likely forms a primordial colour vision system common to most diurnal mammals, whereas the red-green pathway is unique to primates and evolved together with high-acuity spatial vision.

Advances in optical imaging of the human retina.

Speaker: Austin Roorda PhD, University of California, Berkeley

Adaptive optics (AO) is a technique for correcting the aberrations in the eye’s optics, offering non-invasive optical access to the retina in living eyes on an unprecedented scale. The technology is being used for basic and clinical ophthalmic imaging, but the scope of applications extends well beyond imaging. By coupling scanning laser technology with adaptive optics, we are able to track and deliver light to the retina with the precision and accuracy of single cones, and can simultaneously record either perceptual (human) or electrical (monkey) responses. These measurements are helping to reveal basic properties of the human visual system.

Advances in structural imaging of the human retina.

Speaker: Donald C Hood, Columbia University

With recent advances in structural imaging, it is now possible to visualize individual layers of the human retina in vivo. After a short summary of the technique of optical coherence tomography (OCT), its application to understanding the structure and function of the normal and diseased eye will be considered. First, measurements of the thickness of the normal human receptor, inner nuclear, and ganglion cell layers will be presented, and the possibilities of using this technique to study normal human vision discussed. Next, data from patients with diseases that affect the receptors (e.g. retinitis pigmentosa) and retinal ganglion cells (e.g. glaucoma) will be presented and discussed in terms of tests of hypotheses about the relationship between behavioral (i.e. visual) loss and structural (i.e. anatomical) changes in these layers.
