11th Annual Dinner and Demo Night

Monday, May 13, 2013, 7:00 – 10:00 pm

Buffet Dinner: 7:00 – 9:00 pm, Vista Ballroom, Sunset & Vista Decks, and Mangrove Pool
Demos: 7:30 – 10:00 pm, Royal Palm 4-5, Acacia and Cypress Meeting Rooms

Please join us Monday evening for the 11th Annual VSS Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year, Gideon Caplovitz, Arthur Shapiro, Dejan Todorovic, and Maryam Vaziri Pashkam are co-curators for Demo Night.

A buffet dinner is served in the Vista Ballroom and on the Sunset Deck and Mangrove Pool area. Demos are located upstairs on the ballroom level in the Royal Palm 4-5 and Acacia and Cypress Meeting Rooms.

Some exhibitors have also prepared special demos for Demo Night.

Demo Night is free for all registered VSS attendees. Meal tickets are not required, but you must wear your VSS badge for entry to the Dinner Buffet. Guests and family members of all ages are welcome to attend the demos but must purchase a ticket for dinner. You can register your guests at any time during the meeting at the VSS Registration Desk, located in the Royal Palm Foyer. A desk will also be set up at the entrance to the dinner in the Vista Ballroom at 6:30 pm.

Guest prices: Adults: $25, Youth (6-12 years old): $10, Children under 6: free

3-D Depth-Inverting and Motion-Reversing Illusions

Thomas V. Papathomas, Rutgers University; Marcel DeHeer, 3-D Graphics, Amsterdam
We will project video animations of 3-D depth-inverting illusions, including the hollow-mask illusion and variations, the “Exorcist” illusion, and various forms of reverse-perspective illusions. Generally, depth is inverted, with concavities being perceived as convexities, and vice versa. The direction of rotation is also reversed.

Binocular Rivalry Gets Pushy

Elan Barenholtz, Loren Kogelschatz; Dept. Psychology, Center for Complex Systems, Florida Atlantic University
Wearing red/blue 3D glasses while fixating a homogeneous background results in an interesting form of binocular rivalry: both red and blue fields appear simultaneously, with a boundary that shifts erratically as the two eyes compete for dominance. The boundary can also be ‘pushed’ by sweeping a hand across the screen.

3-D Phenakistoscope

Peter Thompson, Rob Stone; University of York, UK
The 3-D phenakistoscope generates a moving sequence of real 3-D figures. The viewer spins a vertically oriented disk and views, through a series of slits, a sequence of figures on the other side of the disk. A mirror allows the figures to be seen and set in motion. Our model is easy to construct and will thrill your family and friends.

L-POST: A Screening Test for Assessing Perceptual Organization

Karien Torfs (1,2), Lee de-Wit, Kathleen Vancleef (2), Johan Wagemans (2); (1) Université Catholique de Louvain, (2) University of Leuven
We will demonstrate the Leuven Perceptual Organization Screening Test (L-POST), in which a wide range of perceptual organization processes is measured using a matching-to-sample task. The L-POST is freely available at www.gestaltrevision.be/tests, can be administered in 20 minutes, and has a neglect-friendly version. Try it yourself!

Photo to Painting Techniques

Krista Ehinger, MIT; Eric Altschuler, New Jersey Medical School
Turn your photo into a painted portrait! We demonstrate how two classes of computer vision algorithms (top-down morphable 3D models and bottom-up texture synthesis) can be used to replicate the portrait painting techniques of different artists in history.

Reflections on a True Mirror

Jason Haberman, Jordan Suchow; Harvard University
Common mirrors reflect an image of the viewer that is reversed along the depth axis. Therefore, there is a mismatch between what the viewer sees and what the rest of the world sees. In a non-reversing (i.e., “true”) mirror, a pair of angled mirrors creates an image that reflects the true self — the image as seen by others or in photographs.

Adaptation of the Vestibulo-Ocular Reflex Using Prism Goggles: An Easy and Compelling Classroom Demonstration

Carl E. Granrud, Michael Todd Allen; University of Northern Colorado
Toss balls into a trashcan while wearing prism goggles that alter the angle of visual inputs. After several misses, accuracy generally improves. When students remove the goggles, they typically miss the trashcan, but in the direction opposite to their initial misses. This demonstrates adaptation of the vestibulo-ocular reflex.

VPixx 3D Survivor Showdown

Peter April, Jean-Francois Hamelin, Stephanie-Ann Seguin; VPixx Technologies
An exciting game in which the PROPixx 500Hz 3D DLP projector presents dynamic 3D images, and pairs of players with passive 3D glasses compete for the fastest response times. VPixx will be awarding prizes to the players with the quickest reflexes!

Virtual Reality Immersion with Low-Cost Head-Mounted Displays

Matthias Pusch, Charlotte Li; WorldViz, LLC
Get fully immersed with a research-quality Virtual Reality system. Based on the WorldViz Vizard VR software, the system comes with a 3D head-mounted display, motion tracking, rapid application development tools, an application starter kit, and support & training. Walk through high-fidelity virtual environments in full scale and fully control the visual input.

Rotating Columns: Relating Structure-From-Motion, Accretion/Deletion, And Figure/Ground

Vicky Froyen, O. Daglar Tanrikulu, Jacob Feldman, Manish Singh; Rutgers University
When constant textural motion is added to figure-ground displays, the ground regions are perceived as moving as a single surface. Surprisingly, the figural regions are perceived as 3D volumes rotating in depth (like rotating columns)—despite the fact that the textural motion is not consistent with 3D rotation.

The Fuse-A-Face iPad App

Jim Tanaka, Buyun Xu, Bonnie Heptonstall, Simen Hagen; University of Victoria, British Columbia, Canada
Have you ever wondered what you would look like with Angelina Jolie’s lips or Johnny Depp’s eyes? Take a self-photo with the iPad camera and have fun combining your face with the faces of your favorite celebrities. Then post your face mash-up to your Facebook page or the VSS Face Gallery.

Fabricating Transparent Liquid From Visual Motion

Takahiro Kawabe, Kazushi Maruya, Shin’ya Nishida; NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation
We will present an illusion in which an impression of transparent liquid is created from band-passed ‘vector’ spatiotemporal distortion of a static image. We also show that translating the image distortion produces an illusion of the flow of transparent liquid, and even triggers a motion aftereffect in the direction opposite to the apparent liquid flow.

The Influence of Local and Global Motion on Shifts in Perceived Position

Peter J. Kohler, Peter U. Tse; Dartmouth College
The perceived position of a briefly presented stimulus can be shifted in the direction of nearby motion. We present several novel versions of this phenomenon, and demonstrate that local and global motion can both have an influence on the direction of the shift in perceived position.

Some Novel Spatiotemporal Boundary Formation Phenomena

Gennady Erlikhman, Phil Kellman; University of California, Los Angeles
We present several new kinds of spatiotemporal boundary formation (SBF) phenomena. In one set of demos, we show SBF with non-rigid objects of changing size, shape, and orientation. In another, we show that contours formed via SBF can serve as inputs to conventional illusory contour formation.

Color Man Walking

Gi Yeul Bae, Zheng Ma; Johns Hopkins University
A gradual color change along an iso-luminant color space creates a non-uniform percept of the rate of color change. Background luminance is a strong determinant of this rhythmical percept. We demonstrate this phenomenon using a variety of geometric arrangements of colored objects.

Rotation or Deformation? A Surprising Consequence of the Kinetic Depth Effect

Attila Farkas, Alen Hajnal; University of Southern Mississippi
This illusion reveals a trade-off among several perceptual assumptions. One such bias concerns the rigidity of the depicted object. A human head is assumed to be rigid, and is therefore not expected to be seen as spontaneously changing its shape by stretching or shrinking; the physical stretching is instead perceived as rotation.

The Garden Path Illusion – Finding Equiluminance Instantly

Bruce Bridgeman, Sabine Blaesi; University of California, Santa Cruz
My garden has flickering roses on one side and flickering foliage on the other. In the middle runs a yellow garden path without flicker. Two panels with opposite brightness gradients and different colors alternate above the chromatic flicker fusion rate; where their brightnesses match, a steady band appears. Instant equiluminance!

Identifying Nonrigid 3D Shapes From Motion Cues

Anshul Jain, Qasim Zaidi; Graduate Center for Vision Research, SUNY College of Optometry
Observers will perform a shape-identification task on novel deforming and rigid shape-from-motion stimuli, which will demonstrate that humans do not make a rigidity assumption to extract 3D shape. We will also demonstrate that observers’ performance does not deteriorate in the periphery if stimulus size is adjusted for cortical magnification.
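
For context on the final point: size scaling for eccentricity is commonly implemented in the literature as M-scaling, enlarging peripheral stimuli in inverse proportion to the cortical magnification factor. The inverse-linear form below, with E2 the eccentricity at which magnification halves, is a standard published approximation, not necessarily the authors' exact procedure:

\[ M(E) = \frac{M_0}{1 + E/E_2}, \qquad s(E) = s_0 \left( 1 + \frac{E}{E_2} \right) \]

Here E is eccentricity in degrees, s0 is the stimulus size used at the fovea, and s(E) is the enlarged size intended to engage roughly the same extent of cortex at eccentricity E.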

Dynamic Illusory Size Contrast

Christopher D. Blair, Kyle Killebrew, Gideon P. Caplovitz, University of Nevada, Reno; Ryan Mruczek, Swarthmore College
We demonstrate a new illusion in which dynamic changes in the size of one object can induce perceived dynamic changes in another moving object of constant size.

Surface Flows

Romain Vergne, Université Joseph Fourier; Pascal Barla, Inria
In this demo, we will present two novel image deformation operators that produce the illusion of surface shape depicted through textures (e.g., pigmentation) or reflections and refractions (e.g., off glossy or translucent materials). These deformations work in real time in our prototype software and can be controlled accurately and directly in the image.

The Beuchet Chair

Peter Thompson, Rob Stone; University of York, U.K.
Back by Popular Demand! In the Beuchet chair, we see the two parts of the chair (legs and seat) as belonging together even though they are at different distances from us. Consequently, figures at different distances are perceived as being at the same distance: the more distant person appears tiny and the closer figure huge.

Tusi or Not Tusi

Alex Rose-Henig, Arthur Shapiro; American University
We will present several examples of surprising spatial organization. Some examples show that linear elemental motion can produce circular global motion (we call this Tusi motion, after Nasir al-Din al-Tusi, 1201–1276), and other examples show that circular elemental motion can produce linear global motion (we call this not-Tusi motion).

Leonardo da Vinci Age Regression

Christopher Tyler, Smith-Kettlewell Institute
Although only one secure portrait of Leonardo da Vinci is known, an array of putative portraits and self-portraits is aligned and shown in inverse age sequence to provide a convincing age regression back to his infancy.

2013 Davida Teller Award – Eileen Kowler

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding woman vision scientist with a strong history of mentoring.

Vision Sciences Society is honored to present Dr. Eileen Kowler with the inaugural Davida Teller Award.

Eileen Kowler

Department of Psychology, Rutgers University

Dr. Eileen Kowler, Professor at Rutgers University, is the inaugural winner of the Davida Teller Award. Eileen transformed the field of eye movement research by showing that eye movements are not reflexive visuomotor responses, but are driven by and tightly linked to attention, prediction, and cognition.

Perhaps Eileen’s most significant scientific contribution was the demonstration that saccadic eye movements and visual perception share attentional resources. This seminal paper has become the starting point for hundreds of subsequent studies of vision and eye movements. By convincingly demonstrating that the preparation of eye movements shares resources with the allocation of visual attention, this paper also established the validity of using eye movements as a powerful tool for investigating the mechanisms of visual attention and perception, one that provides a precision and reliability otherwise difficult, if not impossible, to achieve. This work forms the basis of most of the eye movement research presented at VSS every year!

Before her landmark studies on saccades and attention, Eileen made a major contribution by showing that cognitive expectations exert strong influences on smooth pursuit eye movements. At that time smooth pursuit eye movements were thought to be driven in a machine-like fashion by retinal error signals. Eileen’s wonderfully creative experiments (e.g., pursuit targets moving through Y-shaped tubes) convinced the field that smooth pursuit is guided in part by higher-level visual processes related to expectations, memory, and cognition.

Anticipatory behavior of human eye movements

Monday, May 13, 2013, 1:00 pm, Royal Palm Ballroom

The planning and control of eye movements is one of the most important tasks accomplished by the brain because of the close connection between eye movements and visual function. Classical approaches assumed that eye movements are solely or primarily reactions to one or another type of sensory cue, but we now know that eye movements also display anticipatory responses to predicted signals or events. This talk will illustrate several examples of anticipatory behavior of both smooth pursuit eye movements and saccades. These anticipatory responses are automatic and effortless, depend on the decoding of symbolic environmental cues and on memory for recent events, and can be found in typical individuals and in those with autism spectrum disorder. Anticipatory responses show that oculomotor control is driven by internal models that take into account both the capacity limits of the motor system and the states of the surrounding visual environment.

2013 Public Lecture – David J. Lewkowicz

David J. Lewkowicz

Florida Atlantic University

David J. Lewkowicz is an internationally renowned authority on infant perceptual and cognitive development. He is currently Professor of Psychology at Florida Atlantic University and a past President of the International Society on Infant Studies.

Poster graphics created by Guillaume Doucet, McGill University.

Perceptual Expertise Begins in Infancy

Saturday, May 11, 2013, 10:00 – 11:30 am, Renaissance Academy of Florida Gulf Coast University

Contrary to conventional wisdom, infants are not passive, naïve observers. Aided by prenatally acquired perceptual abilities, starting at birth infants begin to interact with their world. As they grow, they rapidly learn about the faces, voices, speech, and language in their native environment. By their first birthday, infants become perceptual experts but, paradoxically, only for native faces, voices, speech, and language. This talk will show how the knowledge that we acquire as infants not only facilitates but also hinders our interactions with our world for the rest of our lives.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

Jointly sponsored by VSS and the Renaissance Academy of Florida Gulf Coast University.

2013 Student Workshops

VSS Workshop for PhD Students and Postdocs: How to deal with media?!

Sunday, May 12, 1:00 – 2:00 pm, Acacia 4-6

Chair: Frans Verstraten
Discussants: Aude Oliva, Allison Sekuler, and Jeremy Wolfe

When you have great results, it sometimes (and increasingly often) means that you will have to deal with journalists who want to tell their readers all about the impact of your research. The problem is that they often exaggerate and can write things you are not happy about. What should you do to stay in control when dealing with the media? It has also become more and more necessary to present your work to a larger audience: lectures for a general audience, popular books, newspaper columns, appearances on TV and radio programs, and so on. What is the best way to go about this?

These questions will be addressed in a one-hour session introduced by VSS board member Frans Verstraten. His brief introduction is followed by questions and discussion featuring a panel of media-experienced VSS members as well as a journalist. All participants will have the chance to ask all the questions they like!

Frans Verstraten

Before Frans Verstraten moved to the University of Sydney in 2012, he was a ‘regular’ on Dutch national TV. Among other roles, he was a member of the team of scientists on the popular science TV show Hoe?Zo! (How?So!), which aired for six seasons. For several years, he wrote columns for the national newspaper De Volkskrant and for Mind Magazine. Frans also wrote a book for a general audience (Psychology in a nutshell). He spends lots of time on scientific outreach; recently, some of his lectures were published as a 4-CD audio box.

Aude Oliva

Aude Oliva is at the Computer Science and Artificial Intelligence Laboratory at MIT. Her work has been featured in various media outlets, including television, radio, and newspapers, as well as in the scientific and popular press (e.g., Wired, Scientific American, Discover Magazine, The Scientist, New Scientist, CNN, and equivalent outlets in Europe). Her research has made its way into textbooks, as well as into museums of art and science. Her outreach experience includes talks and reports for various companies and industrial firms, as well as governmental agencies.

Allison Sekuler

Allison Sekuler (McMaster University) has a long history of and a strong passion for science outreach, and is a frequent commentator on her own research and that of others in the national and international media. She wrote and was featured in a series of video columns on vision for the Discovery Channel, and has recently appeared on the CBC, Discovery, and the History Channel. She has served as President of the Royal Canadian Institute for the Advancement of Science, and helped bring the Café Scientifique movement to Canada. She also was the sole scientist on the founding Steering Committee of the Science Media Centre of Canada, and she co-founded #ScienceSunday on Google+, which now has a following of over 65,000 people.

Jeremy Wolfe

Jeremy Wolfe (Brigham & Women’s Hospital) does not consider himself a media star though he does end up in the newspaper, broadcast media, and internet world from time to time. He has learned to be careful about what he says because, if he is not, he knows he will hear from his mother. Jeremy’s primary research focus is visual search, including search by experts like airport baggage screeners, radiologists, and spy satellite image analysts (hence the occasional media interest).

VSS Career Event for PhD Students and Post-docs: What’s Next?

Sunday, May 12, 1:00 – 2:00 pm, Banyan 1-2

Chair: Suzanne McKee
Discussants: Shin’ya Nishida, Lynne Kiorpes, Gunilla Haegerstrom-Portnoy

What next? How can I prepare for my career after grad school? What opportunities are available outside academia? What are the advantages and disadvantages of academic versus other careers? How could I prepare for a career in clinical research? How could I make a contribution to solving clinical problems? What kinds of problems could I work on in industry? What do I need to know about managing a family and an academic career? Can I get a break from teaching duties?

These questions and more will be addressed in a one-hour session with short introductions by our panel of experienced experts. Presentations by panel members will be followed by questions and an interactive discussion session with the audience and panel.

Suzanne McKee

Suzanne McKee is a senior scientist at Smith-Kettlewell Eye Research Institute in San Francisco, CA. She received her Ph.D. from the University of California at Berkeley. She is well-known for her psychophysical studies of all aspects of vision. She will share her experiences working on ‘soft-money’ at a non-profit institution, working in industry, and balancing family and career.

Shin’ya Nishida

Shin’ya Nishida is a Senior Distinguished Scientist at NTT (Nippon Telegraph and Telephone Corporation) Communication Science Laboratories, Japan. He received his BA, MA, and Ph.D. degrees in Psychology from the Faculty of Letters, Kyoto University. His research has focused on visual motion perception, material perception, time perception, and cross-modal interactions.

Lynne Kiorpes

Lynne Kiorpes graduated from Northeastern University with a BS in Psychology and then earned her PhD at the University of Washington with Davida Teller. She is a Professor of Neural Science and Psychology at New York University. Her current work is focused on the development of the visual system and the neural correlates of disorders of visual and cognitive development.

Gunilla Haegerstrom-Portnoy

Gunilla Haegerstrom-Portnoy received her OD and PhD degrees from the School of Optometry, University of California, Berkeley, where she is a long-time faculty member with clinical and administrative responsibilities. She is also a long-time consultant to the Smith-Kettlewell Eye Research Institute in San Francisco. Her research interests include anomalies of color vision, assessment and management of children with visual impairments, and vision function and visual performance in the elderly.


2013 Young Investigator – Roland W. Fleming

Roland W. Fleming

Kurt Koffka Junior Professor of Experimental Psychology,
University of Giessen

Roland W. Fleming is the 2013 winner of the VSS Young Investigator Award. Roland is the Kurt Koffka Junior Professor of Experimental Psychology at the University of Giessen, Germany. His work combines deep insight about perceptual processes with rigorous experimentation and computational analysis, and he communicates his findings with exemplary clarity. Roland is well known for his transformative work connecting the perception of object material properties with image statistics. Equally important is his work on shape estimation from ‘orientation fields’, which has been widely appreciated for highlighting raw information in the image that is diagnostic of 3D shape. Roland has also applied insights from perception to the advancement of computer graphics. He takes an interdisciplinary approach that combines neural modelling, psychophysical experiments, and advanced image synthesis and analysis methods. In addition to his formidable array of intellectual contributions, Roland has been a tireless contributor to the academic community, serving on editorial boards, organizing symposia and short courses, and training first-rate students and postdocs.

Elsevier/Vision Research Article

Dr. Fleming’s presentation:

Shape, Material Perception and Internal Models

Monday, May 13, 1:00 pm, Royal Palm Ballroom

When we look at objects, we don’t just recognize them, we also mentally ‘size them up’, making many visual inferences about their physical and functional properties. Without touching an object, we can usually judge how rough or smooth it is, whether it is physically stable or likely to topple over, or where it might break if we applied force to it. High-level inferences like these are computationally extremely challenging, and yet we perform them effortlessly all the time. In this talk, I will present research on how we perceive and represent the properties of materials and objects. I’ll discuss gloss perception and the inference of fluid viscosity from shape cues. Using these examples I’ll argue that the visual system doesn’t actually estimate physical parameters of materials and objects. Instead, I suggest, the brain is remarkably adept at building ‘statistical generative models’ that capture the natural degrees of variation in appearance between samples. For example, when determining perceived glossiness, the brain doesn’t estimate parameters of a physical reflection model. Instead, it uses a constellation of low- and mid-level image measurements to characterize the extent to which the surface manifests specular reflections. Likewise, when determining apparent viscosity, the brain uses many general-purpose shape and motion measurements to characterize the behaviour of a material and relate it to other samples it has seen before. I’ll argue that these ‘statistical generative models’ are both more expressive and easier to compute than physical parameters, and therefore represent a powerful middle way between a ‘bag of tricks’ and ‘inverse optics’. In turn, this leads to some intriguing future directions about how ‘generative’ representations of shape could be used for inferring not only material properties but also causal history and class membership from few exemplars.

2013 Keynote – Dora Angelaki

Dora Angelaki, Ph.D.

Dept of Neuroscience, Baylor College of Medicine

Audio and slides from the 2013 Keynote Address are available on the Cambridge Research Systems website.

Optimal integration of sensory evidence: Building blocks and canonical computations

Saturday, May 11, 2013, 7:00 pm, Royal Ballroom 4-5

A fundamental aspect of our sensory experience is that information from different modalities is often seamlessly integrated into a unified percept. Recent computational and behavioral studies have shown that humans combine sensory cues according to a statistically optimal scheme derived from Bayesian probability theory; they perform better when two sensory cues are combined. We have explored multisensory cue integration for self-motion (heading) perception based on visual (optic flow) and vestibular (linear acceleration) signals. Neural correlates of optimal cue integration during a multimodal heading discrimination task are found in the activity of single neurons in the macaque visual cortex. Neurons with congruent heading preferences for visual and vestibular stimuli (‘congruent cells’) show improved sensitivity under cue combination. In contrast, neurons with opposite heading preferences (‘opposite cells’) show diminished sensitivity under cue combination. Responses of congruent neurons also reflect trial-by-trial re-weighting of visual and vestibular cues, as expected from optimal integration, and population responses can predict the main features of perceptual cue weighting that have been observed many times in humans. The trial-by-trial re-weighting can be simulated using a divisive normalization model extended to multisensory integration. Deficits in behavior after reversible chemical inactivation provide further support of the hypothesis that extrastriate visual cortex mediates multisensory integration for self-motion perception.
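
For readers unfamiliar with the formalism, the “statistically optimal scheme” referred to above is standardly written as reliability-weighted averaging of the single-cue estimates. The equations below are a generic textbook sketch of this scheme, with the visual and vestibular heading estimates and their variances as assumed quantities, not equations taken from the talk itself:

\[ \hat{s} = w_{vis}\,\hat{s}_{vis} + w_{ves}\,\hat{s}_{ves}, \qquad w_{vis} = \frac{1/\sigma_{vis}^{2}}{1/\sigma_{vis}^{2} + 1/\sigma_{ves}^{2}}, \qquad w_{ves} = 1 - w_{vis} \]

\[ \sigma_{comb}^{2} = \frac{\sigma_{vis}^{2}\,\sigma_{ves}^{2}}{\sigma_{vis}^{2} + \sigma_{ves}^{2}} \le \min\left(\sigma_{vis}^{2},\, \sigma_{ves}^{2}\right) \]

The inequality is the behavioral signature of optimal integration: combined-cue discrimination thresholds should never be worse than those for the better single cue, which is what “performing better when two sensory cues are combined” means in practice.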

However, objects that move through the environment can distort optic flow and bias perceptual estimates of heading.  In biologically-constrained simulations, we show that decoding a mixed population of congruent and opposite cells according to their vestibular heading preferences can allow estimates of heading to be dissociated from object motion. These theoretical predictions are further supported by perceptual and neural responses: (1) Combined visual and vestibular stimulation reduces perceptual biases during object and heading discrimination tasks. (2) As predicted by model simulations, visual/vestibular integration creates a more robust representation of heading in congruent cells and a more robust representation of object motion in opposite cells.

In summary, these findings provide direct evidence for a biological basis of the benefits of multisensory integration, both for improving sensitivity and for resolving sensory ambiguities. The studies we summarize identify both the computations and the neuronal mechanisms that may form the basis for cue integration. Individuals with disorders such as autism spectrum disorder might suffer from deficits in one or more of these canonical computations, which are fundamental in helping merge our senses to interpret and interact with the world.

Biography

Dr. Angelaki is the Wilhelmina Robertson Professor & Chair of the Department of Neuroscience, Baylor College of Medicine, with a joint appointment in the Departments of Electrical & Computer Engineering and Psychology, Rice University. She holds Diploma and PhD degrees in Electrical and Biomedical Engineering from the National Technical University of Athens and the University of Minnesota. Her general area of interest is computational, cognitive and systems neuroscience. Within this broad field, she specializes in the neural mechanisms of spatial orientation and navigation using humans and non-human primates as a model. She is interested in neural coding and how complex, cognitive behavior is produced by neuronal populations. She has received many honors and awards, including the inaugural Pradel Award in Neuroscience from the National Academy of Sciences (2012), the Grass Lectureship from the Society for Neuroscience (2011), the Hallpike-Nylen Medal from the Barany Society (2006) and the Presidential Early Career Award for Scientists and Engineers (1996). Dr. Angelaki maintains a very active research laboratory funded primarily by the National Institutes of Health and a strong presence in the Society for Neuroscience and other international organizations.

Does appearance matter?

Time/Room: Friday, May 10, 3:30 – 5:30 pm, Royal 6-8
Organizer: Sarah R. Allred, Rutgers–The State University of New Jersey
Presenters: Benjamin T. Backus, Frank H. Durgin, Michael Rudd, Alan Gilchrist, Qasim Zaidi, Anya Hurlbert


Symposium Description

Vision science originated with questions about how and why things look the way they do. With the advent of physiological tools and the development of rigorous psychophysical methods, however, the language of appearance has been largely abandoned. As scientists, we rarely invoke or report on the qualities of visual appearance and instead report more objective measures such as discrimination thresholds or points of subjective equality. This is not surprising; after all, appearance is experienced subjectively, and the goal of science is objectivity. Thus, phenomenology is sometimes given short shrift in the field as a whole. Here we offer several views, sometimes disparate, grounded in both experimental data and theory, on how vision science is advanced by incorporating phenomenology and appearance. We discuss the nature of scientifically objective methods that capture what we mean by appearance, and the role of subjective descriptions of appearance in vision science. Between us, we argue that by relying on phenomenology and the language of appearance, we can provide a parsimonious framework for interpreting many empirical phenomena, including instructional effects in lightness perception, contextual effects on color constancy, systematic biases in egocentric distance perception, and the prediction of 3D shape from orientation flows. We also discuss contemporary interactions between appearance, physiology, and neural models. Broadly, we examine the criteria for identifying the behaviors that are best thought of as mediated by reasoning about appearances. This symposium is timely. Although the basic question of appearance has been central to vision science since its inception, new physiological and psychophysical methods are rapidly developing. This symposium is thus practical in the sense that these new methods can be more fully exploited by linking them to phenomenology. The symposium is also of broad interest to those interested in the big-picture questions of vision science. We expect to pull from a wide audience: the speakers represent a range of techniques (physiology, modeling, psychophysics), a diversity of institutional affiliations and tenure, and similarly broad areas of focus (e.g., cue integration, distance perception, lightness perception, basic spatial and color vision, and higher-level color vision).

Presentations

Legitimate frameworks for studying how things look

Speaker: Benjamin T. Backus, Graduate Center for Vision Research, SUNY College of Optometry

What scientific framework can capture what we might mean by “visual appearance” or “the way things look”? The study of appearance can be operationalized in specific situations, but a general definition is difficult. Some visually guided behaviors, such as changing one’s pupil size, maintaining one’s upright posture, ducking a projectile, or catching an object when it rolls off the kitchen counter, are not mediated by consciously apprehended appearances. These behaviors use vision in a fast, stereotyped, and automatic way. Compare them to assessing which side of a mountain to hike up, or whether a currently stationary object is at risk of rolling off the counter. These behaviors probably are mediated by appearance, in the sense of a general-purpose representation that makes manifest to consciousness various estimated scene parameters. One can reason using appearances, and talk about them with other people. Over the years various strategies have been employed to study or exploit appearance: recording unprompted verbal responses from naïve observers; using novel stimuli that cannot be related to previous experience; or using stimuli that force a dichotomous perceptual decision. We will review these ideas and try to identify additional criteria that might be used. An important realization for this effort is that conscious awareness need not be all-or-none; just as visual sense data are best known at the fovea, appearance is best known at the site of attentional focus.

Why do things seem closer than they are?

Speaker: Frank H. Durgin, Swarthmore College
Authors: Zhi Li, Swarthmore College

Systematic and stable biases in the visual appearance of locomotor space may reflect functional coding strategies for the sake of more precisely guiding motor actions. Perceptual matching tasks and verbal estimates suggest that there is a systematic underestimation of egocentric distance along the ground plane in extended environments. Whereas underestimation has previously been understood as a mere failure of proper verbal calibration, such an interpretation cannot account for perceptual matching results. Moreover, we have observed that the subjective geometry of distance perception on the ground plane is quantitatively consistent with the explicit overestimation of angular gaze declination, which we have measured independently of perceived distance. We suggest that there is a locally consistent expansion of specific angular variables in visual experience that is useful for action, and that this stable expansion may aid action by retaining more precise angular information, despite the information being mis-scaled approximately linearly. Actions are effective in this distorted perceived space by being calibrated to their perceived consequences (but notice that this means that spatial action measures, such as walked distance, are not directly informative about perceived distance). We distinguish our view from reports of small judgmental biases moderated by semantic, social and emotional factors on the one hand (which might or might not involve changes in visual appearance) and also from the prevailing implicit assumption that the perceptual variables guiding action must be accurate. The perceptual variables guiding action must be stable in order to support action calibration, and precise in order to support precise action. We suggest that the systematic biases evident in the visual (and haptic) phenomenology of locomotor space may reflect a functional coding strategy that can render actions that are coded in the same perceived space more effective than if space were perceived veridically.
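
To make the proposed geometry concrete: a point on the ground seen from eye height h at gaze declination gamma lies at distance d = h/tan(gamma), so an exaggerated perceived declination directly predicts compressed perceived distance. The multiplicative expansion factor k below is an illustrative stand-in for the approximately linear mis-scaling described in the abstract, not a value quoted in it:

\[ d = \frac{h}{\tan\gamma}, \qquad \hat{d} = \frac{h}{\tan(k\gamma)}, \quad k > 1 \]

For the small declinations relevant to far ground points, tan(k·gamma) is approximately k·tan(gamma), so perceived distance is underestimated by a roughly constant factor of 1/k, consistent with a linear mis-scaling that nonetheless preserves precise angular information.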

How expectations affect color appearance and how that might happen in the brain

Speaker: Michael Rudd, Howard Hughes Medical Institute; University of Washington

The highest luminance anchoring principle (HLAP) asserts that the highest-luminance surface within an illumination field appears white and that the lightnesses of other surfaces are computed relative to the highest luminance. HLAP is a key tenet of the anchoring theories of Gilchrist and Bressan, and of Land’s Retinex color constancy model. The principle is supported by classical psychophysical findings that the appearance of incremental targets is not much affected by changes in the surround luminance, while the appearance of decremental targets depends on the target-surround luminance ratio (Wallach, 1948; Heinemann, 1955). However, Arend and Spehar (1993) showed that this interpretation is too simplistic. Lightness matches made with such stimuli are strongly affected by instructions regarding either the perceptual dimension to be matched (lightness versus brightness) or the nature of the illumination when lightness judgments are made. Rudd (2010) demonstrated that instructional effects can even transform contrast effects into assimilation effects. To model these results, I proposed a Retinex-like neural model incorporating mechanisms of edge integration, contrast gain control, and top-down control of edge weights. Here I show how known mechanisms in visual cortex could instantiate the model. Feedback from prefrontal cortex to layer 6 of V1 modulates edge responses in V1 to reorganize the edge integration properties of the V1-V4 circuit. Filling-in processes in V4 compute different lightnesses depending on the V1 gain settings, which are controlled by the observer’s conscious intention to view the stimulus in one way or another. The theory accounts for the instruction-dependent shifts between contrast and assimilation.
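
As a schematic of the edge-integration component (a simplified sketch for orientation, not the exact equations of the model described above): the lightness of a target is computed by summing weighted log luminance ratios across the edges along a path from an anchor region to the target, with each edge weight set by contrast gain control and, on this proposal, adjustable by top-down feedback:

\[ \log \Phi_{target} = \sum_{i \in path} w_i \, \log \frac{L_{i+1}}{L_i} \]

Instruction-driven changes in the weights can then move a match between contrast and assimilation without any change in the stimulus, which is the behavior attributed here to prefrontal feedback onto V1 edge responses.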

How things look

Speaker: Alan Gilchrist, Rutgers – Newark

Recognizing the historical role of materialism in the advancement of modern science, psychology has long sought to get the ghosts out of its theories. Phenomenology has thus been given short shrift, in part because of its distorted form under the early sway of introspectionism. However, phenomenology can no more be avoided in visual perception than the nature of matter can be avoided in physics. Visual experience is exactly what a theory of perception is tasked to explain. If we want to answer Koffka’s question of why things look as they do, a crucial step is the description of exactly how things do look. Of course there are pitfalls. Because we cannot measure subjective experience directly, we rely heavily on matching techniques. But the instructions to subjects must be carefully constructed so as to avoid matches based on the proximal stimulus on one hand, and matches that represent cognitive judgments (instead of the percept) on the other. Asking the subject “What do you think is the size (or shade of gray) of the object?” can exclude a proximal stimulus match but it risks a cognitive judgment. Asking “What does the size (or shade of gray) look like?” can exclude a cognitive judgment but risks a proximal match. Training subjects on the correct nature of the task may represent the best way to exclude proximal stimulus matches while the use of indirect tasks may represent the best way to exclude cognitive judgments. Though there may be no perfect solution to this problem, it cannot be avoided.

Phenomenology and neurons

Speaker: Qasim Zaidi, Graduate Center for Vision Research, SUNY College of Optometry

Frequent pitfalls of relying solely on visual appearances are theories that confuse the products of perception with the processes of perception. Being blatantly reductionist and seeking cell-level explanations helps to conceive of underlying mechanisms and avoid this pitfall. Sometimes the best way to uncover a neural substrate is to find physically distinct stimuli that appear identical, while ignoring absolute appearance. The prime example was Maxwell’s use of color metamers to critically test for trichromacy and estimate the spectral sensitivities of three classes of receptors. Sometimes it is better to link neural substrates to particular variations in appearance. The prime example was Mach’s inference of the spatial gradation of lateral inhibition between neurons, from what are now called Mach-bands. In both cases, a theory based on neural properties was tested by its perceptual predictions, and both strategies continue to be useful. I will first demonstrate a new method of uncovering the neural locus of color afterimages. The method relies on linking metamers created by opposite adaptations to shifts in the zero-crossings of retinal ganglion cell responses. I will then use variations in appearance to show how 3-D shape is inferred from orientation flows, relative distance from spatial-frequency gradients, and material qualities from relative energy in spatial-frequency bands. These results elucidate the advantages of the parallel extraction of orientations and spatial frequencies by striate cortex neurons, and suggest models of extra-striate neural processes. Phenomenology is thus made useful by playing with identities and variations, and considering theories that go below the surface.

The perceptual quality of colour

Speaker: Anya Hurlbert, Institute of Neuroscience, Newcastle University

Colour has been central to the philosophy of perception, and has been invoked to support the mutually opposing views of subjectivism and realism. Here I demonstrate that by understanding colour as an appearance, we can articulate a sensible middle ground: although colour is constructed by the brain, it corresponds to a real property of objects. I will argue here that (1) colour is a perceptual quality, a reading of the outside world, taken under biological and environmental constraints, and a meaningful property in the perceiver’s internal world; (2) the core property of colour constancy makes sense only if colour is subjective; and (3) measuring colour constancy illustrates both the need for and the difficulty of subjective descriptions of appearance in vision science. For example, colour names give parsimonious descriptions of subjective appearance, and the technique of colour naming under changing illumination provides a reliable method for measuring colour constancy which is both objective and subjective at the same time. In measurements of simultaneous chromatic contrast, responses of “more red” or “more green” are also appearance descriptors which can be quantified. Achromatic adjustment methods (“adjust the patch until it appears white”) also map a physical stimulus to the subjective experience of neutrality. I will compare the results of such techniques with our recent measurements of colour constancy using techniques that do not rely on appearance descriptors, in particular, the measurement of discrimination thresholds for global illumination change in real scenes.
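
For concreteness, achromatic-adjustment data of the sort described above are often summarized with a constancy index; the form below is the common one from the colour-constancy literature, offered as an illustration rather than as the speaker's own metric:

\[ CI = 1 - \frac{b}{a} \]

Here a is the shift of the achromatic setting predicted by perfect constancy under the illuminant change (measured in a suitable chromaticity space) and b is the distance between the observer's actual setting and that prediction, so CI = 1 indicates perfect constancy and CI = 0 indicates none.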


ARVO@VSS 2013

Visual Development

Time/Room: Friday, May 10, 2013, 3:30 – 5:30 pm, Royal 1-3
Organizers: Susana Chung, University of California, Berkeley, and Anthony Norcia, Stanford University
Presenters: Yuzo Chino, Lynne Kiorpes, Dennis Levi, Gunilla Haegerstrom-Portnoy


Many visual functions continue to develop and reach adult levels only in late childhood. The successful development of normal visual functions requires ‘normal’ visual experience. The speakers in this symposium will review the time courses of normal development for selected visual functions, and discuss the consequences of abnormal visual experience during development for these functions. The prospect of recovering visual function in adults whose visual experience during development was abnormal will also be discussed, along with advances in the assessment of visual functions in children whose visual development is abnormal due to damage to the visual cortex and the posterior visual pathways.

Postnatal development of early visual cortex in macaque monkeys

Speaker: Yuzo Chino, University of Houston

Our recent studies have demonstrated that the cortical circuitry supporting the monocular and binocular receptive field properties of V1 and V2 neurons in macaque monkeys is qualitatively adult-like as early as 4 weeks of age and, if not, by 8 weeks of age. However, the functional organization of visual cortex in neonates and infants is fragile and needs ‘normal’ visual experience to complete its postnatal development. Experiencing binocular imbalance soon after birth disrupts this development and can result in binocular vision anomalies and often amblyopia. What happens to the visual brain of amblyopic subjects who experience early binocular imbalance is not well understood, apart from some aspects of early monocular form deprivation. This talk will present the results of studies in primate models of strabismic and anisometropic amblyopia, and make a proposal on how some of the monocular deficits in amblyopes may develop. Our earlier studies established that binocular imbalance in infant monkeys immediately initiates interocular suppression in their visual cortex, which persists until adulthood. We also found that the depth of amblyopia in individual strabismic monkeys is highly correlated with the strength of binocular suppression in V1 and V2. I will present our preliminary data to demonstrate that such robust binocular suppression can disrupt the functional development of cortical circuits supporting the spatial map of subunits within the receptive field of a given V2 neuron in amblyopic monkeys, and that suppression may also affect the timing and reliability of spiking by these neurons.

Postnatal development of form and motion pathways in macaque monkeys

Speaker: Lynne Kiorpes, New York University

Many visual functions are poor in infant primates and develop to adult levels during the early months and years after birth. Basic visual processes and those that are higher-order develop over different time courses. The later-developing aspects of vision are those that require the integration of information over space (such as contour integration) or space-time (such as global motion or pattern motion discrimination), and likely depend at least in part on the maturation of extrastriate visual areas. Moreover, these developmental programs can be modified by visual experience, with the later-developing functions showing greater vulnerability to abnormal visual experience. This talk will describe the development of global form and motion perception, highlight the influence of abnormal visual experience, and discuss underlying neural correlates.

Removing the brakes on brain plasticity in adults with amblyopia

Speaker: Dennis Levi, University of California, Berkeley

Experience-dependent plasticity is closely linked with the development of sensory function. Beyond this sensitive period, developmental plasticity is actively limited; however, new studies provide growing evidence for plasticity in the adult visual system. The amblyopic visual system is an excellent model for examining the “brakes” that limit recovery of function beyond the critical period. While amblyopia can often be reversed when treated early, conventional treatment is generally not undertaken in older children and adults. However, new clinical and experimental studies in both animals and humans provide evidence for neural plasticity beyond the critical period. The results suggest that perceptual learning and video game play may be effective in improving a range of visual performance measures and, importantly, the improvements may transfer to better visual acuity and stereopsis. These findings, along with the results of new clinical trials, suggest that it might be time to reconsider our notions about neural plasticity in amblyopia.

Assessing visual functions in children with cortical visual impairment

Speaker: Gunilla Haegerstrom-Portnoy, University of California, Berkeley

CVI (cortical or cerebral visual impairment) refers to bilateral reduction in vision function due to damage to the visual cortex and/or the posterior visual pathways in the absence of ocular pathology. CVI is the most common cause of bilateral severe visual impairment in children in the developed world. The causes include hypoxic–ischemic brain damage, head injury (such as shaken baby syndrome), infection, hydrocephalus and metabolic disorders. CVI occurs commonly in premature infants and is often accompanied by cerebral palsy, quadriplegia, seizure disorders and developmental delay. Assessment of vision function in children with CVI is a challenge. Preferential looking methods and sweep VEP methods can be used successfully, and in our population of children with CVI they reveal an enormous range of visual acuity values (20/50 to 20/800 VEP grating acuity) and contrast sensitivities (1.3 to 25% Michelson contrast). Large discrepancies often occur between behavioral and VEP measures of function (often a factor of 10 or more). Longitudinal follow-up of 39 children with CVI over 6.5 years on average demonstrated significant improvement in about 50% of the patients and showed that early VEP measures can predict later behavioral vision function. Improvement in vision function occurred over a surprisingly long time (into the teens).


Active Perception: The synergy between perception and action

Time/Room: Friday, May 10, 1:00 – 3:00 pm, Royal 6-8
Organizers: Michele Rucci, Boston University, and Eli Brenner, VU University
Presenters: Eli Brenner, John Wann, Heiner Deubel, Michele Rucci, Ronen Segev, Yves Frégnac


Symposium Description

Visual perception is often studied in a passive manner. The stimulus on the display is typically regarded as the input to the visual system, and the results of experiments are frequently interpreted without consideration of the observer’s motor activity. In fact, movements of the eyes, head or body are often treated as a nuisance in vision research experiments, and care is often taken to minimize them by properly constraining the observer. Like many other species, however, humans are not passively exposed to the incoming flow of sensory data. Instead, they actively seek useful information by coordinating sensory processing with motor activity. Motor behavior is a key component of sensory perception, as it enables control of sensory signals in ways that simplify perceptual tasks. The goal of this symposium is to make VSS attendees aware of recent advances in the field of active vision. Non-specialists often associate active vision with the study of how vision controls behavior. To counterbalance this view, the present workshop will instead focus on closing the loop between perception and action. That is, we will examine both the information that emerges in an active observer and how this information is used to guide behavior. To emphasize the fact that behavior is a fundamental component of visual perception, this symposium will address the functional consequences of a moving agent from multiple perspectives. We will cover the perceptual impact of very different types of behavior, from locomotion to microscopic eye movements. We will discuss the multimodal sources of information that emerge and need to be combined during motor activity. Furthermore, we will look at the implications of active vision at multiple levels, from general computational strategies to the specific impact of eye movement modulations on neurons in the visual cortex. Speakers with expertise in complementary areas, and with research programs involving a variety of techniques and focusing on different levels of analysis, were specifically selected to provide a well-rounded overview of the field. We believe that this symposium will be of interest to all VSS participants, both students and faculty. It will make clear (to students in particular) that motor activity should not be regarded as an experimental nuisance, but as a critical source of information in everyday life. The symposium will start with a general introduction to the topic and the discussion of a specific example of a closed sensory-motor loop, the interception of moving objects (Eli Brenner). It will continue by discussing the visual information that emerges during locomotion and its use in avoiding collisions (John Wann). We will then examine the dynamic strategy by which attention is redirected during grasping (Heiner Deubel), and how even microscopic “involuntary” eye movements are actually part of a closed sensory-motor loop (Michele Rucci). The last two speakers will address how the different types of visual information emerging in an active observer are encoded in the retina (Ronen Segev) and in the cortex (Yves Frégnac).

Presentations

Introduction to active vision: the complexities of continuous visual control

Speaker: Eli Brenner, Human Movement Sciences, VU University
Authors: Jeroen Smeets, Human Movement Sciences, VU University

Perception is often studied in terms of image processing: an image falls on the retina and is processed in the eye and brain in order to retrieve whatever one is interested in. Of course the eye and brain analyse the images that fall on the retina, but it is becoming ever more evident that vision is an active process. Images do not just appear on the retina; we actively move our eyes and the rest of our body, presumably to ensure that we constantly have the best possible information at our disposal for the task at hand. We do this despite the complications that moving sometimes creates for extracting the relevant information from the images. I will introduce some of the complications and benefits that arise from such active vision on the basis of research on the role of pursuing an object with one’s eyes when trying to intercept it. People are quite flexible in terms of where they look when performing an interception task, but where they look affects their precision. This is due not only to the inhomogeneity of the retina, but also to the fact that neuromuscular delays affect the combination of information from different sensory modalities. The latter complication can be overcome by relying as much as possible on retinal information (such as optic flow), but under some conditions people instead rely on combinations of retinal and extra-retinal information (efferent and afferent information about one’s own actions).

Why it’s good to look where you are going

Speaker: John Wann, Dept of Psychology, Royal Holloway University of London

The control of direction and the avoidance of collisions are fundamental to effective locomotion. A strong body of research has explored the use of optic flow and/or eye-movement signals in judging heading. This presentation will outline research on active steering that also explores the use of optic flow and eye-movement signals, but in which a key aspect of effective control is where you look and when. The talk will also briefly outline fMRI studies that highlight the neural systems supporting the control model proposed from the behavioural research. Although this model is based on principles derived from optical geometry, it conveniently converges with the heuristics used in advanced driver and motorcyclist training, and in elite cycling, for negotiating bends at speed. Research supported by the UK EPSRC, UK ESRC, and the EU FP7 Marie Curie programme.

Motor selection and visual attention in manual pointing and grasping

Speaker: Heiner Deubel, Department Psychologie, Ludwig-Maximilians-Universität München, Germany
Authors: René Gilster, Department Psychologie, Ludwig-Maximilians-Universität München, Germany; Constanze Hesse, School of Psychology, University of Aberdeen, United Kingdom

It is now well established that goal-directed movements are preceded by covert shifts of visual attention to the movement target. I will first review recent evidence in favour of this claim for manual reaching movements, demonstrating that the planning of some of these actions establishes multiple foci of attention which reflect the spatial-temporal requirements of the intended motor task. Recently our studies have focused on how finger contact points are chosen in grasp planning and how this selection is related to the spatial deployment of attention. Subjects grasped cylindrical objects with thumb and index finger. A perceptual discrimination task was used to assess the distribution of visual attention prior to the execution of the grasp. Results showed enhanced discrimination at the locations where the index finger and thumb would touch the object, as compared to action-irrelevant locations. A same-different task was used to establish that attention was deployed in parallel to the grasp-relevant locations. Interestingly, while attention was split between the action-relevant locations, the eyes tended to fixate the centre of the to-be-grasped object, reflecting a dissociation between overt and covert attention. A separate study demonstrated that a secondary, attention-demanding task affected the kinematics of the grasp, slowing the adjustment of hand aperture to object size. Our results highlight the important role of attention in grasp planning as well. The findings are consistent with the conjecture that the planning of complex movements entails the formation of a flexible “attentional landscape” which tags all those locations in the visual layout that are relevant for the impending action.

The function of microsaccades in fine spatial vision

Speaker: Michele Rucci, Boston University

The visual functions of microsaccades, the microscopic saccades that humans perform while attempting to maintain fixation, have long been debated. The traditional proposal that microsaccades prevent perceptual fading has been criticized on multiple grounds. We have recently shown that, during execution of a high-acuity task, microsaccades move the gaze to nearby regions of interest according to the ongoing demands of the task (Ko et al., Nature Neurosci., 2010). That is, microsaccades are used to examine a narrow region of space in the same way that larger saccades normally enable exploration of a visual scene. Given that microsaccades keep the stimulus within the fovea, what is the function of these small gaze relocations? Using new gaze-contingent display procedures, we were able to selectively stimulate retinal regions at specific eccentricities within the fovea. We show that, contrary to common assumptions, vision is not uniform within the fovea: a stimulus displacement of only 10 arcmin from the center of gaze already causes a significant reduction in performance on a high-acuity task. We also show that precisely directed microsaccades compensate for this lack of homogeneity, giving the false impression of uniform foveal vision in experiments that lack control of retinal stimulation. Finally, we show that the perceptual improvement given by microsaccades in high-acuity tasks results from accurately positioning the preferred retinal locus in space rather than from the temporal transients microsaccades generate. These results demonstrate that vision and motor behavior operate in a closed loop even during visual fixation.
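
As a rough illustration of the gaze-contingent logic described above (a sketch, not the authors' actual display code; the screen calibration, coordinates, and function name are assumptions), a stimulus can be held at a fixed retinal eccentricity by repositioning it on every frame relative to the latest eye-tracker sample:

```python
import math

PIXELS_PER_DEGREE = 60.0    # assumed display/viewing-distance calibration
ECCENTRICITY_ARCMIN = 10.0  # the foveal offset probed in the study above

def stimulus_position(gaze_x, gaze_y, direction_deg=0.0):
    """Place the stimulus at a fixed 10-arcmin eccentricity from current gaze.

    Called once per frame with the latest eye-tracker sample, so the
    stimulus stays at the same retinal location despite eye movements.
    """
    offset_px = (ECCENTRICITY_ARCMIN / 60.0) * PIXELS_PER_DEGREE
    theta = math.radians(direction_deg)
    return (gaze_x + offset_px * math.cos(theta),
            gaze_y + offset_px * math.sin(theta))

# Example: gaze at the center of a 1920x1080 display, stimulus 10 arcmin
# to the right of the line of sight:
print(stimulus_position(960.0, 540.0))  # -> (970.0, 540.0)
```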

Decorrelation of retinal response to natural scenes by fixational eye movements

Speaker: Ronen Segev, Ben Gurion University of the Negev, Department of Life Sciences and Zlotowski Center for Neuroscience

Fixational eye movements are critical for vision: without them the retina adapts quickly to a stationary image, and visual perception fades away in a matter of seconds. Still, the connection between fixational eye movements and retinal encoding is not fully understood. To address this issue, it was suggested theoretically that fixational eye movements are required to reduce the spatial correlations that are typical of natural scenes. The goal of our study was to put this theoretical prediction to an experimental test. Using a multi-electrode array, we measured the response of the tiger salamander retina to movies that simulated two types of stimuli: fixational eye movements over a natural scene, and a flash followed by a static view of a natural scene. We then calculated the cross-correlation of the ganglion cell responses as a function of receptive-field distance. We found that when static natural images are projected, strong spatial correlations are present in the neural response, owing to the correlations in the natural scene. In the presence of fixational eye movements, however, the correlation in the neural response drops much faster as a function of distance, which results in effective decorrelation of the channels streaming information to the brain. This observation confirms the prediction that fixational eye movements act to reduce the correlations in the retinal response, and it provides a better understanding of the contribution of fixational eye movements to information processing by the retina.
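
The core analysis, averaging pairwise response correlations within receptive-field distance bins, can be sketched as follows (a minimal illustration under assumed array shapes and names, not the authors' code):

```python
import numpy as np

def correlation_vs_distance(spike_counts, rf_positions, bin_edges):
    """Mean pairwise response correlation, binned by receptive-field distance.

    spike_counts : (n_cells, n_timebins) array of binned spike counts
    rf_positions : (n_cells, 2) array of receptive-field center coordinates
    bin_edges    : 1-D array of distance bin edges, ascending
    """
    n_cells = spike_counts.shape[0]
    corr = np.corrcoef(spike_counts)      # n_cells x n_cells correlation matrix
    sums = np.zeros(len(bin_edges) - 1)
    counts = np.zeros(len(bin_edges) - 1)
    for i in range(n_cells):
        for j in range(i + 1, n_cells):   # each cell pair once
            dist = np.linalg.norm(rf_positions[i] - rf_positions[j])
            k = np.searchsorted(bin_edges, dist, side='right') - 1
            if 0 <= k < len(sums):
                sums[k] += corr[i, j]
                counts[k] += 1
    return sums / np.maximum(counts, 1)   # mean correlation per distance bin
```

Comparing the resulting curves for the two stimulus conditions (static view vs. simulated fixational eye movements) would then show how quickly correlations fall off with distance in each case.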

Searching for a fit between the “silent” surround of V1 receptive fields and eye movements

Speaker: Yves Frégnac, UNIC-CNRS (Unit of Neuroscience, Information and Complexity), Gif-sur-Yvette, France

To what extent can emerging macroscopic perceptual features (i.e., Gestalt rules) be predicted in V1 from the characteristics of neuronal integration? We use in vivo intracellular electrophysiology in the anesthetized brain, in which the impact of visuomotor exploration on the retinal flow is controlled by simulating realistic but virtual classes of eye movements (fixation, tremor, shift, saccade). By comparing synaptic echoes to different types of full-field visual statistics (sparse noise, gratings, natural scenes, dense noise, apparent-motion noise) in which the retinal effects of virtual eye movements are, or are not, included, we have reconstructed the perceptual association field of visual cortical neurons extending 10 to 20° away from the classical discharge field. Our results show that for any V1 cortical cell there exists a fit between the spatio-temporal organization of its subthreshold “silent” (nCRF) and spiking (CRF) receptive fields and the dynamic features of the retinal flow produced by specific classes of eye movements (saccades and fixation). The functional features of the resulting association field are interpreted as facilitating the integration of feed-forward inputs yet to come, by propagating a kind of network belief in the possible presence of Gestalt-like percepts (co-alignment, common fate, filling-in). Our data support the existence of global association fields binding Form and Motion, which operate during low-level (non-attentive) perception as early as V1 and become dynamically regulated by the retinal flow produced by natural eye movements. Current work is supported by CNRS and by grants from the ANR (NatStats and V1-complex) and the European Community FET-Bio-I3 programs (IP FP6: FACETS (015879); IP FP7: BRAINSCALES (269921) and Brain-i-nets (243914)).
