13th Annual Dinner and Demo Night

Monday, May 18, 2015, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall

Please join us Monday evening for the 13th Annual VSS Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Arthur Shapiro, American University; Dejan Todorovic, University of Belgrade; and Karen Schloss, Brown University.

A Beach BBQ is served on the Beachside Sun Decks. Demos are located in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, & Spotted Curlew.

Demos are free for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a ticket for the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. A desk will also be set up on the Seabreeze Terrace at 6:30 pm.

Guest prices: Adults: $25, Youth (6-12 years old): $10, Children under 6: free

#theDress: An explanation based on a simple spatial filter

Arthur Shapiro, Oliver Flynn, Erica Dixon, American University
Individual differences in the perception of #theDress have generated numerous hypotheses regarding color constancy. Here we demonstrate that the effects of simulated illumination on #theDress can be negated with a simple spatial filter (see Shapiro & Lu, 2011). Could the #theDress phenomenon indicate variation in a spatial gain control?

#theDress: A Color Constancy Color Controversy

Rosa Lafer-Sousa, Department of Brain and Cognitive Science, MIT, Bevil Conway, Wellesley College, MIT
A photograph of a dress that drives two distinct color-percepts recently went viral. We believe the two percepts arise because the brain is guessing about the ambiguous illuminant (blueish-or-yellowish?). We show that the identical dress in two unambiguous contexts can yield the two distinct percepts that divided the Internet.

A Rotating Square Becomes Both Non-Rigid and Non-Uniform

Harald Ruda, Guillaume Riesen, Northeastern University
A simple white square rotating around its center has edges that become non-rigid at a range of speeds. In addition, a pattern of luminance variation in the shape of a darker cross becomes apparent with rotation.

Adaptive and Gaze Contingent Contrast Sensitivity Testing

Edward Ryklin, Ryklin Software, Inc.
Quickly obtain your contrast sensitivity function (CSF) by simply gazing at a series of dynamically presented Gabor patches. A complete CSF curve is generated in about two minutes.

Afterimages Foil Visual Search

Guillaume Riesen, Harald Ruda, Northeastern University
Visual search performance can be impacted by afterimages from previously fixated stimuli. Can you find the brightest target after looking at the adaptation stimulus, or will you be fooled by its afterimages?

Ambiguous Garage Roof

Kokichi Sugihara, Meiji University
The roof of a garage appears quite different when seen from two special viewpoints. The two viewpoints are realized simultaneously by a mirror. Even though we know we are seeing the same object, our brains do not correct the inconsistent perception.

Assassin’s Creed Rogue – Player Immersion with Tobii Eye Tracking

Ken Gregory, Joanna Fiedler, Tobii Technology, Inc.
With Tobii eye tracking integrated into Assassin’s Creed Rogue™, characters’ behavior is influenced by eye contact, as in real life. Aim your weapon where you look while running in another direction. Adding eye tracking to traditional controls and gameplay makes games more deeply immersive, faster, and more intense.

Attention Beyond Pixels – Bridging Machines and Humans

Qi Zhao, Chengyao Shen, Xun Huang, National University of Singapore
We will present an interactive demo of human-like gaze prediction in natural scenes that effectively bridges the semantic gap. Users can input new images from the Internet, or captured on the spot with a mobile device, and see how the model predicts where humans look.

Biological Motion: is that really me?

Andre Gouws, Peter Thompson, Rob Stone, University of York
A real-time demonstration of point-light biological motion. Walk, jump, or dance in front of the sensor and see your own point-light display. Using an Xbox Kinect sensor (approx. $50), watch how tweaking a few simple settings can apparently change your physical build, gender, and even mood!

Blur photographs by light projection

Takahiro Kawabe, Shin’ya Nishida, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Japan
We demonstrate that real photographs printed on paper can be made to appear blurred by projecting luminance patterns onto them.

Can you read without your macula? A 1440Hz gaze-contingent paradigm

Peter April, Jean-Francois Hamelin, Danny Michaud, Stephanie-Ann Seguin, VPixx Technologies
How well could you read if you developed macular degeneration? VPixx Technologies will be demonstrating a 1440Hz gaze contingent display, using our PROPixx DLP projector refreshing at 1440Hz, and our TRACKPixx high speed binocular eye tracker. The gaze contingent paradigm will simulate a scotoma in your central visual field. Can you still read?

DPI precision eye drawings

Warren Ward, Ward Technical Consulting
Using accurate eye tracking data, we show that we do not really know our own eye position. Chart recorder drawings will be produced from real-time eye position.

Glow Toggled by Shape

Minjung Kim, New York University and York University, Laurie Wilcox, Richard Murray, York University
We rendered a blobby, Lambertian disc under purely diffuse light. From the front, the disc looks like an ordinary, solid, white object. However, as the disc rotates, revealing its underside, it takes on a translucent appearance and appears to glow.

Modulation of line length judgment of Vertical Horizontal illusion by mathematical observation

Ayane Murai, Masahiro Ishii, Sapporo City University
A stimulus consisting of two lines forming an inverted-T shape creates an illusion. One can mentally divide the linked lines into two disconnected lines, then rotate and translate one of them for comparison. Our demo shows that observers underestimate the length of the vertical line when making this observation.

Motion parallax: Putting a Wii bit of depth in your world

Andre Gouws, Peter Thompson, University of York
Using just $20 worth of hardware (a Nintendo Wii remote and infrared LEDs), we will demonstrate that a simple spatial transformation of multiple 2D objects on a screen, relative to the tracked movements of an observer, can produce a striking sensation of scene depth and 3D virtual reality.

Reflections of the environment distort perceived 3D shape

Steven A. Cholewiak, Department of Psychology, Justus-Liebig-University Giessen, Germany, Gizem Küçükoğlu, Department of Psychology, New York University
We will showcase how a specular object’s image depends on the way the reflected environment interacts with the object’s geometry, and how its perceived shape depends on motion and the frequency content of the environment. Demos include perceived non-rigid deformation of shape and changes in material percept.

Reverse Stroop Battle

Caterina Ripamonti, Jakob Thomassen, Cambridge Research Systems Ltd.
Compete against your colleagues in the Reverse Stroop Battle. Two players will compete at the same time to determine who responds quickest to an identical set of stimuli presented simultaneously on two synchronised touchscreen monitors.

Robust Size Illusion Produced by Expanding and Contracting Flow Fields

Xue Dong, The Institute of Psychology, Chinese Academy of Sciences
We present a new illusion in which the positions of radially moving dots, moving within an imaginary annular window, appear shifted in the direction opposite their motion. The apparent size of the inner annular boundary shrinks during the dots’ expanding phase and dilates during the contracting phase.

Selective stimulation of penumbral cones to visualize retinal blood vessels

Manuel Spitschan, Geoffrey K. Aguirre, David H. Brainard, Department of Psychology, University of Pennsylvania
In 1819, Johann Purkinje described how a moving light source that displaces the shadow of the retinal blood vessels to adjacent cones can produce the entoptic percept of a branching tree. We demonstrate a novel method for producing a similar percept. We use a device that mixes 56 narrowband primaries under computer control, in conjunction with the method of silent substitution, to present observers with a spectral modulation that selectively targets penumbral cones in the shadow of the retinal blood vessels. Such a modulation elicits a clear Purkinje-tree percept.

Star Wars Scroll Illusion

Arthur Shapiro, Oliver Flynn, American University
The seventh episode of the Star Wars saga will be released later this year. It might be of interest to note that Kingdom’s “Leaning Tower Illusion” can also be created with the scrolling text shown at the beginning of the Star Wars movies.

stimBOLD, Simulation from Visual Stimulus to BOLD

Mark Schira, School of Psychology, University of Wollongong
We have developed the stimBOLD toolbox, which generates a prediction of measured BOLD responses from an arbitrary video input within 5-10 minutes. It is aimed at experimental planning and teaching, such as providing hands-on experience of retinotopic mapping.

Stroboscopic Ping-Pong

Brought to you by VSS and the Demo Night Committee
The title speaks for itself. Come test your skills against the vision-community’s finest in the ultimate ping-pong challenge!

Thatcherise Your Face

Andre Gouws, Peter Thompson, Mladen Sormaz, University of York
Come and see a real-time demonstration of this ever-popular perceptual phenomenon. Have your own face “thatcherised” in real time, take away a still version of your thatcherised face as a souvenir, and enter the prize competition for the “most-thatcherise-able” face of VSS 2015.

The amazing ever popular Beuchet chair

Peter Thompson, Rob Stone, Tim Andrews, University of York
Once again we are bringing the Beuchet chair, an old favourite at Demo night. This year’s chair is a new and improved design! The Beuchet chair is a thought-provoking demonstration of one of the problems our visual system has to solve – the interpretation of our eyes’ 2-D images of a 3-D world. The images of distant objects must be small but we still see them as their real size thanks to ‘size constancy’. The chair breaks size constancy by providing cues that two people at very different distances are actually at the same distance. Get your photo taken with a friend….

The Blue/Black and Gold/White Dress Pavilion

Michael Rudd, University of Washington; Maria Olkkonen, University of Pennsylvania; Bei Xiao, American University; Annette Werner, University of Tubingen; Anya Hurlbert, Newcastle University
The infamous color-switching dress will be viewed in person under a variety of spectral illumination conditions to test some of the hypotheses that have been proposed to explain the phenomenon. The dress demo will be supplemented by additional demos of materials seen under different illuminants, and by photos illustrating color constancy phenomena.

The jumping pen illusion

Rachel Denison, Center for Neural Science and Department of Psychology, New York University, Zhimin Chen, Department of Psychology, Peking University; Gerrit Maus, Department of Psychology, University of California, Berkeley
In our new “jumping pen” illusion, an object (such as a pen) appears to jump in front of an occluder when the two cross in the blind spot, due to perceptual competition between the two filled-in percepts. The perceptual consequences of this illusory depth ordering can include surprising size illusions.

The mind-writing pupil

Sebastiaan Mathot, Jean-Baptiste Melmi, Lotje van der Linden, Aix-Marseille University, France, Stefan Van der Stigchel, Helmholtz Institute, Utrecht University, The Netherlands
Are you ready to write with your mind? In this demo, we show how you can decode the focus of covert visual attention through pupillometry. Using this technique, you can select letters from a virtual keyboard by covertly attending to them.

The Pulfrich Solidity Illusion

Brent Strickland, CNRS Institut Jean Nicod; LPP
I will present a modified version of the double Pulfrich pendulum illusion (Wilson & Robinson, 1986). A pendulum appears to swing on an (illusory) elliptical path through a solid wooden beam! This demonstrates that object solidity has a low priority relative to spatiotemporal motion cues in visual processing.

The shrunken finger illusion: Unseen sights can make your finger feel shorter

Vebjørn Ekroll, Bilge Sayim, Ruth van der Hallen, Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven
When you put a semi-spherical shell on your finger and view it directly from above, the shell is perceived as a complete ball due to amodal volume completion, and your finger feels shorter than normal, as if to make space for the illusory ball.

The Watercolor Effect Colors Non-flat Two Dimensional Manifolds and Three Dimensional Volumes, Neon Color Does Not

Eric L. Altschuler, MD, PhD, Temple University School of Medicine, Xintong Li, Alice Hon, Rutgers New Jersey Medical School, Abigail Huang, Elizabeth Seckel, VS Ramachandran, UCSD
We have noticed a dramatic difference between two color spreading effects: the watercolor effect colors non-flat two-dimensional manifolds and three-dimensional volumes, whereas neon color spreading does not, coloring only flat surfaces.

Vision Scientists Love Drifting Gabors that Move

Gennady Erlikhman, Gideon Caplovitz, University of Nevada, Reno
We present several form-motion illusions using drifting Gabor patches, developed over the last few years. They include a novel version in which a figure appears to rotate even though the Gabors that form its outline change only in phase, not in position or orientation.

Wide Area Walking with HMD based Virtual Reality System

Matthias Pusch, Charlotte Li, WorldViz Virtual Reality
Wide area walking in Virtual Reality: participants experience virtual reality with today’s highest-end head-mounted displays in a large walking space that allows for natural locomotion. This creates a very high level of ‘presence’, which can be experienced with a chilling ‘fear of heights’ demo.

2015 Public Lecture – Cancelled

The 2015 Public Lecture was cancelled. The 2015 lecture will be given at the 2016 meeting.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

2015 Student Workshops

VSS Workshop for PhD Students and Postdocs:
Is there a strategy behind successful grant writing?

Sunday, May 17, 1:00 – 2:00 pm, Glades/Jasmine (Jacaranda Hall)

Moderator: Frans Verstraten
Discussants: Bart Anderson, Peter Bex, Allison Sekuler, Simon Thorpe
Research grant$ are difficult to get. For some parts of the world that is a clear understatement: your chance of getting some money is probably higher in Las Vegas than in most national research funding agencies. For some of us it is crucial to get research funding, especially if you are in a soft money institute. Clearly, some colleagues are more successful than others. It is not simply a random process. What are the secrets? In this workshop some colleagues will discuss their strategies, some successful, others not, and give some insight into how review committees work. Moreover, they might answer all the questions you always wanted to ask (about funding, that is…)

Bart Anderson

Bart is a Professorial Research Fellow in the psychology department at the University of Sydney.  After completing postdoctoral training at Rutgers and Harvard, he received multiple grants from the NIH after joining the faculty at MIT.  He has had continuous funding from the Australian Research Council since moving to Australia in 2003, including two senior research fellowships.

Peter Bex

Pete is a Professor of Psychology at Northeastern University in Boston, Massachusetts, and has worked in academic university departments, soft money research institutes, and industry. He has been writing grants for nearly 20 years and reviews grants for organizations across four continents. His grant applications have been funded and rejected by government agencies, charities, and corporations in the US and Europe.

Allison Sekuler

Allison is a Professor in Psychology, Neuroscience & Behaviour and Associate Vice-President and Dean of Graduate Studies at McMaster University. Previously, she served as McMaster’s Associate Vice-President Research and Interim Vice-President Research and International Affairs, and she served on the VSS Board from 2005-2009. She has been funded continuously by federal granting agencies since 1991, and has also received funding from provincial agencies, non-profit organizations, and most recently through an industry-related research project. Since VSS was founded 15 years ago, Allison has received $5.5M in funding for research projects as a Principal Investigator, and has been a co-investigator on large collaborative grants totaling more than $30M. She has served on grant review committees for Canadian and US federal agencies as well as for Ontario agencies, and has led numerous sessions on successful grantsmanship for graduate students, postdoctoral fellows, and faculty. When her grant funding runs out, she plans to become a professional Hearthstone player.

Simon Thorpe

Simon is director of the CerCo (Brain and Cognition Research Center) in Toulouse, France. He has spent 12 years as a member of the Brain, Behaviour and Cognition Committee of the CNRS that evaluates and recruits French scientists, and a further 10 years as a member of an Interdisciplinary commission. He has also been involved in several evaluation committees for the European Commission, and recently obtained a highly competitive ERC Advanced grant.

Frans Verstraten

Frans (now University of Sydney) funded most of his post-doc time by successfully applying for several competitive grants. Soon after he was appointed at Utrecht University in 2000, the Netherlands Organisation for Scientific Research awarded him a 1.65 million Euro Pioneer grant. Later, he collected a number of grants to support his research and his many PhD students. He was also a member of several grant review committees in different countries. He is the past president of VSS, and this is the fourth (and last) VSS workshop he has organized.

VSS Workshop for PhD Students and Postdocs:
Finding your path in graduate school

Sunday, May 17, 1:00 – 2:00 pm, Sabal/Sawgrass (Jacaranda Hall)

Moderator: Frank Tong
Discussants: Jody Culham, John Serences, Geoffrey Woodman, Yaoda Xu

Charting your path through graduate school may seem like a straightforward task with clearly marked sign posts: learn important scientific skills, work hard in the lab, run experiments and gather lots of data, write papers and get them published, then put together a hefty thesis. Really though, grad school consists of both well-defined and ill-defined problems to be solved, and the possible paths to doing well are diverse and many.

In this workshop, you will have the opportunity to hear from expert panelists who will describe their own personal adventures at navigating this exciting but sometimes mysterious and challenging terrain. We will learn how they homed in on particular research questions to pursue, the scientific tools they sought to acquire and master, cool experiments they tried that failed as well as those that worked, and the valuable “life lessons” they learned from their advisor, professors, labmates, or from their own experience. We will discuss the joys and challenges of scientific writing, the ups and downs of the review process, and how to scale the apparently daunting wall of the thesis by setting concrete goals for writing. Finally, we will discuss how successful navigation of the PhD will prepare you for embarking on the next stage of your career.

Jody Culham

Jody Culham is a Professor in the Department of Psychology at the University of Western Ontario. Her research relies on functional neuroimaging and psychophysical methods to address how vision is used to support perception and to guide actions. Jody received her PhD from Harvard University in 1997, and pursued postdoctoral work at Western University before starting her faculty position in 2001. Jody has received multiple awards for her research, including the CIHR New Investigator Award (2003), the Western Faculty Scholar Award (2008), and the NSERC E. W. R. Steacie Memorial Fellowship (2010).

John Serences

John is an Associate Professor in the Department of Psychology at UC San Diego. His research relies on psychophysics, computational modeling, EEG, and fMRI to investigate how behavioral goals and other attentional factors influence perception, memory and decision making.  He received his PhD from Johns Hopkins University in 2005, and pursued postdoctoral research at the Salk Institute before beginning his position as assistant professor in 2007. He is the 2015 recipient of the VSS Young Investigator Award.

Geoffrey Woodman

Geoff is an Associate Professor in the Department of Psychology at Vanderbilt University.  His research uses behavioral methods, electrophysiological recordings, imaging, and causal manipulations of the primate brain to understand visual attention, working memory, and cognitive control. He received his PhD in 2002 from the University of Iowa, and then pursued postdoctoral research at Vanderbilt before beginning his faculty position in 2007.  He is an Associate Editor at JEP:HPP, supported by grants from the National Eye Institute, and the 2012 recipient of the Young Investigator Award from VSS.

Yaoda Xu

Yaoda is an Associate Professor in the Department of Psychology at Harvard University. Her research focuses on how the human brain extracts visual object information from multiple levels of processing and how task-relevant information is represented in higher brain areas. She received her PhD from MIT in 2000, and pursued postdoctoral research at Harvard, MIT and Yale before beginning her faculty position at Harvard in 2008. Her research is supported by the National Eye Institute.

Frank Tong

Frank Tong is a Professor of Psychology at Vanderbilt University. He is interested in understanding the fundamental mechanisms underlying visual perception, attentional selection, object processing, and visual working memory. He has received multiple awards for his research advances, including the VSS Young Investigator Award for his work on fMRI decoding of visual and cognitive states. He particularly enjoys working with students and postdocs as they carve their path towards scientific discovery and independence, and currently serves as a VSS board member.

David Knill Memorial Symposium

Friday, May 15, 2015, 9:00 – 11:30 am, Island Ballroom

Dave Knill was a beloved scientist, teacher, and VSS regular who passed away suddenly in 2014. Dave also served on the VSS Board of Directors from 2002 to 2007. Dave got his Ph.D. from Brown University in 1990 with a thesis about the perception of surface shape and reflectance. He did a postdoc at the University of Minnesota, after which he held faculty positions at the University of Pennsylvania and the University of Rochester, where he had been since 1999. Dave left a towering legacy in many areas of vision science and decision-making, from Bayesian modeling to spatial vision to active sensing to multisensory perception to bounded rationality. In this symposium, a few of Dave’s many trainees and collaborators will commemorate his life and work.

Speakers: Dan Kersten, Paul Schrater, Robert Jacobs, Chris Sims, Krystel Huxlin, Wei Ji Ma

Dan Kersten

Bayesian vision: The early years

By the mid 1980s, computer science had helped to define vision problems to be solved, but had also shown how elusive their solutions could be. Neurophysiology was showing that primate visual processing involved significantly more cortex than had been thought. Marr’s book had just been published and understanding human vision was starting to look like a bigger and more interesting challenge. Around the same time, advances in digital signal processing were providing the means to create, filter and manipulate images. 3D computer graphics was making it possible to generate images from models of objects and scenes. Signal detection theory had been widely used for several decades, but most applications to studies of human vision had involved image patterns as the signals. This was the state of affairs when David Knill began his graduate work at Brown University. In my talk, I will describe the enormous role David played over the subsequent ten years in developing our understanding of objects and scenes as signals, images as their causal results, and from there, perception as Bayesian inference.

Paul Schrater

Perception for action

Cue integration was a critical problem when I was Dave’s graduate student and remained one of Dave’s research foci throughout his career. I will describe Dave’s key contributions to the conceptualization of cues, Bayesian approaches, and methods for elicitation. As we both moved to visuomotor control, we began to reconceptualize cue integration from a control perspective. I will trace that history and describe its deep influence on more recent work in which we challenge the idea of cues as information about privileged variables like object shape, size, or location, and instead develop the idea that integrating perceptual information should subserve the goals of action. In effect, what you are doing determines what information is relevant, which variables should be estimated, and how perceptual input relates to the variables needed to make control decisions. I’ll review Dave’s innovative approaches to assessing cue integration, from slant from texture to visual signals to hand location. I’ll also describe a cue integration experiment in which subjects successfully learned to integrate visual and auditory cues in non-standard ways in order to control an object. Throughout, Dave’s pioneering use of probabilistic modeling for conceptual development, stimulus design, and data analysis will be highlighted.

Robert Jacobs

Theoretical approaches to multisensory perception

I will start by describing research that was generated by Dave’s scientific creativity, rigor, and passion. This research, conducted by Joseph Atkins, Dave, and me, examined how inconsistent sensory signals in a multisensory (visual-haptic) environment can lead people to recalibrate how they combine depth information from multiple visual cues. I will then review subsequent research from my lab on crossmodal transfer of object shape knowledge across visual and haptic modalities, as well as work on transfer of knowledge from a perception task to a motor production task. Dave and I were both interested in sensory integration, multisensory perception, and possible relationships between perception and motor production. Talking with Dave about all of these topics was great fun and often insightful.

Chris Sims

Bounded rationality

Dave Knill wasn’t content to observe or measure human behavior; he wanted to explain it. To Dave, an explanation for behavior and brain function almost always consisted of its elegant and parsimonious restatement as the solution to a computational problem. During the time I spent in his lab, I focused on two projects: understanding the adaptive allocation of visual gaze in complex tasks (an offshoot of his work in visual-motor control with Jeff Saunders), and redefining visual working memory as the problem of minimizing behavioral costs under a capacity constraint (building on his work with Anne-Marie Brouwer). This latter project advanced information theory as a principled approach to defining the limits of visual working memory. In hindsight, both projects are really about bounded rationality—computational explorations of the idea that the brain can be highly limited, and yet simultaneously efficient. Dave was selfless as a mentor, and I am honored to have had the chance to grow as a scientist working in his laboratory.

Krystel Huxlin

Impact of vision lost and regained on direction of heading estimates from optic flow

Understanding this was the goal of the research Dave and I were pursuing with our graduate student, Laurel Issen. Dave passed away before this goal could be fully realized but it illustrates his openness and willingness to apply rigorous approaches to the study of clinical problems. I will start by explaining why our original question was (and still remains) of interest in the context of cortically blind people. My lab studies how visual training restores some of the vision lost in this patient population. It was of great interest for Dave and me to better understand why the cortically blind, who retain at least one intact hemifield of vision in both eyes, have such trouble navigating and orienting in their environment. Answering this question represents a first step in assessing whether restoring some of the vision lost is likely to impact visually-guided functions and ultimately, quality of life in this patient population.

Wei Ji Ma

Mixture priors and causal inference

Although Dave was one of the humblest people you would ever meet, the theoretical and empirical contributions he made to perception research are second to none. I will highlight mixture priors and causal inference, two intertwined computational concepts related to the inference of hidden causes. In a seminal 2003 paper on depth perception, he introduced the concept of mixture priors in vision. He later studied visuo-memory cue combination in a naturalistic reaching task with Anne-Marie Brouwer. In 2008, I worked on an extension of this study with him, in which we explored causal inference in the same reaching task. Very recently, Dave and Oh-Sang Kwon used the same notion of causal inference to explain how the brain resolves conflicts between local and global motion cues. Dave was a dear friend, an amazingly selfless and patient mentor, and one of the best scientists I have known.

2015 Davida Teller Award – Suzanne McKee

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding woman vision scientist with a strong history of mentoring.

Vision Sciences Society is honored to present Dr. Suzanne McKee with the 2015 Davida Teller Award.

Suzanne McKee

The Smith-Kettlewell Eye Research Institute

Suzanne began her scientific career at UC Berkeley, and has spent much of her research career at the Smith-Kettlewell Eye Research Institute.

Suzanne has been a hugely influential figure in vision science, and is one of a small group of researchers who laid the foundations of modern visual psychophysics. She has worked on many aspects of vision, and is responsible for a remarkably varied array of important scientific contributions in the fields of motion perception, binocular vision, color perception, amblyopia, and visual search. She has made a series of seminal and thought-provoking discoveries in these areas that have challenged existing theories. Her early work on spatial vision centered on the visual hyperacuities, where the challenge was to explain how resolution limits for vernier and stereo offsets could dramatically exceed the sampling limits imposed by the retina. Suzanne has made many fundamental contributions to understanding the stereo matching problem, and has provided insight into the role of binocular vision in amblyopia. Her work is notable for its clear and innovative conception, quality of execution, and care of interpretation.

Suzanne’s impact on the field has been profound, both directly through her work and indirectly through her mentorship. Like Davida Teller, she was a trail-blazer at a time when few women worked in vision science, overcoming many of the obstacles common in that era. Along the way, Suzanne inspired generations of both men and women to follow in her footsteps. In the course of her career, Suzanne has worked with a variety of students, post-docs, and colleagues, and those who have worked with her are extraordinarily grateful for her generosity, guidance, wisdom, and encouragement. VSS would like to thank Suzanne for her contributions to vision science.

Part-whole relationships in visual cortex

Time/Room: Friday, May 11, 1:00 – 3:00 pm, Royal Ballroom 6-8
Organizer: Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven
Presenters: Johan Wagemans, Charles E. Connor, Scott O. Murray, James R. Pomerantz, Jacob Feldman, Shaul Hochstein

< Back to 2012 Symposia

Symposium Description

With his famous paper on phi motion, Wertheimer (1912) launched Gestalt psychology, arguing that the whole is different from the sum of the parts. In fact, wholes were considered primary in perceptual experience, even determining what the parts are. Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? Are wholes constructed from combinations of the parts? If so, to what extent are the combinations additive, what does superadditivity really mean, and how does it arise along the visual hierarchy? How much of the combination process occurs in incremental feedforward iterations or horizontal connections and at what stage does feedback from higher areas kick in? What happens to the representation of the lower-level parts when the higher-level wholes are perceived? Do they become enhanced or suppressed (“explained away”)? Or, are wholes occurring before the parts, as argued by Gestalt psychologists? But what does this global precedence really mean in terms of what happens where in the brain? Does the primacy of the whole only account for consciously perceived figures or objects, and are the more elementary parts still combined somehow during an unconscious step-wise processing stage? A century later, tools are available that were not at the Gestaltists’ disposal to address these questions. 
In this symposium, we will take stock and try to provide answers from a diversity of approaches, including single-cell recordings from V4, posterior and anterior IT cortex in awake monkeys (Ed Connor, Johns Hopkins University), human fMRI (Scott Murray, University of Washington), human psychophysics (James Pomerantz, Rice University), and computational modeling (Jacob Feldman, Rutgers University). Johan Wagemans (University of Leuven) will introduce the theme of the symposium with a brief historical overview of the Gestalt tradition and a clarification of the conceptual issues involved. Shaul Hochstein (Hebrew University) will end with a synthesis of the current literature, in the framework of Reverse Hierarchy Theory. The scientific merit of addressing such a central issue, which has been around for over a century, from a diversity of modern perspectives and in light of the latest findings should be obvious. The celebration of the centennial anniversary of Gestalt psychology also provides an excellent opportunity to do so. We believe our line-up of speakers, addressing a set of closely related questions from a wide range of methodological and theoretical perspectives, promises to attract a large crowd, including students and faculty working in psychophysics, neuroscience, and modeling. In comparison with other proposals taking this centennial anniversary as a window of opportunity, ours is probably more focused and allows for a more coherent treatment of a central Gestalt issue that has occupied vision science for a long time.

Presentations

Part-whole relationships in vision science: A brief historical review and conceptual analysis

Johan Wagemans, Laboratory of Experimental Psychology, University of Leuven

Exactly 100 years ago, Wertheimer’s paper on phi motion (1912) effectively launched the Berlin school of Gestalt psychology. Arguing against elementalism and associationism, they maintained that experienced objects and relationships are fundamentally different from collections of sensations. Going beyond von Ehrenfels’s notion of Gestalt qualities, which involved one-sided dependence on sense data, true Gestalts are dynamic structures in experience that determine what will be wholes and parts. From the beginning, this two-sided dependence between parts and wholes was believed to have a neural basis. They spoke of continuous “whole-processes” in the brain, and argued that research needed to try to understand these from top (whole) to bottom (parts) rather than the other way around. However, Gestalt claims about global precedence and configural superiority are difficult to reconcile with what we now know about the visual brain, with a hierarchy from lower areas processing smaller parts of the visual field and higher areas responding to combinations of these parts in ways that are gradually more invariant to low-level changes to the input and corresponding more closely to perceptual experience. What exactly are the relationships between parts and wholes then? In this talk, I will briefly review the Gestalt position and analyse the different notions of part and whole, and different views on part-whole relationships maintained in a century of vision science since the start of Gestalt psychology. This will provide some necessary background for the remaining talks in this symposium, which will all present contemporary views based on new findings.

Ventral pathway visual cortex: Representation by parts in a whole object reference frame

Charles E. Connor, Department of Neuroscience and Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Anitha Pasupathy, Scott L. Brincat, Yukako Yamane, Chia-Chun Hung

Object perception by humans and other primates depends on the ventral pathway of visual cortex, which processes information about object structure, color, texture, and identity.  Object information processing can be studied at the algorithmic, neural coding level using electrode recording in macaque monkeys.  We have studied information processing in three successive stages of the monkey ventral pathway:  area V4, PIT (posterior inferotemporal cortex), and AIT (anterior inferotemporal cortex).  At all three stages, object structure is encoded in terms of parts, including boundary fragments (2D contours, 3D surfaces) and medial axis components (skeletal shape fragments).  Area V4 neurons integrate information about multiple orientations to produce signals for local contour fragments.  PIT neurons integrate multiple V4 inputs to produce representations of multi-fragment configurations.  Even neurons in AIT, the final stage of the monkey ventral pathway, represent configurations of parts (as opposed to holistic object structure).  However, at each processing stage, neural responses are critically dependent on the position of parts within the whole object.  Thus, a given neuron may respond strongly to a specific contour fragment positioned near the right side of an object but not at all when it is positioned near the left.  This kind of object-centered position tuning would serve an essential role by representing spatial arrangement within a distributed, parts-based coding scheme. Object-centered position sensitivity is not imposed by top-down feedback, since it is apparent in the earliest responses at lower stages, before activity begins at higher stages.  Thus, while the brain encodes objects in terms of their constituent parts, the relationship of those parts to the whole object is critical at each stage of ventral pathway processing.

Long-range, pattern-dependent contextual effects in early human visual cortex

Scott O. Murray, Department of Psychology, University of Washington, Sung Jun Joo, Geoffrey M. Boynton

The standard view of neurons in early visual cortex is that they behave like localized feature detectors. We will discuss recent results that demonstrate that neurons in early visual areas go beyond localized feature detection and are sensitive to part-whole relationships in images. We measured neural responses to a grating stimulus (“target”) embedded in various visual patterns as defined by the relative orientation of flanking stimuli. We varied whether or not the target was part of a predictable sequence by changing the orientation of distant gratings while maintaining the same local stimulus arrangement. For example, a vertically oriented target grating that is flanked locally with horizontal flankers (HVH) can be made to be part of a predictable sequence by adding vertical distant flankers (VHVHV). We found that even when the local configuration (e.g. HVH) around the target was kept the same there was a smaller neural response when the target was part of a predictable sequence (VHVHV). Furthermore, when making an orientation judgment of a “noise” stimulus that contains no specific orientation information, observers were biased to “see” the orientation that deviates from the predictable orientation, consistent with computational models of primate cortical processing that incorporate efficient coding principles. Our results suggest that early visual cortex is sensitive to global patterns in images in a way that is markedly different from the predictions of standard models of cortical visual processing and indicate an important role in coding part-whole relationships in images.

The computational and cortical bases for configural superiority

James R. Pomerantz, Department of Psychology, Rice University, Anna I. Cragin, Department of Psychology, Rice University; Kimberley D. Orsten, Department of Psychology, Rice University; Mary C. Portillo, Department of Social Sciences, University of Houston-Downtown

In the configural superiority effect (CSE; Pomerantz et al., 1977; Pomerantz & Portillo, 2011), people respond more quickly to a whole configuration than to any one of its component parts, even when the parts added to create a whole contribute no information by themselves.  For example, people discriminate an arrow from a triangle more quickly than a positive from a negative diagonal even when those diagonals constitute the only difference between the arrows and triangles.  How can a neural or other computational system be faster at processing information about combinations of parts – wholes – than about parts taken singly?   We consider the results of Kubilius et al. (2011) and discuss three possibilities: (1) Direct detection of wholes through smart mechanisms that compute higher order information without performing seemingly necessary intermediate computations; (2) the “sealed channel hypothesis” (Pomerantz, 1978), which holds that part information is extracted prior to whole information in a feedforward manner but is not available for responses; and (3) a closely related reverse hierarchy model holding that conscious experience begins with higher cortical levels processing wholes, with parts becoming accessible to consciousness only after feedback to lower levels is complete (Hochstein & Ahissar, 2002).  We describe a number of CSEs and elaborate both on these mechanisms that might explain them and how they might be confirmed experimentally.

Computational integration of local and global form

Jacob Feldman, Dept. of Psychology, Center for Cognitive Science, Rutgers University – New Brunswick, Manish Singh, Vicky Froyen

A central theme of perceptual theory, from the Gestaltists to the present, has been the integration of local and global image information. While neuroscience has traditionally viewed perceptual processes as beginning with local operators with small receptive fields before proceeding on to more global operators with larger ones, a substantial body of evidence now suggests that supposedly later processes can impose decisive influences on supposedly earlier ones, suggesting a more complicated flow of information. We consider this problem from a computational point of view. Some local processes in perceptual organization, like the organization of visual items into a local contour, can be well understood in terms of simple probabilistic inference models. But for a variety of reasons nonlocal factors such as global “form” resist such simple models. In this talk I’ll discuss constraints on how form- and region-generating probabilistic models can be formulated and integrated with local ones. From a computational point of view, the central challenge is how to embed the corresponding estimation procedure in a locally-connected network-like architecture that can be understood as a model of neural computation.

The rise and fall of the Gestalt gist

Shaul Hochstein, Departments of Neurobiology and Psychology, Hebrew University, Merav Ahissar

Reviewing the current literature, one finds physiological bases for Gestalt-like perception, but also much that seems to contradict the predictions of this theory. Some resolution may be found in the framework of Reverse Hierarchy Theory, distinguishing between implicit processes, of which we are unaware, and explicit representations, which enter perceptual consciousness. It is the conscious percepts that appear to match Gestalt predictions – recognizing wholes even before the parts. We now need to study the processing mechanisms at each level, and, importantly, the feedback interactions which equally affect and determine the plethora of representations that are formed, and to analyze how they determine conscious perception. Reverse Hierarchy Theory proposes that initial perception of the gist of a scene – including whole objects, categories and concepts – depends on rapid bottom-up implicit processes, which seem to follow (determine) Gestalt rules. Since lower level representations are initially unavailable to consciousness – and may become available only with top-down guidance – perception seems to immediately jump to Gestalt conclusions. Nevertheless, vision in the blink of an eye is the result of many layers of processing, though introspection is blind to these steps, failing to see the trees within the forest. Later, slower perception, focusing on specific details, reveals the source of Gestalt processes – and destroys them at the same time. Details of recent results, including micro-genesis analyses, will be reviewed within the framework of Gestalt and Reverse Hierarchy theories.

< Back to 2012 Symposia