Symposium Submission Policies

  • The organizer must be a current 2024 member in good standing.
  • Invited speakers must register for the meeting but need not be members.
  • No speaker or organizer can participate in more than one symposium.
  • Speaker substitutions are not allowed.
  • If a symposium talk has more than one author, it must be presented by the first author.
  • Submitting a symposium proposal or speaking in a symposium does not prevent you from submitting an abstract for a talk or poster presentation at VSS.
  • Before submitting a proposal, organizers must ensure that all speakers are committed to participating in the symposium and registering for the meeting.

For questions about Symposium Submission Policies, please contact us.

2018 Public Lecture – Cancelled

The 2018 Public Lecture was cancelled.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

16th Annual Dinner and Demo Night

Monday, May 21, 2018, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall

Please join us Monday evening for the 16th Annual VSS Dinner and Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Arthur Shapiro, American University; Gennady Erlikhman, University of Nevada, Reno; and Karen Schloss, University of Wisconsin–Madison.

Demos are free to view for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a VSS Friends and Family Pass to attend the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. Guest passes may also be purchased at the BBQ event, beginning at 5:45 pm.

The following demos will be presented from 7:00 to 10:00 pm, in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall:

Paradoxical memory color for faces

Rosa Lafer-Sousa, MIT; Maryam Hasantash, Institute for Research in Fundamental Sciences, Iran;  Arash Afraz, National Institute of Mental Health, NIH; Bevil R. Conway, National Institute of Mental Health, NIH and National Eye Institute, NIH

In this demo we use monochromatic sodium light (589 nm), which renders vision objectively achromatic, to elicit memory colors for familiar objects in a naturalistic setting.  The demo showcases a surprising finding, that faces, and only faces, provoke a paradoxical memory color, appearing greenish.

Vision in the extreme periphery:  Perceptual illusions of flicker, selectively rescued by sound

Daw-An Wu, California Institute of Technology; Takashi Suegami, California Institute of Technology and Yamaha Motors Corporation; Shinsuke Shimojo, California Institute of Technology

Synchronously pulsed visual stimuli, when spread across central and peripheral vision, appear to pulse at different rates.  When spread bilaterally into extreme periphery (70°+), the left and right stimuli can also appear different from each other.  Pulsed sound can cause some or all of the stimuli to become perceptually synchronized.

Don’t Go Chasing Waterfalls

Matthew Harrison and Matthew Moroz, University of Nevada Reno

In this VR version of ‘High Phi’, illusory motion jumps are perceived when the random noise texture of a moving 3D tunnel is replaced with new random textures. In 2D, these illusory jumps tend to be perceived in the direction opposite the preceding motion, but in 3D, this is not always the case!

The UW Virtual Brain Project: Exploring the visual system in immersive virtual reality

Chris Racey, Bas Rokers, Nathaniel Miller, Jacqueline Fulvio, Ross Tredinnick, Simon Smith, and Karen B. Schloss, University of Wisconsin – Madison

The UW Virtual Brain Project allows you to explore the visual system in virtual reality. It helps to visualize the flow of information from the eyes to visual cortex. The ultimate aim of the project is to improve neuroscience education by leveraging our natural abilities for space-based learning.

Augmented Reality Art

Jessica Herrington, Australian National University

Art inspired by vision science! Come and explore augmented reality artworks that contain interactive, digital sculptures. Augmented reality artworks will be freely available for download as iPhone apps.

Staircase Gelb effect

Alan Gilchrist, Rutgers University

A black square suspended in midair and illuminated by a spotlight appears white. Now successively lighter squares are added within the spotlight. Each new square appears white and makes the other squares appear to get darker. This demonstrates the highest luminance rule of lightness anchoring and gamut compression.

Hidden in Plain Sight!

Peter April, Jean-Francois Hamelin, Stephanie-Ann Seguin, and Danny Michaud, VPixx Technologies

Can visual information be hidden in plain sight?  We use the PROPixx 1440Hz projector, and the TRACKPixx 2kHz eye tracker, to demonstrate images which are invisible until you make a rapid eye movement.  We implement retinal stabilization to show other images that fade during fixations.  Do your eyes deceive?

Do I know you? Discover your eye gaze strategy for face recognition

Janet Hsiao and Cynthia Chan, University of Hong Kong

At VSS, do you often wonder whether you’ve seen someone before? Are you using good gaze strategies for face recognition? Try our hidden Markov modeling approach (EMHMM) to summarize your gaze strategy in terms of personalized regions-of-interest and transition patterns, and to quantitatively assess its similarity to commonly used strategies.

Virtual Reality reconstruction of Mondrian’s ‘Salon for Madame B’

Johannes M. Zanker and Jasmina Stevanov, Royal Holloway University of London; Tim Holmes, Tobii Pro Insight

We present the first Virtual Reality realisation of Mondrian’s design for a salon painted in his iconic style which was never realised in his lifetime. Visitors can explore the VR space whilst their eye-movements are tracked allowing the researcher to evaluate possible reasons why Mondrian did not pursue his plan.

Hidden Stereo: Hiding phase-based disparity to present ghost-free 2D images for naked-eye viewers

Shin’ya Nishida, Takahiro Kawabe, and Taiki Fukiage, NTT Communication Science Lab

When a conventional stereoscopic display is viewed without 3D glasses, image ghosts are visible due to the fusion of stereo image pairs including binocular disparities. Hidden Stereo is a method to hide phase-based binocular disparities after image fusion, and to present ghost-free 2D images to viewers without glasses.

Quick estimation of contrast sensitivity function using a tablet device

Kenchi Hosokawa and Kazushi Maruya, NTT Communication Science Laboratories

Contrast sensitivity functions (CSFs) are useful but are sometimes impractical to measure due to time limitations. We demonstrate web-based applications that measure the CSF in a short time (<3 min) at moderate precision. These applications allow CSF data to be collected from various types of observers and under various experimental circumstances.

The optical illusion blocks: Optical illusion patterns in a three dimensional world

Kazushi Maruya, NTT Communication Science Laboratories; Tomoko Ohtani, Tokyo University of the Arts

The optical illusion blocks are a set of toy blocks whose surfaces have particular geometric patterns. When combined, the blocks induce various types of optical illusion such as shape from shading, cafe wall, and subjective contour. With the blocks, observers can learn rules behind the illusions through active viewpoint changes.

Dis-continuous flash suppression

Shao-Min (Sean) Hung, Caltech; Po-Jang (Brown) Hsieh, Duke-NUS Medical School; Shinsuke Shimojo, Caltech

We report a novel variant of continuous flash suppression (CFS): dis-continuous flash suppression (dCFS), in which the suppressor and the suppressed stimulus are presented intermittently. Our findings suggest approximately two-fold suppression power, as evidenced by lower breaking rates and longer suppression durations. dCFS may thus be suitable for future investigations of unconscious processing.

Virtual Reality Collaboration with interactive outside-in and tether-less inside-out tracking setup

Matthias Pusch, Dan Tinkham, and Sado Rabaudi, WorldViz

Multiple participants can interact with both local and remote participants in VR. The demo combines an outside-in tracking paradigm for some participants with inside-out integrated tracking for others. Importantly, the inside-out system is entirely tether-less (using so-called consumer backpack VR), and the user is free to explore the entire indoor floor plan.

The illusion of floating objects caused by light projection of cast shadow

Takahiro Kawabe, NTT Communication Science Laboratories 

We demonstrate an illusion wherein objects in pictures and drawings apparently float in the air due to the light projection of cast shadow patterns onto them. We also conduct a demonstration of a light projection method making an opaque colored paper appear to be a transparent color film floating in the air.

Extension of phenomenal phenomena toward printed objects

Takahiro Kawabe, NTT Communication Science Laboratories 

We demonstrate that the phenomenal phenomena (Gregory and Heard, 1983) can be extended toward printed objects placed against a background with luminance modulation. In our demo, the audience experiences not only the illusory translation of the printed objects but also their illusory expansion/contraction and rotation.

Stereo Illusions in Augmented Reality

Moqian Tian, Meta Company

Augmented reality with environmental tracking and real-world lighting projection can uncover new perspectives on some classical illusions. We will present the Hollow Face Illusion, the Necker Cube, and the Crazy Nuts Illusion in multiple conditions, while observers interact with the holograms through the Meta 2 AR headset.

A Color-Location Misbinding Illusion

Cristina R. Ceja and Steven L. Franconeri, Northwestern University

Illusory conjunctions, formed by misbound features, can occur when attention is overloaded or diverted (Treisman & Schmidt, 1982). Here we provide the opportunity to experience a new illusory conjunction, using even simpler stimulus displays.

Thatcherize your face

Andre Gouws, York Neuroimaging Centre, University of York; Peter Thompson, University of York

The Margaret Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherized’ in real time, and we will print you a copy of the image to take away.

The Ever-Popular Beuchet Chair

Peter Thompson, Rob Stone and Tim Andrews, University of York

A favorite at Demo Night for the past few years, the Beuchet chair is back again. The two parts of the chair are at different distances, and the visual system fails to apply size constancy appropriately. The result is that people can be shrunk or turned into giants.

Illusory grating

William F. Broderick, New York University

By windowing a large two-dimensional sinusoidal grating, a perpendicular illusory grating is created. This illusion is quite strong, and depends on the overall size of the image, as well as the relative size of the grating and windows.

Look where Simon says without delay

Katia Ripamont, Cambridge Research Systems; Lloyd Smith, Cortech Solutions

Can you beat the Simon effect using your eye movements? Compete with other players to determine who can look where Simon says without delay. All you need to do is to control your eye movements before they run off. It sounds so simple and yet so difficult!

Chromatic induction from achromatic stimulus

Leone Burridge, Artist/Medical practitioner in private practice

These are acrylic paintings made with only black and white pigments. With sustained gaze, subtle colours become visible.

Grandmother’s neuron

Katerina Malakhova, Pavlov Institute of Physiology

If we could find a grandma cell, what kind of information would this cell code? Artificial neural networks allow us to study latent representations which activate neurons. I choose a unit with the highest selectivity for grandmother images and visualize a percept which drives this neuron.

Planarian Eyespot(s) – Amazing redundancy in visual-motor behavior

Kensuke Shimojo, Chandler School, Pasadena, CA; Eiko Shimojo, California Institute of Technology

Dissected planarian body parts, even with incomplete eyespots, show ‘light-avoiding behavior’ long before regeneration of the entire body (and sensory-motor organs) is complete. We will demonstrate this live (in Petri dishes) and on video.

Real-Life Continuous Flash Suppression – Suppressing the real world from awareness

Uri Korisky, Tel Aviv University

‘Real life CFS’ is a new method for suppressing real life stimuli. Using augmented reality goggles, CFS masks (“mondrians”) are presented to your dominant eye, causing whatever is presented to your non-dominant eye to be suppressed from awareness – even real objects placed in front of you.

The Motion Induced Contour Revisited

Gideon Caplovitz and Gennady Erlikhman, University of Nevada, Reno

As a tribute to Naomi Weisstein (1939–2015), we recreate and introduce some novel variants of the Motion Induced Contour, which was first described in a series of papers published in the 1980s.

Illusory Apparent Motion

Allison K. Allen, Nicolas Davidenko and Nathan H. Heller, University of California, Santa Cruz

When random textures are presented at a moderate pace, observers report experiencing coherent percepts of apparent motion, which we term Illusory Apparent Motion (IAM). In this demo, we will cue observers to experience different types of motion percepts from random stimuli by using verbal suggestion, action commands, and intentional control.

Illusory color in extreme-periphery

Takashi Suegami, California Institute of Technology and Yamaha Motors Corporation; Daw-An Wu and Shinsuke Shimojo, California Institute of Technology

Our new demo will show that foveal color cue can induce illusory color in extreme-periphery (approx. 70°-90°) where cone cells are less distributed. One can experience, for example, clear red color perception for extreme-peripheral green flash, with isoluminant red foveal pre-cueing (or vice versa).

Silhouette Zoetrope

Christine Veras, University of Texas at Dallas; Gerrit Maus, Nanyang Technological University

The Silhouette Zoetrope is a contemporary innovation on the traditional zoetrope. In this new device, an animation of moving silhouettes is created by sequential cutouts placed outside a rotating empty cylinder, with slits illuminating the cutouts successively from the back. The device combines motion, mirroring, depth, and size illusions.

Spinning reflections on depth from spinning reflections

Michael Crognale, University of Nevada, Reno

A trending novelty toy, when spun, induces a striking depth illusion from disparity in specular reflections from point sources. Normally, “specular” disparity from static curved surfaces is discounted or contributes to perceived surface curvature. Here, motion obscures the surface features that compete with the depth cues, resulting in a strong depth illusion.

High Speed Gaze-Contingent Visual Search

Kurt Debono and Dan McEchron, SR Research Ltd.

Try to find the target in a visual search array which is continuously being updated based on the location of your gaze. High speed video based eye tracking combined with the latest high speed monitors make for a compelling challenge.

The photoreceptor refresh rate

Allan Hytowitz, Dyop Vision Associates

The Dyop, a dynamic optotype (a segmented spinning ring), provides a much more precise, consistent, efficient, and flexible means of measuring acuity. Adjusting the rotation rate of the segmented ring determined the optimum rate, as well as the photoreceptor refresh rate for perceived retrograde motion.

Stereo psychophysics by means of continuous 3D target-tracking in VR

Benjamin T. Backus and James J. Blaha, Vivid Vision Labs, Vivid Vision, Inc.; Lawrence K. Cormack and Kathryn L. Bonnen, University of Texas at Austin

What’s your latency for tracking binocular disparity? Let us cross-correlate your hand motion with our flying bugs to find out.

Motion-based position shifts

Stuart Anstis, University of California, San Diego; Patrick Cavanagh, Glendon College, York University

Motion-based position shifts are awesome!


VSS Staff

Back by popular demand. Strobe lights and ping pong!

2018 Young Investigator – Melissa Le-Hoa Võ

Vision Sciences Society is honored to present Melissa Le-Hoa Võ with the 2018 Young Investigator Award.

The Young Investigator Award is an award given to an early stage researcher who has already made a significant contribution to our field. The award is sponsored by Elsevier, and the awardee is invited to submit a review paper to Vision Research highlighting this contribution.

Melissa Le-Hoa Võ

Professor of Cognitive Psychology, Goethe Universität Frankfurt; Head of the DFG-funded Emmy Noether Group, Scene Grammar Lab, Goethe Universität Frankfurt

Reading Scenes: How Scene Grammar Guides Attention and Perception in Real-World Environments

Dr. Võ will talk during the Awards Session
Monday, May 21, 2018, 12:30 – 1:30 pm, Talk Room 1-2

How do you recognize that little bump under the blanket as being your kid’s favorite stuffed animal? What no state-of-the-art deep neural network or sophisticated object recognition algorithm can do, is easily done by your toddler. This might seem trivial, however, the enormous efficiency of human visual cognition is actually not yet well understood.

Visual perception is much more than meets the eye. While bottom-up features are of course an essential ingredient of visual perception, my work has mainly focused on the role of the “invisible” determinants of visual cognition, i.e. the rules and expectations that govern scene understanding. Objects in scenes — like words in sentences — are arranged according to a “grammar”, which allows us to immediately understand objects and scenes we have never seen before. Studying scene grammar therefore provides us with the fascinating opportunity to study the inner workings of our mind as it makes sense of the world and interacts with its complex surroundings. In this talk, I will highlight some recent projects from my lab in which we have tried to shed more light on the influence of scene grammar on visual search, object perception and memory, its developmental trajectories, as well as its role in the ad-hoc creation of scenes in virtual reality scenarios. For instance, we found that so-called “anchor objects” play a crucial role in guiding attention and anchoring predictions about other elements within a scene, thereby laying the groundwork for efficient visual processing. This opens up exciting new avenues for investigating the building blocks of our visual world that our Scene Grammar Lab is eager to pursue.

Elsevier/Vision Research Article


Melissa Võ received her PhD from the Ludwig-Maximilians University in Munich in 2009. She then moved on to perform postdoctoral work, first with John Henderson at the University of Edinburgh, and then with Jeremy Wolfe at Harvard Medical School. Dr. Võ’s work has been supported by numerous grants and fellowships, including grants from the NIH and the German Research Council. In 2014, Melissa Võ moved back to Germany where as freshly appointed Full Professor for Cognitive Psychology she set up the Scene Grammar Lab at the Goethe University Frankfurt.

Dr. Võ is a superb scientist who has already had an extraordinary impact on our field. Her distinctive contribution has been to develop the concept of “scene grammar”, particularly scrutinizing the distinction between semantics and syntax in visual scenes. The distinction can be illustrated by considering scene components that are semantically incongruent (e.g. a printer in a kitchen) versus those that are syntactically incongruent (e.g. a cooking pot in a kitchen, floating in space rather than resting on a counter). Dr. Võ has used eye-tracking and EEG techniques in both children and adults to demonstrate that the brain processes semantic and syntactic visual information differentially, and has shown that scene grammar not only aids visual processing but also plays a key role in efficiently guiding search in real-world scenarios. Her work has implications in many areas, ranging from computer science to psychiatry. In addition to being a tremendously innovative and productive researcher, Dr. Võ is an active mentor of younger scientists and an award-winning teacher. Her outstanding contributions make her a highly worthy recipient of the 12th VSS Young Investigator Award.





2018 Funding Workshop

VSS Workshop on Grantsmanship and Funding

No registration required. First come, first served, until full.

Saturday, May 19, 2018, 1:00 – 2:00 pm, Sabal/Sawgrass

Moderator: Mike Webster, University of Nevada, Reno
Discussants: Todd Horowitz, National Cancer Institute; Lawrence R. Gottlob, National Science Foundation; and Cheri Wiggs, National Eye Institute

You have a great research idea, but you need money to make it happen. You need to write a grant. What do you need to know before you write a grant? How does the granting process work? Writing grants to support your research is as critical to a scientific career as data analysis and scientific writing. In this year’s session, we are focusing on the work of the US National Institutes of Health (NIH) and the US National Science Foundation (NSF). Cheri Wiggs (National Eye Institute) and Todd Horowitz (National Cancer Institute) will provide insight into the inner workings of the NIH extramural research program. Larry Gottlob will represent the Social, Behavioral and Economic Sciences (SBE) Directorate of the NSF. There will be time for your questions.

Todd Horowitz

National Cancer Institute

Todd S. Horowitz, Ph.D., is a Program Director in the Behavioral Research Program’s (BRP) Basic Biobehavioral and Psychological Sciences Branch (BBPSB), located in the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute (NCI). Dr. Horowitz earned his doctorate in Cognitive Psychology at the University of California, Berkeley in 1995. Prior to joining NCI, he was Assistant Professor of Ophthalmology at Harvard Medical School and Associate Director of the Visual Attention Laboratory at Brigham and Women’s Hospital. He has published more than 70 peer-reviewed research papers in vision science and cognitive psychology. His research interests include attention, perception, medical image interpretation, cancer-related cognitive impairments, sleep, and circadian rhythms.

Lawrence R. Gottlob

National Science Foundation

Larry Gottlob, Ph.D., is a Program Director in the Perception, Action, and Cognition program at the National Science Foundation. His permanent home is in the Psychology Department at the University of Kentucky, but he is on his second rotation at NSF. Larry received his PhD from Arizona State University in 1995 and has worked in visual attention, memory, and cognitive aging.

Cheri Wiggs

National Eye Institute

Cheri Wiggs, Ph.D., serves as a Program Director at the National Eye Institute (of the National Institutes of Health). She oversees extramural funding through three programs — Perception & Psychophysics, Myopia & Refractive Errors, and Low Vision & Blindness Rehabilitation. She received her PhD from Georgetown University in 1991 and came to the NIH as a researcher in the Laboratory of Brain and Cognition. She made her jump to the administrative side of science in 1998 as a Scientific Review Officer. She currently represents the NEI on several trans-NIH coordinating committees (including BRAIN, Behavioral and Social Sciences Research, Medical Rehabilitation Research) and was appointed to the NEI Director’s Audacious Goals Initiative Working Group.

Vision Sciences Society