2018 Public Lecture – Cancelled

The 2018 Public Lecture was cancelled.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

Disclosure of Conflicts of Interest

It is the responsibility of the First, Corresponding, or Presenting Author to list on the abstract any relevant commercial relationships or other conflicts of interest.

For each abstract and presentation, the First/Corresponding Author must also disclose the name of the organization with which a commercial relationship exists for the First Author and each Co-author.

Each Platform Presenter is to orally state and display on a slide at the beginning of the presentation all relevant commercial relationships or other conflicts of interest, as well as the name of the organization(s) with which a commercial relationship(s) exists for the First Author and each Co-author.

At a poster presentation, the presenter must display the relevant commercial relationships or other conflicts of interest, as well as the name of the organization(s) with which a commercial relationship(s) exists for the First Author and each Co-author.

Conformity with this Policy is a requirement.

16th Annual Dinner and Demo Night

Monday, May 21, 2018, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall

Please join us Monday evening for the 16th Annual VSS Dinner and Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Arthur Shapiro, American University; Gennady Erlikhman, University of Nevada, Reno; and Karen Schloss, University of Wisconsin–Madison.

Demos are free to view for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a VSS Friends and Family Pass to attend the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. Guest passes may also be purchased at the BBQ event, beginning at 5:45 pm.

The following demos will be presented from 7:00 to 10:00 pm, in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall:

Paradoxical memory color for faces

Rosa Lafer-Sousa, MIT; Maryam Hasantash, Institute for Research in Fundamental Sciences, Iran;  Arash Afraz, National Institute of Mental Health, NIH; Bevil R. Conway, National Institute of Mental Health, NIH and National Eye Institute, NIH

In this demo we use monochromatic sodium light (589 nm), which renders vision objectively achromatic, to elicit memory colors for familiar objects in a naturalistic setting. The demo showcases a surprising finding: faces, and only faces, provoke a paradoxical memory color, appearing greenish.

Vision in the extreme periphery:  Perceptual illusions of flicker, selectively rescued by sound

Daw-An Wu, California Institute of Technology; Takashi Suegami, California Institute of Technology and Yamaha Motors Corporation; Shinsuke Shimojo, California Institute of Technology

Synchronously pulsed visual stimuli, when spread across central and peripheral vision, appear to pulse at different rates. When spread bilaterally into the extreme periphery (70°+), the left and right stimuli can also appear different from each other. Pulsed sound can cause some or all of the stimuli to become perceptually synchronized.

Don’t Go Chasing Waterfalls

Matthew Harrison and Matthew Moroz, University of Nevada, Reno

In ‘High Phi’ in VR, illusory motion jumps are perceived when the random noise texture of a moving 3D tunnel is replaced with new random textures. In 2D, these illusory jumps tend to be perceived in the direction opposite the preceding motion, but in 3D, this is not always the case!

The UW Virtual Brain Project: Exploring the visual system in immersive virtual reality

Chris Racey, Bas Rokers, Nathaniel Miller, Jacqueline Fulvio, Ross Tredinnick, Simon Smith, and Karen B. Schloss, University of Wisconsin–Madison

The UW Virtual Brain Project allows you to explore the visual system in virtual reality. It helps to visualize the flow of information from the eyes to visual cortex. The ultimate aim of the project is to improve neuroscience education by leveraging our natural abilities for space-based learning.

Augmented Reality Art

Jessica Herrington, Australian National University

Art inspired by vision science! Come and explore augmented reality artworks that contain interactive, digital sculptures. Augmented reality artworks will be freely available for download as iPhone apps.

Staircase Gelb effect

Alan Gilchrist, Rutgers University

A black square suspended in midair and illuminated by a spotlight appears white. Now successively lighter squares are added within the spotlight. Each new square appears white and makes the other squares appear to get darker. This demonstrates the highest luminance rule of lightness anchoring and gamut compression.

Hidden in Plain Sight!

Peter April, Jean-Francois Hamelin, Stephanie-Ann Seguin, and Danny Michaud, VPixx Technologies

Can visual information be hidden in plain sight? We use the PROPixx 1440 Hz projector and the TRACKPixx 2 kHz eye tracker to demonstrate images which are invisible until you make a rapid eye movement. We implement retinal stabilization to show other images that fade during fixations. Do your eyes deceive?

Do I know you? Discover your eye gaze strategy for face recognition

Janet Hsiao and Cynthia Chan, University of Hong Kong

At VSS, do you often wonder whether you’ve seen someone before? Are you using good gaze strategies for face recognition? Try our hidden Markov modeling approach (EMHMM; http://visal.cs.cityu.edu.hk/research/emhmm/) to summarize your gaze strategy in terms of personalized regions-of-interest and transition patterns, and quantitatively assess its similarity to commonly used strategies.
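
For the curious, the gist of the approach can be sketched in a few lines. The sketch below uses the open-source hmmlearn package rather than the authors' EMHMM toolbox, and its simulated fixation data, three-state assumption, and likelihood-based similarity score are illustrative stand-ins, not the demo's actual pipeline:

```python
# Toy sketch: summarize gaze as an HMM whose hidden states act as
# personalized regions-of-interest (ROIs) and whose transition matrix
# captures the scan strategy. Fixation data here are simulated.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

rng = np.random.default_rng(0)
rois = np.array([[0.3, 0.7], [0.7, 0.7], [0.5, 0.4]])  # e.g., two eyes, nose

# Two simulated viewing sequences of 30 fixations cycling among the ROIs
X = np.vstack([rng.normal(rois[i % 3], 0.03) for i in range(60)])
lengths = [30, 30]

model = hmm.GaussianHMM(n_components=3, covariance_type="full", random_state=0)
model.fit(X, lengths)

print("ROI centers (state means):\n", model.means_.round(2))
print("Transition matrix:\n", model.transmat_.round(2))
# Scoring one viewer's fixations under another viewer's fitted model gives a
# simple likelihood-based similarity between gaze strategies.
print("log-likelihood:", model.score(X, lengths))
```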

Virtual Reality reconstruction of Mondrian’s ‘Salon for Madame B’

Johannes M. Zanker and Jasmina Stevanov, Royal Holloway University of London; Tim Holmes, Tobii Pro Insight

We present the first Virtual Reality realisation of Mondrian’s design for a salon painted in his iconic style, a design never realised in his lifetime. Visitors can explore the VR space whilst their eye movements are tracked, allowing the researcher to evaluate possible reasons why Mondrian did not pursue his plan.

Hidden Stereo: Hiding phase-based disparity to present ghost-free 2D images for naked-eye viewers

Shin’ya Nishida, Takahiro Kawabe, and Taiki Fukiage, NTT Communication Science Lab

When a conventional stereoscopic display is viewed without 3D glasses, image ghosts are visible due to the fusion of stereo image pairs including binocular disparities. Hidden Stereo is a method to hide phase-based binocular disparities after image fusion, and to present ghost-free 2D images to viewers without glasses.
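
The principle is easy to verify in the special case of a single sinusoidal grating: adding a quadrature-phase pattern with opposite sign for each eye shifts the grating's phase in opposite directions (a disparity), while the binocular average cancels the added pattern exactly. The toy sketch below illustrates only this special case, not the published method, which generalizes the idea to natural images:

```python
# Toy illustration of the Hidden Stereo principle for one sinusoid.
import numpy as np

x = np.linspace(0, 4 * np.pi, 1000)
base = np.sin(x)            # the 2D image intended for naked-eye viewers
a = 0.3                     # disparity-inducing amplitude (arbitrary choice)
pattern = a * np.cos(x)     # quadrature-phase (90-deg-shifted) pattern

left = base + pattern       # = sqrt(1 + a**2) * sin(x + arctan(a))
right = base - pattern      # = sqrt(1 + a**2) * sin(x - arctan(a))
print(f"interocular phase disparity: {2 * np.degrees(np.arctan(a)):.1f} deg")

# Fused without glasses, the quadrature terms cancel exactly: no ghosting.
assert np.allclose((left + right) / 2, base)
```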

Quick estimation of contrast sensitivity function using a tablet device

Kenchi Hosokawa and Kazushi Maruya, NTT Communication Science Laboratories

Contrast sensitivity functions (CSFs) are useful but sometimes impractical to measure because of time limitations. We demonstrate web-based applications that measure the CSF in a short time (<3 min) with moderate precision. These applications allow CSF data to be collected from many types of observers and experimental circumstances.

The optical illusion blocks: Optical illusion patterns in a three dimensional world

Kazushi Maruya, NTT Communication Science Laboratories; Tomoko Ohtani, Tokyo University of the Arts

The optical illusion blocks are a set of toy blocks whose surfaces carry particular geometric patterns. When combined, the blocks induce various optical illusions, such as shape from shading, café wall, and subjective contours. With the blocks, observers can learn the rules behind the illusions through active viewpoint changes.

Dis-continuous flash suppression

Shao-Min (Sean) Hung, Caltech; Po-Jang (Brown) Hsieh, Duke-NUS Medical School; Shinsuke Shimojo, Caltech

We report a novel variant of continuous flash suppression (CFS): dis-continuous flash suppression (dCFS), in which the suppressor and the suppressed stimulus are presented intermittently. Our findings suggest approximately two-fold suppression power, as evidenced by lower breaking rates and longer suppression durations. dCFS may thus be suitable for future investigations of unconscious processing.

Virtual Reality Collaboration with interactive outside-in and tether-less inside-out tracking setup

Matthias Pusch, Dan Tinkham, and Sado Rabaudi, WorldViz

Multiple participants can interact with both local and remote participants in VR. The demo will combine an outside-in tracking paradigm for some participants with inside-out integrated tracking for others. Importantly, the inside-out system will be entirely tether-less (using so-called consumer backpack VR), and the user will be free to explore the entire indoor floor plan.

The illusion of floating objects caused by light projection of cast shadow

Takahiro Kawabe, NTT Communication Science Laboratories 

We demonstrate an illusion wherein objects in pictures and drawings apparently float in the air due to the light projection of cast shadow patterns onto them. We also demonstrate a light projection method that makes opaque colored paper appear to be a transparent color film floating in the air.

Extension of phenomenal phenomena toward printed objects

Takahiro Kawabe, NTT Communication Science Laboratories 

We demonstrate that the phenomenal phenomena (Gregory and Heard, 1983) can be extended toward printed objects placed against a background with luminance modulation. In our demo, the audience experiences not only the illusory translation of the printed objects but also their illusory expansion/contraction and rotation.

Stereo Illusions in Augmented Reality

Moqian Tian, Meta Company

Augmented Reality with environmental tracking and real-world lighting projection can uncover new perspectives on some classical illusions. We will present the Hollow Face Illusion, the Necker Cube, and the Crazy Nuts Illusion in multiple conditions, while observers interact with the holograms through the Meta 2 AR headset.

A Color-Location Misbinding Illusion

Cristina R. Ceja and Steven L. Franconeri, Northwestern University

Illusory conjunctions, in which features are misbound, can occur when attention is overloaded or diverted (Treisman & Schmidt, 1982). Here we provide the opportunity to experience a new illusory conjunction, using even simpler stimulus displays.

Thatcherize your face

Andre Gouws, York Neuroimaging Centre, University of York; Peter Thompson, University of York

The Margaret Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherized’ in real time, and we will print you a copy of the image to take away.

The Ever-Popular Beuchet Chair

Peter Thompson, Rob Stone and Tim Andrews, University of York

A favorite at Demo Night for the past few years, the Beuchet chair is back again. The two parts of the chair are at different distances, and the visual system fails to apply size constancy appropriately. The result is that people can be shrunk or made into giants.

Illusory grating

William F. Broderick, New York University

By windowing a large two-dimensional sinusoidal grating, a perpendicular illusory grating is created. This illusion is quite strong, and depends on the overall size of the image, as well as the relative size of the grating and windows.
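
For readers who want to recreate something of this flavor, the sketch below renders a generic windowed grating; the construction, carrier frequency, and window shape are guesses for illustration, not the demo's actual stimulus:

```python
# A generic windowed grating: a vertical sinusoidal carrier multiplied by
# smooth periodic windows varying along the orthogonal axis. (Assumed
# construction; the demo's actual stimulus parameters are not given.)
import numpy as np
import matplotlib.pyplot as plt

size = 512
y, x = np.mgrid[0:size, 0:size]
grating = np.sin(2 * np.pi * x / 32)              # carrier: vertical stripes
window = 0.5 * (1 + np.cos(2 * np.pi * y / 128))  # raised-cosine bands
plt.imshow(grating * window, cmap="gray")
plt.axis("off")
plt.show()
```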

Look where Simon says without delay

Katia Ripamonti, Cambridge Research Systems; Lloyd Smith, Cortech Solutions

Can you beat the Simon effect using your eye movements? Compete with other players to determine who can look where Simon says without delay. All you need to do is to control your eye movements before they run off. It sounds so simple and yet so difficult!

Chromatic induction from achromatic stimulus

Leone Burridge, Artist / Medical practitioner in private practice

These are acrylic paintings made with only black and white pigments. With sustained gaze, subtle colours become visible.

Grandmother’s neuron

Katerina Malakhova, Pavlov Institute of Physiology

If we could find a grandma cell, what kind of information would this cell code? Artificial neural networks allow us to study the latent representations that activate neurons. I choose the unit with the highest selectivity for grandmother images and visualize the percept that drives this neuron.

Planarian Eyespot(s) – Amazing redundancy in visual-motor behavior

Kensuke Shimojo, Chandler School, Pasadena, CA; Eiko Shimojo, California Institute of Technology

Dissected planarian body parts, even those with incomplete eyespots, show ‘light-avoiding behavior’ long before the entire body (and its sensory-motor organs) is complete. We will demonstrate this live (in Petri dishes) and on video.

Real-Life Continuous Flash Suppression – Suppressing the real world from awareness

Uri Korisky, Tel Aviv University

‘Real life CFS’ is a new method for suppressing real life stimuli. Using augmented reality goggles, CFS masks (“mondrians”) are presented to your dominant eye, causing whatever is presented to your non-dominant eye to be suppressed from awareness – even real objects placed in front of you.

The Motion Induced Contour Revisited

Gideon Caplovitz and Gennady Erlikhman, University of Nevada, Reno

As a tribute to Naomi Weisstein (1939-2015), we recreate and introduce some novel variants of the Motion Induced Contour, which was first described in a series of papers published in the 1980s.

Illusory Apparent Motion

Allison K. Allen, Nicolas Davidenko and Nathan H. Heller, University of California, Santa Cruz

When random textures are presented at a moderate pace, observers report experiencing coherent percepts of apparent motion, which we term Illusory Apparent Motion (IAM). In this demo, we will cue observers to experience different types of motion percepts from random stimuli by using verbal suggestion, action commands, and intentional control.

Illusory color in extreme-periphery

Takashi Suegami, California Institute of Technology and Yamaha Motors Corporation; Daw-An Wu and Shinsuke Shimojo, California Institute of Technology

Our new demo shows that a foveal color cue can induce illusory color in the extreme periphery (approx. 70°-90°), where cone cells are sparse. One can experience, for example, a clear red color percept for an extreme-peripheral green flash, given isoluminant red foveal pre-cueing (or vice versa).

Silhouette Zoetrope

Christine Veras, University of Texas at Dallas; Gerrit Maus, Nanyang Technological University

The Silhouette Zoetrope is a contemporary innovation on the traditional zoetrope. In this new device, an animation of moving silhouettes is created by sequential cutouts placed outside a rotating empty cylinder, with slits illuminating the cutouts successively from the back. The device combines motion, mirroring, depth, and size illusions.

Spinning reflections on depth from spinning reflections

Michael Crognale, University of Nevada, Reno

A trending novelty toy, when spun, induces a striking depth illusion from disparity in specular reflections from point sources. Ordinarily, “specular” disparity from static curved surfaces is discounted or contributes to perceived surface curvature. Here, motion obscures the surface features that compete with the depth cues, resulting in a strong depth illusion.

High Speed Gaze-Contingent Visual Search

Kurt Debono and Dan McEchron, SR Research Ltd.

Try to find the target in a visual search array that is continuously updated based on the location of your gaze. High-speed video-based eye tracking combined with the latest high-speed monitors makes for a compelling challenge.

The photoreceptor refresh rate

Allan Hytowitz, Dyop Vision Associates

A dynamic optotype, the Dyop (a segmented spinning ring), provides a much more precise, consistent, efficient, and flexible means of measuring acuity. Adjusting the rotation rate of the segmented ring determines the optimum rate, as well as the photoreceptor refresh rate for perceived retrograde motion.

Stereo psychophysics by means of continuous 3D target-tracking in VR

Benjamin T. Backus and James J. Blaha, Vivid Vision Labs, Vivid Vision, Inc.; Lawrence K. Cormack and Kathryn L. Bonnen, University of Texas at Austin

What’s your latency for tracking binocular disparity? Let us cross-correlate your hand motion with our flying bugs to find out.
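
The latency estimate alluded to here is commonly read off the peak of the cross-correlation between target and response trajectories; a sketch with simulated signals follows (the 60 Hz sample rate, 200 ms delay, and noise level are invented for illustration):

```python
# Estimate tracking latency as the lag of peak target-response correlation.
import numpy as np

rng = np.random.default_rng(1)
fs = 60                                    # samples per second (assumed)
target = np.cumsum(rng.normal(size=600))   # random-walk target position
true_lag = 12                              # 200 ms simulated tracking delay
hand = np.roll(target, true_lag) + rng.normal(scale=0.5, size=600)

lags = np.arange(-30, 31)
xc = [np.corrcoef(target[30:-30], np.roll(hand, -k)[30:-30])[0, 1] for k in lags]
best = lags[int(np.argmax(xc))]
print(f"estimated latency: {best / fs * 1000:.0f} ms")  # -> ~200 ms
```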

Motion-based position shifts

Stuart Anstis, University of California, San Diego; Patrick Cavanagh, Glendon College, York University

Motion-based position shifts are awesome!

StroboPong

VSS Staff

Back by popular demand. Strobe lights and ping pong!

2018 Young Investigator – Melissa Le-Hoa Võ

Vision Sciences Society is honored to present Melissa Le-Hoa Võ with the 2018 Young Investigator Award.

The Young Investigator Award is an award given to an early stage researcher who has already made a significant contribution to our field. The award is sponsored by Elsevier, and the awardee is invited to submit a review paper to Vision Research highlighting this contribution.

Melissa Le-Hoa Võ

Professor of Cognitive Psychology, Goethe Universität Frankfurt; Head of the DFG-funded Emmy Noether Group, Scene Grammar Lab, Goethe Universität Frankfurt

Reading Scenes: How Scene Grammar Guides Attention and Perception in Real-World Environments

Dr. Võ will talk during the Awards Session
Monday, May 21, 2018, 12:30 – 1:30 pm, Talk Room 1-2

How do you recognize that little bump under the blanket as your kid’s favorite stuffed animal? What no state-of-the-art deep neural network or sophisticated object recognition algorithm can do is easily done by your toddler. This might seem trivial; however, the enormous efficiency of human visual cognition is actually not yet well understood.

Visual perception is much more than meets the eye. While bottom-up features are of course an essential ingredient of visual perception, my work has mainly focused on the role of the “invisible” determinants of visual cognition, i.e. the rules and expectations that govern scene understanding. Objects in scenes — like words in sentences — are arranged according to a “grammar”, which allows us to immediately understand objects and scenes we have never seen before. Studying scene grammar therefore provides us with the fascinating opportunity to study the inner workings of our mind as it makes sense of the world and interacts with its complex surroundings. In this talk, I will highlight some recent projects from my lab in which we have tried to shed more light on the influence of scene grammar on visual search, object perception and memory, its developmental trajectories, as well as its role in the ad-hoc creation of scenes in virtual reality scenarios. For instance, we found that so-called “anchor objects” play a crucial role in guiding attention and anchoring predictions about other elements within a scene, thereby laying the groundwork for efficient visual processing. This opens up exciting new avenues for investigating the building blocks of our visual world that our Scene Grammar Lab is eager to pursue.

Biography

Melissa Võ received her PhD from the Ludwig-Maximilians University in Munich in 2009. She then moved on to perform postdoctoral work, first with John Henderson at the University of Edinburgh, and then with Jeremy Wolfe at Harvard Medical School. Dr. Võ’s work has been supported by numerous grants and fellowships, including grants from the NIH and the German Research Foundation (DFG). In 2014, Melissa Võ moved back to Germany where, as a freshly appointed Full Professor of Cognitive Psychology, she set up the Scene Grammar Lab at Goethe University Frankfurt.

Dr. Võ is a superb scientist who has already had an extraordinary impact on our field. Her distinctive contribution has been to develop the concept of “scene grammar”, particularly scrutinizing the distinction between semantics and syntax in visual scenes. The distinction can be illustrated by considering scene components that are semantically incongruent (e.g. a printer in a kitchen) versus those that are syntactically incongruent (e.g. a cooking pot in a kitchen, floating in space rather than resting on a counter). Dr. Võ has used eye-tracking and EEG techniques in both children and adults to demonstrate that the brain processes semantic and syntactic visual information differentially, and has shown that scene grammar not only aids visual processing but also plays a key role in efficiently guiding search in real-world scenarios. Her work has implications in many areas, ranging from computer science to psychiatry. In addition to being a tremendously innovative and productive researcher, Dr. Võ is an active mentor of younger scientists and an award-winning teacher. Her outstanding contributions make her a highly worthy recipient of the 12th VSS Young Investigator Award.


2018 Funding Workshop

VSS Workshop on Grantsmanship and Funding

No registration required. First come, first served, until full.

Saturday, May 19, 2018, 1:00 – 2:00 pm, Sabal/Sawgrass

Moderator: Mike Webster, University of Nevada, Reno
Discussants: Todd Horowitz, National Cancer Institute; Lawrence R. Gottlob, National Science Foundation; and Cheri Wiggs, National Eye Institute

You have a great research idea, but you need money to make it happen. You need to write a grant. What do you need to know before you write a grant? How does the granting process work? Writing grants to support your research is as critical to a scientific career as data analysis and scientific writing. In this year’s session, we are focusing on the work of the US National Institutes of Health (NIH) and the US National Science Foundation (NSF). Cheri Wiggs (National Eye Institute) and Todd Horowitz (National Cancer Institute) will provide insight into the inner workings of the NIH extramural research program. Larry Gottlob will represent the Social, Behavioral, and Economic Sciences (SBE) directorate of the NSF. There will be time for your questions.

Todd Horowitz

National Cancer Institute

Todd S. Horowitz, Ph.D., is a Program Director in the Behavioral Research Program’s (BRP) Basic Biobehavioral and Psychological Sciences Branch (BBPSB), located in the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute (NCI). Dr. Horowitz earned his doctorate in Cognitive Psychology at the University of California, Berkeley in 1995. Prior to joining NCI, he was Assistant Professor of Ophthalmology at Harvard Medical School and Associate Director of the Visual Attention Laboratory at Brigham and Women’s Hospital. He has published more than 70 peer-reviewed research papers in vision science and cognitive psychology. His research interests include attention, perception, medical image interpretation, cancer-related cognitive impairments, sleep, and circadian rhythms.

Lawrence R. Gottlob

National Science Foundation

Larry Gottlob, Ph.D., is a Program Director in the Perception, Action, and Cognition program at the National Science Foundation. His permanent home is in the Psychology Department at the University of Kentucky, but he is on his second rotation at NSF. Larry received his PhD from Arizona State University in 1995 and has worked in visual attention, memory, and cognitive aging.

Cheri Wiggs

National Eye Institute

Cheri Wiggs, Ph.D., serves as a Program Director at the National Eye Institute (of the National Institutes of Health). She oversees extramural funding through three programs — Perception & Psychophysics, Myopia & Refractive Errors, and Low Vision & Blindness Rehabilitation. She received her PhD from Georgetown University in 1991 and came to the NIH as a researcher in the Laboratory of Brain and Cognition. She made her jump to the administrative side of science in 1998 as a Scientific Review Officer. She currently represents the NEI on several trans-NIH coordinating committees (including BRAIN, Behavioral and Social Sciences Research, Medical Rehabilitation Research) and was appointed to the NEI Director’s Audacious Goals Initiative Working Group.

2018 Ken Nakayama Medal for Excellence in Vision Science – George Sperling

The Vision Sciences Society is honored to present George Sperling with the 2018 Ken Nakayama Medal for Excellence in Vision Science.

The Ken Nakayama Medal honors Professor Ken Nakayama’s contributions to the Vision Sciences Society, as well as his innovation and excellence in the domain of vision science.

The winner of the Ken Nakayama Medal receives this honor for high-impact work that has made a lasting contribution in vision science in the broadest sense. The nature of this work can be fundamental, clinical or applied. The Medal is not a lifetime career award and is open to all career stages.

George Sperling

Department of Cognitive Sciences, Department of Neurobiology and Behavior, and the Institute of Mathematical Behavioral Sciences, University of California, Irvine

Five encounters with physical and physics-like models in vision science

Dr. Sperling will talk during the Awards Session
Monday, May 21, 2018, 12:30 – 1:30 pm, Talk Room 1-2.

Two early concepts in a vision course are photons and visual angles:

1. Every second, a standard candle produces 5.1×10¹⁶ photons, enough to provide 6.8×10⁶ photons to every one of the 7.7×10⁹ persons on earth, a very bright flash (68,000 × threshold) if delivered to the pupil. Obviously, photons pass seamlessly through each other or we’d be in a dense fog. And the unimaginably large number of photons solves the ancients’ problem: how can the light from a candle produce a detailed image behind a tiny, ¼-inch pupil that captures only an infinitesimal fraction of the meager candlelight reflected off relatively distant surfaces?

2. The visual angles of the moon (0.525°) and the sun (0.533°) are almost the same, although their physical sizes are enormously different. Occlusion demo: a solar eclipse on a reduced scale in which the earth is 1/4 inch in diameter, the moon is 1/16 inch in diameter at 7.5 inches away, and the sun is a 27-inch beach ball 250 ft away (checked numerically in the first sketch after item 5). Note: at this scale, the beach ball nearest the sun, Alpha Centauri, is 12,200 mi away.

3. A simple dynamical system: a marble rolling under the influence of gravity in a bowl (filled with a viscous fluid) whose shape is distorted by the covariance of the images in the two eyes. The marble’s position can represent the angle of horizontal, vertical, or torsional vergence of the eyes, or the state of binocular fusion; the bowl’s shape represents the bistable nature of these processes (Sperling, 1970). (See the second sketch after item 5.)

4. A simple RC electrical circuit, a capacitor storing a charge that leaks away through a resistor, illustrates exponential decay. When the resistance is allowed to vary, it represents shunting inhibition in a neuron. A feedforward shunting inhibition circuit models the compression of the 10⁶ range of visual inputs into the approximately 30:1 useful range of neural signals, as well as the concurrent changes in visual receptive field structure (Sperling and Sondhi, 1968). A constant noise source after the range compression produces an S/N ratio inversely proportional to the average input intensity, i.e., a Weber law (Sperling, 1989). (See the third sketch after item 5.)

5. A similar feedback shunting-gain-control system efficiently models mechanisms of top-down spatial, temporal, and feature attention. Example: a simple 3-parameter model of the shift of visual attention from one rapid stream of characters to an adjacent stream (an attention reaction-time paradigm) accurately accounts for over 200 data points from variants of this procedure (Reeves and Sperling, 1986).
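
These models invite quick numerical illustration. The three toy programs below are editorial sketches in Python, not Sperling’s original formulations; any constant not quoted in the abstract is an arbitrary choice. First, the angular sizes in the eclipse scale model of item 2, recomputed from the quoted dimensions alone:

```python
# Check of the eclipse scale model's visual angles (dimensions from item 2).
import math

def visual_angle_deg(diameter, distance):
    """Full angular subtense of an object of given diameter at given distance."""
    return math.degrees(2 * math.atan(0.5 * diameter / distance))

moon_scale = visual_angle_deg(1 / 16, 7.5)   # 1/16-inch moon at 7.5 inches
sun_scale = visual_angle_deg(27, 250 * 12)   # 27-inch beach ball at 250 ft
print(f"scale moon: {moon_scale:.2f} deg, scale sun: {sun_scale:.2f} deg")
# -> roughly 0.48 and 0.52 deg, close to the real 0.525 (moon) and 0.533 (sun)
```

Second, a generic bistable energy-well system of the kind invoked in item 3: an overdamped "marble" follows the gradient of a double-well "bowl", and a sufficiently strong distortion of the bowl abolishes the bistability. The quartic well is a stand-in for the disparity-driven bowl shape:

```python
# Generic bistable energy well: x'(t) = -V'(x), V(x) = x**4/4 - x**2/2 - tilt*x.
def V_prime(x, tilt=0.0):
    return x**3 - x - tilt

def settle(x, tilt=0.0, dt=0.01, steps=5000):
    for _ in range(steps):        # viscous (overdamped) rolling in the bowl
        x -= dt * V_prime(x, tilt)
    return x

print(round(settle(+0.1), 2))            # -> 1.0: one stable state
print(round(settle(-0.1), 2))            # -> -1.0: the other stable state
print(round(settle(-0.1, tilt=0.6), 2))  # -> 1.22: strong tilt leaves one state
```

Third, the Weber-law logic of item 4: the background intensity sets a divisive (shunting) gain, an increment must push the post-gain response past a fixed criterion determined by constant noise, and the threshold Weber fraction levels off at high intensities:

```python
# Weber's law from shunting gain control plus constant post-gain noise
# (an illustration in the spirit of, not a reproduction of, the 1968 model).
import numpy as np

I0 = 1.0          # semi-saturation constant (arbitrary)
criterion = 0.05  # fixed post-gain response change needed to beat the noise

def gain(I):
    return 1.0 / (1.0 + I / I0)    # divisive gain set by the background I

for I in 10.0 ** np.arange(0, 7):  # seven log units of background intensity
    dI = criterion / gain(I)       # increment whose gated response hits criterion
    print(f"I = {I:9.0f}   threshold dI/I = {dI / I:.4f}")
# dI/I falls toward criterion/I0 = 0.05 and stays there: Weber's law.
```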

Biography

George Sperling attended public school in New York City. He received a B.S. in mathematics from the University of Michigan, an M.A. from Columbia University and a Ph.D. from Harvard, both in Experimental Psychology.

For his doctoral thesis, Sperling introduced the method of partial report to measure the capacity and decay rate of visual sensory memory, which was renamed iconic memory by Ulric Neisser. To measure the information outflow from iconic memory, Sperling introduced post-stimulus masking to terminate iconic persistence, and confirmed this with an auditory synchronization paradigm: subjects adjusted an auditory click to be simultaneous with the perceived onset, and on other trials with the perceived termination, of visible information. The interclick duration defined the duration of visible persistence.

Sperling’s first theoretical venture was a feed-forward gain control model based on shunting inhibition, formalized with a mathematician, Mohan Sondhi. It accounted for the change of visual flicker sensitivity with light intensity and for Barlow’s observation that visual receptive fields change from pure excitation in the dark to antagonistic center-surround in the light. Subsequently, Sperling observed that this same model, with internal noise following the gain control, also accounted for Weber’s Law. For binocular vision, Sperling proposed a dynamic, energy-well model (a pre-catastrophe-theory “catastrophe” model) to account for multiple stable states in vergence-accommodation as well as for Julesz’s hysteresis phenomena in binocular fusion. With Jan van Santen, Sperling elaborated Reichardt’s beetle-motion-detection model for human psychophysics, and experimentally confirmed five counter-intuitive model predictions. Shortly afterwards, Charlie Chubb and Sperling defined a large class of visual stimuli (which they called “second-order”) that were easily perceived as moving but were invisible to the Reichardt model. These could be made visible to the Reichardt model by prior contrast rectification (absolute value or square), thereby defining the visual pre-processing of a second motion system. With Zhong-Lin Lu, Sperling found yet another class of stimuli that produced strong motion percepts but were invisible to both the Reichardt (first-order) and second-order motion-detecting systems. They proposed that these stimuli were processed by a third-order motion system that operated on a salience map and, unlike the first- and second-order systems, was highly influenced by attention. To characterize these three motion-detection systems, they developed pure stimuli that exclusively stimulated each of the three. More recently, Jian Ding and Sperling used interocular out-of-phase sinewave grating stimuli to precisely measure the contribution of each eye to a fused binocular percept. This method has been widely adopted to assess treatments of binocular disorders.

Twenty-five years after his thesis work, Sperling returned to attention research with a graduate student, Adam Reeves, to study attention reaction times of unobservable shifts of visual attention, which they measured with the same precision as concurrent finger-press motor reaction times. Their basic experiment was then greatly elaborated to produce hundreds of different data points. A simple (3-parameter) attention-gating model that involved briefly opening an attention gate to short-term memory accurately accounted for the hundreds of results. Subsequently, Erich Weichselgartner and Sperling showed that the shifts of visual attention in a Posner-type attention-cued reaction time experiment could be fully explained by independent spatial and temporal attention gates. In a study of dual visual attention tasks, Melvin Melchner and Sperling demonstrated the first Attention Operating Characteristics (AOCs). Sperling and Barbara Dosher showed how AOCs, the ROCs of Signal Detection Theory, and macro-economic theory all use the same underlying utility model. Shui-I Shih and Sperling revisited the partial-report paradigm to show that when attention shifted from one row of letters to another, attention moved concurrently to all locations. Together, these attention experiments showed that visual spatial attention functions like the transfer of power from one fixed spotlight to another, rather than like a moving spotlight. Most recently, Sperling, Peng Sun, Charlie Chubb, and Ted Wright developed efficient methods for measuring the perceptual attention filters that define feature attention.

Sperling owes what success he has had to his many wonderful mentors and collaborators. Not fully satisfied with these fifty-plus years of research, Sperling still hopes to do better in the future.

 

2018 Student Travel Awards

Kirsten Adam
University of Chicago
Advisor: Edward Vogel
Jit Wei Ang
Nanyang Technological University
Advisor: Gerrit Maus
Benay Başkurt
Bilkent University
Advisor: Aaron Michael Clarke
Chloe Callahan-Flintoft
Pennsylvania State University
Advisor: Brad Wyble
Laurent Caplette
Université de Montréal
Advisors: Frédéric Gosselin and Karim Jerbi
Ting-Yu Chang
University of Wisconsin-Madison
Advisor: Ari Rosenberg
Elliot Collins
Carnegie Mellon University
Advisor: Marlene Behrmann
Abigail Finch
Durham University
Advisor: Gordon D. Love
Nina Hanning
Ludwig-Maximilians-Universität München
Advisor: Heiner Deubel
Frederik Kamps
Emory University
Advisor: Daniel D. Dilks
Saya Kashiwakura
The University of Tokyo
Advisor: Isamu Motoyoshi
Insub Kim
Sungkyunkwan University
Advisor: Won Mok Shim
Lina Klein
Justus-Liebig University Giessen
Advisors: Roland W. Fleming and Jody C. Culham
Ethan Knights
University of East Anglia
Advisor: Stephanie Rossit
Jacob Paul
University of Melbourne and Utrecht University
Advisors: Jason Forte and Robert Reeve
Carmen Pons
SUNY College of Optometry
Advisor: Jose-Manuel Alonso
Yelda Semizer
Rutgers University
Advisor: Melchi M. Michel
Natalya Shelchkova
Boston University
Advisor: Martina Poletti
Weizhen Xie
University of California, Riverside
Advisor: Weiwei Zhang
Jingyang Zhou
New York University
Advisor: Jonathan Winawer

 

2018 Davida Teller Award – Nancy Kanwisher

Vision Sciences Society is honored to present Dr. Nancy Kanwisher with the 2018 Davida Teller Award.

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding woman vision scientist with a strong history of mentoring.

Nancy Kanwisher

Walter A. Rosenblith Professor, Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology

Functional imaging of the brain as a window into the architecture of the human mind

Dr. Kanwisher will talk during the Awards Session
Monday, May 21, 2018, 12:30 – 1:30 pm, Talk Room 1-2

The last twenty years of fMRI research have given us a new sketch of the human mind, in the form of the dozens of cortical regions that have now been identified, many with remarkably specific functions. I will describe several ongoing lines of work in my lab on cortical regions engaged in perceiving social interactions, understanding the physical world, and perceiving music. After presenting various findings that use pattern analysis (MVPA), I will also raise caveats about this method, which can fail to reveal information that we know is present in a given region, and can also reveal information that is likely epiphenomenal. I’ll argue that human cognitive neuroscience would greatly benefit from the invention of new tools to address these challenges.

Biography

My research uses fMRI and other methods to try to discover the functional organization of the brain as a window into the architecture of the human mind. My early forays in this work focused on high-level visual cortex, where my students and I developed methods to test the functional profile of regions in the ventral visual pathway specialized for the perception of faces, places, bodies, and words. The selectivity of these regions is now widely replicated, and ongoing work in my lab and many other labs is asking what exactly is represented and computed in each of these regions, how they arise both developmentally and evolutionarily, how they are structurally connected to each other and the rest of the brain, what the causal role of each is in behavior and perceptual awareness, and why, from a computational point of view, we have functional selectivity in the brain in the first place.

My career would quite simply never have happened without the great gift of fabulous mentors. Molly Potter fought to have me accepted to graduate school (from the bottom of the waiting list) and, against all reason, did not give up on me even when I dropped out of grad school three times to try to become a journalist. Then, after a diversionary postdoc in international security, Anne Treisman gave me an incredible second chance in vision research as a postdoc in her lab, despite my scanty list of publications. Later in my own lab, my luck came in the form of spectacular mentees. I have had the enormous privilege and delight of working with many of the most brilliant young scientists in my field.

I think we scientists have an obligation to share the cool results of our work with the public (who pays for it). My latest effort in this direction is my growing collection of short lectures about human cognitive neuroscience for lay and undergraduate audiences: nancysbraintalks.mit.edu.

2018 Satellite Events

Wednesday, May 16

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 16 – Friday, May 18, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
8:30 – 11:45 am, Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; and Qasim Zaidi, SUNY College of Optometry

Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington

The 7th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the Tradewinds Island Resorts in St. Pete Beach, FL, May 16 – May 18. A keynote address will be given by Eero Simoncelli, New York University.

The early registration fee is $100 for regular participants, $50 for students. More information can be found on the workshop’s website: http://www.conf.purdue.edu/modvis/

Thursday, May 17

Eye Tracking in Virtual Reality

Thursday, May 17, 10:00 am – 3:00 pm, Jasmine/Palm

Organizer: Gabriel Diaz, Rochester Institute of Technology

This will be a hands-on workshop run by Gabriel Diaz, with support from his graduate students Kamran Binaee and Rakshit Kothari.

The ability to incorporate eye tracking into computationally generated contexts presents new opportunities for research into gaze behavior. The aim of this workshop is to provide an understanding of the hardware, the data collection process, and algorithms for data analysis. Example data and code will be provided in both Jupyter notebooks and Matlab (choose your preference). This workshop is sponsored by The Optical Society’s Vision Technical Group and is suitable for both PIs and graduate students.

Friday, May 18

Tutorial on Big Data and Online Crowd-Sourcing for Vision Research

Friday, May 18, 8:30 – 11:45 am, Jasmine/Palm

Organizer: Wilma Bainbridge, National Institutes of Health

Speakers: Wilma Bainbridge, National Institutes of Health; Tim Brady, University of California San Diego; Dwight Kravitz, George Washington University; and Gijsbert Stoet, Leeds Beckett University

Online experiments and Big Data are becoming big topics in the field of vision science, but they can be hard to access for people not familiar with web development and coding. This tutorial will teach attendees the basics of creating online crowd-sourced experiments, and how to think about collecting and analyzing Big Data related to vision research. Four experts in the field will discuss how they use and collect Big Data, and give hands-on practice to tutorial attendees. We will discuss Amazon Mechanical Turk, its strengths and weaknesses, and how to leverage it in creative ways to collect powerful, large-scale data. We will then discuss PsyToolkit, an online experimental platform for coding timed behavioral and psychophysical tasks that can integrate with Amazon Mechanical Turk. We will then discuss how to create big datasets by “scraping” large-scale data from the internet. Finally, we will discuss other sources of useful crowd-sourced data, such as performance on mobile games, and methods for scaling down and analyzing these large data sets.

To help us plan for this event, please register here: http://wilmabainbridge.com/research/bigdata/bigdataregistration.html

Sunday, May 20

FoVea (Females of Vision et al) Workshop

Sunday, May 20, 7:30 – 8:30 pm, Horizons

Organizers: Diane Beck, University of Illinois, Urbana-Champaign; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin–Madison; Allison Sekuler, Baycrest Health Sciences

Speaker: Virginia Valian, Hunter College
Title: Remedying the (Still) Too Slow Advancement of Women

Dr. Valian is a Distinguished Professor of Psychology and Director of The Gender Equity Project.

FoVea is a group founded to advance the visibility, impact, and success of women in vision science (www.foveavision.org). We encourage vision scientists of all genders to participate in the workshops.

Please register at: http://www.foveavision.org/vss-workshops

Monday, May 21

Psychophysics Toolbox Discussion

Monday, May 21, 2:00 – 3:00 pm, Talk Room 1

Organizer: Vijay Iyer, MathWorks

Panelists: Vijay Iyer, David Brainard, and Denis Pelli

A discussion of the current state (technical, funding, and community status) of the Psychophysics Toolbox, which is widely used for visual stimulus generation in vision science experiments.

Social Hour for Faculty at Primarily Undergraduate Institutions (PUIs)

Monday, May 21, 2:00 – 4:00 pm, Royal Tern

Organizer: Katherine Moore, Arcadia University

Do you work at a primarily undergraduate institution (PUI)? Do you juggle your research program, student mentoring, and a heavy teaching load? If so, come along to the PUI social and get to know other faculty at PUIs! It will be a great opportunity to share your ideas and concerns. Feel free to bring your own drinks / snacks. Prospective faculty of PUIs are also welcome to attend and get to know us and our institutions.

Canadian Vision Social

Monday, May 21, 2:00 – 4:00 pm, Jasmine/Palm

Organizer: Doug Crawford, York Centre for Vision Research

This afternoon social is open to any VSS member who is, knows, or would like to meet a Canadian vision scientist! This event will feature free snacks and refreshments, with a complimentary beverage for the first 200 attendees. We particularly encourage trainees and scientists who would like to learn about the various research and training funds available through York’s Vision: Science to Applications (VISTA) program. This event is sponsored by the York Centre for Vision Research and VISTA, which is funded in part by the Canada First Research Excellence Fund (CFREF).

Tuesday, May 22

Virtual Reality as a Tool for Vision Scientists

Tuesday, May 22, 1:00 – 2:00 pm, Talk Room 1
Organizer: Matthias Pusch, WorldViz

In a hands-on group session, we will show how Virtual Reality can be used by vision scientists for remote and on-site collaborative experiments. Full experimental control over stimuli and responses enables a unique setting for measuring performance. We will experience collaboration with off-site participants, and show the basics of performance data recording and analysis.

2018 Student Workshops

There is no advance sign-up for workshops. Workshops will be filled on a first-come, first-served basis.

VSS Workshop for PhD Students and Postdocs:
Getting that Faculty Job

Saturday, May 19, 2018, 1:00 – 2:00 pm, Jasmine/Palm
Moderator: David Brainard
Panelists: Michelle Greene, Tim Brady, Nicole Rust, James Elder

A key transition on the academic career path is obtaining a faculty position.  This workshop will focus on the application process (optimizing CV, statements, letters), the interview and job talk, handling the two-body problem, and post-offer steps such as negotiation about start-up funds, space, and teaching responsibilities.  Panelists include junior scientists who have recently obtained a faculty position as well as more senior scientists who can offer perspective from the hiring side of the process.

Michelle Greene, Bates College
Michelle R. Greene is an Assistant Professor of Neuroscience at Bates College, where she heads the Bates Computational Vision Laboratory. Her work examines the temporal evolution of high-level visual perception. She received her PhD from MIT in 2009, and did postdoctoral work at Harvard Medical School and Stanford University before joining Bates in 2017.
Tim Brady, UCSD
Timothy Brady is an Assistant Professor in the Department of Psychology at the University of California, San Diego, where he started in 2015, ending his need to think about the faculty job market forever (he hopes). His research uses a combination of behavioral, computational, and cognitive neuroscience methods to understand the limits on our ability to encode and maintain information in visual memory. He received his B.A. in Cognitive Science from Yale University (’06) and his Ph.D. in Brain and Cognitive Sciences from MIT (’11), and conducted postdoctoral research in the Harvard University Vision Sciences Laboratory (’11-’15).
Nicole Rust, University of Pennsylvania
Nicole Rust is an Associate Professor in the Department of Psychology at the University of Pennsylvania. She received her Ph.D. in neuroscience from New York University, and trained as a postdoctoral researcher at the Massachusetts Institute of Technology before joining the faculty at Penn in 2009. Research in her laboratory is focused on understanding the neural basis of visual memory, including our remarkable ability to remember the objects and scenes that we have encountered, even after viewing thousands, each for only a few seconds. To understand visual memory, her lab employs a number of different approaches, including investigations of human and animal visual memory behaviors, measurements and manipulations of neural activity, and computational modeling. She has received a number of awards for both research and teaching, including a McKnight Scholar award, an NSF CAREER award, an Alfred P. Sloan Fellowship, and the Charles Ludwig Distinguished Teaching Award. Her research is currently funded by the National Eye Institute at the National Institutes of Health, the National Science Foundation, and the Simons Collaboration on the Global Brain.
James Elder, York University
James Elder is a Professor in the Department of Psychology and the Department of Electrical Engineering & Computer Science at York University, and a member of York’s Centre for Vision Research and Vision:  Science to Applications (VISTA) program. His research seeks to improve machine vision systems through a better understanding of visual processing in biological systems. Dr. Elder’s current research is focused on natural scene statistics, perceptual organization, contour processing, shape perception, single-view 3D reconstruction, attentive vision systems and machine vision systems for dynamic 3D urban awareness.
David Brainard, University of Pennsylvania
David H. Brainard is the RRL Professor of Psychology at the University of Pennsylvania. He is a fellow of the Optical Society, ARVO and the Association for Psychological Science. At present, he directs Penn’s Vision Research Center, co-directs Penn’s Computational Neuroscience Initiative, co-directs Penn’s NSF funded certificate program in Complex Scene Perception, is on the Board of the Vision Sciences Society, and is a member of the editorial board of the Journal of Vision. His research interests focus on human color vision, which he studies both experimentally and through computational modeling of visual processing. He will be moderating this session.

VSS Workshop for PhD Students and Postdocs:
The public face of your science

Sunday, May 20, 2018, 1:00 – 2:00 pm, Jasmine/Palm
Moderator: Jeff Schall
Panelists: Allison Sekuler, Frans Verstraten, Morgan Ryan

Your research has several potential audiences. In this workshop, we will focus on the general public. When should you tell the world about your latest results? Always? Only if you think they are particularly noteworthy? Only when someone else asks? How should you communicate with the public? Social media? Press releases? How can you attract attention for your work (when you want to), and what should you do if you attract attention that you do not want? Our panel consists of two vision scientists, Allison Sekuler and Frans Verstraten, who have experience in the public eye, and Morgan Ryan, the editor at Springer Nature who handles the Psychonomic Society journals (including AP&P, PBR, and CRPI). Bring your questions.

Allison Sekuler, McMaster University
Dr. Allison Sekuler is Vice-President of Research and the Sandra A. Rotman Chair at Baycrest Health Sciences. She came to Baycrest from her position as a Professor in the Department of Psychology, Neuroscience & Behaviour at McMaster University, where she was the first Canada Research Chair in Cognitive Neuroscience (2001-2011). She is also Co-Chair of the Academic Colleagues at the Council of Ontario Universities and Chair of the Natural Sciences and Engineering Research Council of Canada’s (NSERC) Scholarships & Fellowships group, along with being a member of NSERC’s Committee for Discovery Research. The recipient of numerous awards for research, teaching, and leadership, Dr. Sekuler has a notable record of scientific achievements in aging and vision science, cognitive neuroscience, learning and neural plasticity, and neuroimaging and neurotechnology, as well as extensive experience in senior academic and research leadership roles.
Frans Verstraten, University of Sydney
Professor Frans Verstraten is the McCaughey Chair of Psychology at the University of Sydney and Head of School. He is a former board member and president of the Vision Sciences Society. Before his move to Australia in 2012, he was also active in the popularization of science and in science communication. Among other things, he gave many talks for general audiences, participated in a popular science TV show for several years, and wrote columns in a national newspaper and several magazines. He has been a member of many national and international committees, where he represents the psychological and behavioural sciences. Currently, he tries to convince the University’s marketing and communication teams of the power of good press releases (and to refrain from making unwarranted statements to spice up research results).
Morgan Ryan, Springer Nature
With over eight years of experience in scholarly publishing, Morgan Ryan is a Senior Editor in Behavioral Sciences at Springer, part of Springer Nature. As the Publishing Development Editor for more than 14 psychology journals, including the Psychonomic Society journals, she has extensive experience in research promotion and journal strategy. Among other projects, she has organized and presented research-publishing workshops for graduate students and early career scholars. She enjoys initiating and coordinating press office activity between Springer and the Psychonomic Society to increase the public visibility of science.
Jeff Schall, Vanderbilt University
The session will be moderated by Jeff Schall, the E. Bronson Ingram Professor of Neuroscience and Professor of Psychology and of Ophthalmology & Visual Sciences at Vanderbilt University. Schall’s research investigates how the visual system selects targets for, and controls the initiation of, saccades, using cognitive neurophysiological, anatomical, and computational approaches. Schall is a founding member of the advisory board for Communication of Science and Technology, an interdisciplinary major at Vanderbilt through which students master communication tools and techniques, learn science, and are embedded in research programs. He has also been involved in the complexities of communication at the boundary of law and neuroscience.

 

Vision Sciences Society