Prediction in perception and action

Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany
Presenters: Mary Hayhoe, Miriam Spering, Cristina de la Malla, Katja Fiehler, Kathleen Cullen

Symposium Description

Prediction is an essential mechanism enabling humans to prepare for future events. This is especially important in a dynamically changing world, which requires rapid and accurate responses to external stimuli. Predictive mechanisms work on different time scales and at various information processing stages. They allow us to anticipate the future state both of the environment and of ourselves. They are instrumental in compensating for noise and delays in the transmission of neural signals, and they allow us to distinguish external events from the sensory consequences of our own actions. While it is unquestionable that predictions play a fundamental role in perception and action, their underlying mechanisms and neural basis are still poorly understood. The goal of this symposium is to integrate recent findings from psychophysics, sensorimotor control, and electrophysiology to update our current understanding of predictive mechanisms in different sensory and motor systems. It brings together a group of leading scientists at different stages of their careers who have all made important contributions to this topic. Two prime examples of predictive processes are considered: interacting with moving stimuli and performing self-generated movements. The first two talks, from Hayhoe and Spering, will focus on the oculomotor system, which provides an excellent model for examining predictive behavior. They will show that smooth pursuit and saccadic eye movements contribute significantly to successful predictions of future visual events. Moreover, Hayhoe will provide examples of recent advances in the use of virtual reality (VR) techniques to study predictive eye movements in more naturalistic situations with unrestrained head and body movements. De la Malla will extend these findings to the hand movement system by examining interceptive manual movements. She will conclude that predictions are continuously updated and combined with online visual information to optimize behavior.
The last two talks, from Fiehler and Cullen, will take a different perspective by considering predictions during self-generated movements. Such predictive mechanisms have been associated with a forward model that predicts the sensory consequences of our own actions and cancels the respective sensory reafferences. Fiehler will focus on such cancellation mechanisms and present recent findings on tactile suppression during hand movements. Based on electrophysiological studies of self-motion in monkeys, Cullen will finally address where and how the brain compares expected and actual sensory feedback. In sum, this symposium targets the general VSS audience and aims to provide a novel and comprehensive view of predictive mechanisms in perception and action, spanning from behavior to neurons and from strictly controlled laboratory tasks to (virtual) real-world scenarios.

Presentations

Predictive eye movements in natural vision

Speaker: Mary Hayhoe, Center for Perceptual Systems, University of Texas Austin, USA

Natural behavior can be described as a sequence of sensorimotor decisions that serve behavioral goals. To make action decisions, the visual system must estimate the current world state. However, sensory-motor delays present a problem for a reactive organism in a dynamically changing environment. Consequently, it is advantageous to predict future state as well. This requires some kind of experience-based model of how the current state is likely to change over time. It is commonly accepted that the proprioceptive consequences of a planned movement are predicted ahead of time using stored internal models of the body’s dynamics. It is also commonly assumed that prediction is a fundamental aspect of visual perception, but the existence of visual prediction and the particular mechanisms underlying it are unclear. Some of the best evidence for prediction in vision comes from the oculomotor system, where both smooth pursuit and saccadic eye movements reveal prediction of the future visual stimulus. I will review evidence for prediction in interception actions in both real and virtual environments. Subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. These predictions appear to be used in common by both eye and arm movements. Predictive eye movements reveal that the observer’s best guess at the future state of the environment is based on image data in combination with representations that reflect learnt statistical properties of dynamic visual environments.

Smooth pursuit eye movements as a model of visual prediction

Speaker: Miriam Spering, Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada

Real-world movements, ranging from intercepting prey to hitting a ball, require rapid prediction of an object’s trajectory from a brief glance at its motion. The decision whether, when and where to intercept is based on the integration of current visual evidence, such as the perception of a ball’s direction, spin and speed. However, perception and decision-making are also strongly influenced by past sensory experience. We use smooth pursuit eye movements as a model system to investigate how the brain integrates sensory evidence with past experience. This type of eye movement provides a continuous read-out of information processing while humans look at a moving object and make decisions about whether and how to interact with it. I will present results from two different series of studies: the first utilizes anticipatory pursuit as a means to understand the temporal dynamics of prediction, and probes the modulatory role of expectations based on past experience. The other reveals the benefit of smooth pursuit itself, in tasks that require the prediction of object trajectories for perceptual estimation and manual interception. I will conclude that pursuit is both an excellent model system for prediction, and an important contributor to successful prediction of object motion.

Prediction in interceptive hand movements

Speaker: Cristina de la Malla, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands

Intercepting a moving target requires spatial and temporal precision: the target and the hand need to be at the same position at the same time. Since both the target and the hand move, we cannot simply aim for the target’s current position, but need to predict where the target will be by the time we reach it. We normally track targets continuously with our gaze, unless the characteristics of the task or of the target make it impossible to do so. In that case, we make saccades and direct our movements towards specific locations where we predict the target will be in the future. If the precise location at which one must hit the target only becomes evident as the target approaches the interception area, the gaze, head and hand movements towards this area are delayed because the target’s future position cannot be predicted. Predictions are continuously updated and combined with online visual information to optimize our actions: the less predictable the target’s motion, the more we have to rely on online visual information to guide our hand to intercept it. Updating predictions with online information allows us to correct for any mismatch between the predicted target position and the hand position during an ongoing movement, but any perceptual error that is still present at the last moment at which we can update our prediction will result in an equivalent interception error.

Somatosensory predictions in reaching

Speaker: Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany

Movement planning and execution lead to changes in somatosensory perception. For example, tactile stimuli on a moving limb are typically perceived as weaker and later in time than stimuli on a resting limb. This phenomenon is termed tactile suppression and has been linked to a forward-model mechanism which predicts the sensory consequences of a self-generated action and, as a result, discounts the respective sensory reafferences. As tactile suppression is also evident in passive hand movements, both predictive and postdictive mechanisms may be involved. However, its functional role is still largely unknown. It has been proposed that tactile suppression prevents sensory overload due to the large amount of afferent information generated during movement and therefore facilitates the processing of external sensory events. However, if tactile feedback from the moving limb is needed to gain information, e.g. at the fingers involved in grasping, tactile sensitivity is less strongly reduced. In this talk, I will present recent results from a series of psychophysical experiments showing that tactile sensitivity is dynamically modulated during the course of a reaching movement, depending on the reach goal and the predicted movement consequences. These results provide the first evidence that tactile suppression may indeed free capacities to process other, movement-relevant somatosensory signals. Moreover, the observed perceptual changes were associated with adjustments in the motor system, suggesting a close coupling of predictive mechanisms in perception and action.

Prediction during self-motion: the primate cerebellum selectively encodes unexpected vestibular information

Speaker: Kathleen Cullen, Department of Physiology, McGill University, Montréal, Québec, Canada

A prevailing view is that the cerebellum is the site of a forward model that predicts the expected sensory consequences of self-generated action. Changes in the motor apparatus and/or environment will cause a mismatch between the cerebellum’s prediction and the actual resulting sensory stimulation. This mismatch – the ‘sensory prediction error’ – is thought to be vital for updating both the forward model and the motor program during motor learning, ensuring that sensory-motor pathways remain calibrated. However, where and how the brain compares expected and actual sensory feedback was unknown. In this talk, I will first review experiments that focused on a relatively simple sensory-motor pathway with a well-described organization to gain insight into the computations that drive motor learning. Specifically, the most medial of the deep cerebellar nuclei (the rostral fastigial nucleus) constitutes a major output target of the cerebellar cortex and in turn sends strong projections to the vestibular nuclei, reticular formation, and spinal cord to generate the reflexes that ensure accurate posture and balance. Trial-by-trial analysis of these neurons in a motor learning task revealed the output of a computation in which the brain selectively encodes unexpected self-motion (vestibular information). This selectivity enables both (i) the rapid suppression of descending reflexive commands during voluntary movements and (ii) the rapid updating of motor programs in the face of changes to either the motor apparatus or the external environment. I will then consider the implications of these findings in the context of our recent work on the thalamo-cortical processing of vestibular information.

2017 Exhibitors and Advertisers

Exhibitors

Exhibits are located in Banyan Breezeway. View the Exhibits Floor Plan.

Exhibit Hours

Saturday, May 20, 9:00 am – 5:30 pm
Sunday, May 21, 9:00 am – 5:30 pm
Monday, May 22, 9:00 am – 12:30 pm
Tuesday, May 23, 9:00 am – 5:30 pm

Brain Vision, LLC

Booth 6
Brain Vision is the leader for EEG in Vision Science. We offer full integration of EEG with many leading eye tracking systems. We provide flexible and robust solutions for high-density, active EEG, wireless EEG, dry EEG, and a wide range of bio-sensors such as GSR, EKG, respiration, and EMG. We integrate eye tracking and EEG with other modalities such as fMRI, TMS, fNIRS, tDCS/HD-tDCS and MEG. If you want to know how EEG improves Vision Science and how eye tracking improves EEG, please talk to us. Let us help you push the edge of what research is possible.

Cortech Solutions

Booth 9
Cortech Solutions is your source for vision science and functional neuroimaging tools, including research-grade LCD displays, eye-tracking, transcranial magnetic stimulation (TMS), EEG and evoked potentials (EP), near-infrared spectroscopy (NIRS) and more. We are your sales and support contact in the US for leading brands from around the world, including Cambridge Research Systems tools for vision science, Mag & More / PowerMAG TMS, Biosemi ActiveTwo EEG / EP, Artinis Oxymon NIRS, and more. We intend to exceed your expectations!

Exponent, Inc.

Booth 11
Exponent is an engineering and scientific consulting firm that provides solutions to complex technical problems. Our multidisciplinary team of scientists, physicians, engineers, and business consultants performs in-depth research and analysis in more than 90 technical disciplines. We offer clients the scientific expertise needed to understand important issues and make sound strategic decisions. Our clients include a wide range of manufacturers, utilities, insurers, industry groups, government agencies, venture capital companies, and law firms.

Exponent’s Human Factors engineers and scientists evaluate human performance and safety in product and system use. Our consultants study how the limitations and capabilities of people, including memory, perception, reaction time, judgment, physical size and dexterity, affect the way they use a product, interact with an organization or environment, process information, or participate in an activity.

Our Human Factors Practice has experience in the following areas of research:

  • Evaluating human performance in a wide variety of applications
  • Visibility, conspicuity, low-illuminance scene assessment
  • Applying fields of cognitive, developmental and experimental psychology as well as human factors and ergonomics (such as visual perception, attention, perception-response time, decision making and auditory perception) to real-world situations
  • Conducting qualitative and quantitative research and experiments with human subjects through the use of questionnaires, focus groups, interviews, observations, instrumentation, and data acquisition
  • Safety and risk analysis
  • Consumer product hazard assessment
  • Developing safety information to be placed on products and in manuals
  • Assisting in the development and design of consumer products

At Exponent, we pride ourselves on the high quality of our 1,000+ employees. More than 800 are degreed technical professionals, and more than 500 have earned an M.D. or Ph.D. Exponent operates 20 regional offices and 6 international locations, and is publicly traded on the NASDAQ exchange under the symbol EXPO.

Feel Good, Inc.

Booth 8
Feel Good, Inc. provides portable TENS (transcutaneous electrical nerve stimulation) units offering a wide variety of benefits, including alleviating back, nerve, post-op, and diabetic pain as well as migraines. Our units also improve circulation and sleep patterns, decreasing the use of pain relievers that cause negative side effects.

MIT Press

Booth 2
MIT Press is the only university press in the United States whose list is based in science and technology. This does not mean that science and engineering are all we publish, but it does mean that we are committed to the edges and frontiers of the world—to exploring new fields and new modes of inquiry. We publish about 200 new books a year and 150 issues from over 30 journals. Our goal is to create content that is challenging, creative, attractive, and yet affordable to individual readers.

Oxford University Press

Booth 1
Please visit Oxford University Press to browse our new and classic titles including The Oxford Compendium of Visual Illusions, by Shapiro; Development of Perception in Infancy, by Arterberry; and Art, Aesthetics, and the Brain, by Huston.

Psychonomic Society

Booth 3
The Psychonomic Society is the home for scientists who study how the mind works. Members of the Society are cognitive psychologists and include some of the most distinguished researchers in the field. Many of us are concerned with the application of psychology to health, technology and education. Some of the most innovative research uses converging methods such as neuroscience and computational science to achieve our research goals. But what brings us together is that we study the fundamental properties of how the mind works by using behavioral techniques to better understand mental functioning. Members of the Society perform and promote the basic science of behavior in areas such as memory, learning, problem solving, action planning, language, and perception that connect with other fields of research. Please visit us at www.psychonomic.org.

Rogue Research Inc.

Booth 14
Rogue Research Inc. develops the Brainsight family of products, including Brainsight TMS and NIRS for human neuroscience, as well as Brainsight Vet, a complete neuronavigation system and suite of neurosurgical tools for a variety of applications. We also offer design and manufacturing services for custom surgical tools or implants.

SensoMotoric Instruments, Inc.

Booth 10
SMI designs advanced eye tracking systems that combine ease of use and flexibility with advanced technology. SMI products offer the ability to measure gaze position, saccades, fixations, pupil size, etc. Our newest devices include a 250 Hz virtual reality integration and the 2000 Hz ultra-precise iView 2K.

SR Research Ltd.

Booth 13
SR Research, makers of EyeLink eye-trackers, welcomes you to VSS 2017! Come and see the EyeLink Portable Duo – a high-performance eye-tracker in a portable package – or the EyeLink 1000 Plus. Starting this year, all new EyeLinks track at up to 2000 Hz binocularly by default, with up to 1000 Hz remote, head-free-to-move binocular tracking available. While the EyeLink Portable Duo is perfect for school or clinic visits, the EyeLink 1000 Plus provides a uniform, cutting-edge eye-tracking solution for the behavioral lab, MRI/MEG, or EEG. Start with a high-precision, high-speed eye-tracker in the behavioral laboratory and add binocular head-free-to-move tracking. Include fiber optic extensions and the same hardware seamlessly becomes the world’s leading MRI or MEG eye-tracker. With outstanding technical specifications, portable options, flexible experiment delivery software, and incredible customer support, SR Research enables academics. Drop by and discuss our latest hardware and software additions.

THOUSLITE

Booth 7
Thousand Lights Lighting (Changzhou) Limited, or THOUSLITE, is a high-tech enterprise focusing on multi-channel LED lighting technology and light quality management. THOUSLITE is a global leading provider of LED-based standard lighting environments, offering a full range of multi-channel LED lighting products for lighting & vision research, color viewing assessment, and camera & sensor testing. We also provide customization services. The THOUSLITE LEDCube, an any-SPD simulator, is designed to build customized or large lighting spaces, and the THOUSLITE LEDView lighting cabinet is used for standard lighting spaces.

VPixx Technologies Inc.

Booths 4 & 5
VPixx Technologies welcomes the vision community to VSS 2017, and is excited to demonstrate our TRACKPixx 2 kHz binocular eye tracker alongside the PROPixx DLP LED video projector, now supporting refresh rates up to 1440 Hz. The PROPixx has been designed specifically for the generation of precise high-refresh-rate stimuli for gaze-contingent, stereoscopic, and other dynamic applications. The PROPixx is the most flexible display possible for vision research, featuring resolutions up to 1920×1080, and a perfectly linear gamma. The solid state LED light engine has 30x the lifetime of halogen projectors, a wider color gamut, and zero image ghosting for stereo vision applications. Our high speed circular polarizer can project 480 Hz stereoscopic stimuli for passive polarizing glasses into MRI and MEG environments. Come and see the SHIELDPixx Faraday cage for installing the PROPixx inside an MRI/MEG room. In addition, the PROPixx includes an embedded data acquisition system, permitting microsecond synchronization between visual stimulation and other types of I/O including eye tracking, EEG, TMS, audio stimulation, button box input, TTL trigger output, analog acquisition, and more! VPixx Technologies will be using the PROPixx/TRACKPixx combination to demonstrate a new set of gaze-contingent paradigms!

WorldViz

Booth 12
WorldViz is the industry leader in immersion-ready virtual reality (VR) solutions. WorldViz’s interactive visualization and simulation technologies are deployed across 1500+ Fortune 500 companies, academic institutions and government agencies.

WorldViz’s core products are Vizard, a specialized development platform for professional VR application development, and VizMove, the world’s only enterprise-class VR software and hardware solution. WorldViz also offers PPT, a high-precision wide-area motion tracking system, as well as professional consulting and content creation services. WorldViz technology enables users to replace physical processes with immersive virtual methods. Applications range from design visualization and industrial training to interactive education and scientific research.

WorldViz has recently introduced the VR Collaboration Platform code-named ‘Project Skofield’ and will show a demo preview of this platform at VSS 2017.

Advertisers

SR Research

OPAM

2017 Research Fellowship

The purpose of the ARVO/VSS Research Fellowship is to encourage and foster new collaborations between clinical and basic vision researchers to better train young scientists in the area of translational research. These fellowships will provide research funds to support students who wish to acquire training in a cross-disciplinary lab to promote their ability to perform translational research and compete for research funding as their career matures. In concept, trainees working in a clinical environment but desiring a career in translational research would benefit from a mentored program in a more basic science lab and a trainee in a basic research environment would benefit from a mentored program in a lab conducting translational research in a clinical environment.

The 2017 ARVO/VSS Research Fellowship Recipient

Kathryn Bonnen

University of Texas at Austin

Kathryn Bonnen will apply her training in the perception of 3-dimensional motion and sensorimotor control to investigate how individuals with amblyopia use vision to guide action in everyday tasks.


15th Annual Dinner and Demo Night

Monday, May 22, 2017, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall

Please join us Monday evening for the 15th Annual VSS Dinner and Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Arthur Shapiro, American University; Gennady Erlikhman, University of Nevada, Reno and Karen Schloss, Brown University.

Demos are free to view for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a VSS Friends and Family Pass to attend the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. Guest passes may also be purchased at the BBQ function, beginning at 5:45 adjacent to the Salty’s Tiki Bar.

The following demos will be presented from 7:00 to 10:00 pm, in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall:

Rotating squares look like pincushions

Stuart Anstis, Sae Kaneko, UC San Diego

A square that rotates about its own center appears to be distorted into a pincushion with concave sides. This illusory shape change is caused by a perceived compression along the curved path of motion. Corners stick out furthest from the center of rotation, so they appear pinched the most.

The Rotating Line

Kyle W. Killebrew, Sungjin Im, Gideon Paul Caplovitz, University of Nevada Reno

If a line changes size as it rotates around its center, it will appear to speed up and slow down as a function of its length: speeding up as the line gets longer and slowing down as it gets shorter. Why can’t the visual system get even this simplest of things right?

Biological Motion

Andre Gouws, Tim Andrews, Rob Stone, University of York

A real-time demonstration of biological motion. Walk, jump, or dance in front of the sensor and your actions are turned into a point-light display. Using an Xbox Kinect sensor and our free software, you can produce this effect for yourself.

Thatcherize your face

Andre Gouws, Peter Thompson, University of York

The Margaret Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherized’ in real time and we print you a copy of the image to take away.

The Ever-Popular Beuchet Chair

Peter Thompson, Rob Stone, Tim Andrews, University of York

A favorite at Demo Night for the past few years, the Beuchet chair is back with yet another modification. The two parts of the chair are at different distances, and the visual system fails to apply size constancy appropriately. The result is that people can be shrunk or made into giants.

Hemifield-specific camouflage and persistence

Zhiheng Zhou, Lars Strother, University of Nevada Reno

Zhou and Strother (2017) recently reported a new psychophysical method of studying contour visibility under conditions of impending camouflage. Here we show that portions of a single contour or two simultaneously visible contours, one viewed in each hemifield, can succumb to camouflage at different times.

Full immersion in VR with remote interactivity

Matthias Pusch, WorldViz

We will immerse two participants at a time with a high-end VR system and have them experience interactivity with a remote (west coast or European) set of participants in the same VR session. What can be observed is the level of natural interaction that evolves. Such co-located and/or remote interactivity is an eye-opener for understanding the potential and implications of VR for the future of communication and training.

Audio-Visual Perceptual Illusions: Central/Peripheral Flicker Synchronization by Sound

Shinsuke Shimojo, Caltech, Kensuke Shimojo, St. Mark’s School, and Mohammad Shehata, Caltech

We will demonstrate that simultaneously pulsed circular targets (with a flicker frequency of 4 to 6 Hz), one viewed centrally and the other peripherally, appear to pulse at different rates (likely due to differences in the cone and rod systems), but can be synchronized with a pulsed audio stimulus that captures the visual percept.

Audio-Visual Perceptual Illusions: Expanding/Contracting Double Flash and Spatial Double Flash

Bolton Bailey, Caltech, Noelle R. B. Stiles, University of Southern California and Caltech, Shinsuke Shimojo, Caltech, and Armand R. Tanguay, Jr., University of Southern California and Caltech

At VSS 2016 we demonstrated the “Illusory Rabbit” and “Invisible Rabbit” illusions, both of which indicate that auditory stimuli can capture and modify the perceptual structure of visual stimuli postdictively. This year we will demonstrate two novel variants of the classical double flash illusion, one in which the visual stimulus is a circular contrast gradient that appears to vary dynamically in size, and another in which sequential tones from two separated speakers paired with a single flash induce an illusory flash displaced in the direction of apparent auditory motion.

Virtual Reality Real-time Multiple Object Tracking Psychophysics Platform

Steven Oliveira, Mohammed Islam, Elan Barenholtz, Mike Kleinman, Shannon Whitney, Florida Atlantic University

An experimental platform for immersive multiple object tracking experiments using a state-of-the-art virtual reality system. Come enjoy the next generation of psychophysics experiments in a fully immersive 3D environment.

Egocentric and egophobic images

Dejan Todorovic, University of Belgrade, Serbia

Some portraits look (generally) at you from (almost) everywhere – but others never do. Likewise, some depicted roads (practically) always point (by and large) at you – but others never do. Check out how salient these effects are simply by inspecting pairs of identical large images spaced widely apart.

Using Mixed Reality to Study the Freezing Rotation Illusion

Max R. Dürsteler, Department of Neurology, University Hospital Zurich

Using a Microsoft HoloLens, I demonstrate 3D versions of the “Freezing Rotation Illusion”. With a back-and-forth rotating tubular structure surrounding a constantly turning airplane model, the plane is perceived as slowing down when it co-rotates with its surround and speeding up otherwise, regardless of the observer’s position.

BrainWalk: Exploring the Virtual Brain in immersive virtual reality

Simon Smith, Bas Rokers, Nathaniel Miller, Ross Tredinnick, Chris Racey, Karen B. Schloss, University of Wisconsin – Madison

We will present a Virtual Brain, which uses immersive virtual reality to visualize the human brain. Wearing an Oculus Rift, you can explore a 3D volumetric brain built from real neuroimaging data. You can also play BrainWalk, a game created to help improve the visual design based on user performance.

Augmented BrainWalk: Hands-on Augmented Reality 3D Brain Exploration

Stefano Baldassi, Moqian Tian, Meta Company; Bas Rokers, Nathaniel Miller, Ross Tredinnick, Chris Racey, Karen Schloss, University of Wisconsin, Madison & Wisconsin Institute for Discovery

We present an Augmented Reality tool that allows users to visualize brain structures in 3D and manipulate them directly. This tool has special advantages in education, in that users can see through the real world, allowing direct teacher-student communication while interacting with the same brain model.

See your own Saccades

Peter April, Jean-Francois Hamelin, Danny Michaud, Stephanie-Ann Seguin, VPixx Technologies

VPixx Technologies presents a series of demonstrations which combine the PROPixx 1440Hz refresh rate visual display, and the TRACKPixx 2kHz eye tracker. See your own saccadic eye movement path plotted directly onto your own retina. Question saccadic suppression by examining objects which are visible only during saccades. See what happens when visual stimuli are stabilized on your retina.

High Speed Gaze-Contingent Visual Search

Kurt Debono, Dan McEchron, SR Research Ltd.

Try to find the target in a visual search array which is continuously being updated based on the location of your gaze. High speed video based eye tracking combined with the latest high speed monitors make for a compelling challenge.

Eyes Wide Shut Illusion

Shaul Hochstein, Hebrew University, Jerusalem

The “Eyes Wide Shut” illusion uses a curved/enlarging mirror to observe one eye at a time, and then, surprisingly, both eyes together in one integrated view. It demonstrates mirror action, binocular integration, and how prior assumptions determine how very approximate information from the world creates perception.

Visual Attention EEG Challenge

Lloyd Smith, Jakob Thomassen, Cortech Solutions, Inc., Cambridge Research Systems, Ltd.

Take the EEG Frequency Tagging Challenge to see whether you or your colleagues will take home the prize for most robust visual spatial attention as measured in an EEG SSVEP paradigm. Don’t look away, though, because moving your eyes might be cause for disqualification! Find out once and for all who among you is best able to focus visual attention and avoid distractions.

The Box that Defined a Movement

Joshua E Zosky, Michael D. Dodd, University of Nebraska – Lincoln

By surrounding objects (which can be perceived as moving leftward or rightward) with a three-dimensional box that has a clear direction of motion, viewers are induced to perceive directionally congruent motion. Examples of the phenomenon include: spinning orb, spinning dancer, and The Orb that Destroys Stars.

The size-weight illusion

Cristina de la Malla, Vrije Universiteit Amsterdam

A small object feels heavier than a larger object of the same mass. This is known as the size-weight illusion. We will provide the opportunity to experience several variations of the illusion.

The FechDeck: a handtool for exploring psychophysics

James Ferwerda, Center for Imaging Science, Rochester Institute of Technology

The FechDeck is an ordinary deck of playing cards modified to support exploration of psychophysical methods. The deck allows users to conduct threshold experiments using Fechner’s methods of adjustment, limits, and constant stimuli, scaling experiments using Thurstone’s ranking, pair comparison, and category methods, and Stevens’ method of magnitude estimation.

Going to the movies: Immersion, visual awareness, and memory

Matthew Moran, Derek McClellan, Dr. D. Alexander Varakin, Eastern Kentucky University

The observer will view a movie clip through a scaled-down, detailed replica of a movie theater that served as the experimental condition of the study. An unexpected stimulus will cross the stage area in front of the movie screen at the 6:36 mark.

StroboPong

VSS Staff

Back by popular demand. Strobe lights and ping pong!

2017 Ken Nakayama Medal for Excellence in Vision Science – Jan J. Koenderink

The Vision Sciences Society is honored to present Jan J. Koenderink with the 2017 Ken Nakayama Medal for Excellence in Vision Science.

The Ken Nakayama Medal honors Professor Ken Nakayama’s contributions to the Vision Sciences Society, as well as his innovations and excellence in the domain of vision science.

The winner of the Ken Nakayama Medal receives this honor for high-impact work that has made a lasting contribution in vision science in the broadest sense. The nature of this work can be fundamental, clinical or applied. The Medal is not a lifetime career award and is open to all career stages.

The medal will be presented during the VSS Awards session on Monday, May 22, 2017, 12:30 pm in Talk Room 2.

Jan J. Koenderink

Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Belgium, Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands and Abteilung Allgemeine Psychologie, Justus-Liebig Universität, Giessen, Germany

Only a few scientists can be proud of a real breakthrough in vision science, very few can claim significant advances in multiple aspects of our visual experience, and almost none is an acclaimed researcher in two distinct disciplines. Jan Koenderink is this unique vision scientist. In both human and machine vision, Jan Koenderink has contributed countless breakthroughs towards our understanding of the properties of receptive field profiles, of the different types of optic flow, of the surface characteristics of three-dimensional shape, and more recently of the space of color vision.

Together with his lifelong collaborator Andrea van Doorn, Jan Koenderink has approached each new problem in a humble, meticulous, and elegant way. While some papers may scare the less mathematically inclined reader, a bit of perseverance inevitably leads to the excitement of sharing with him a true insight. These insights have profoundly influenced our understanding of the functioning of the visual system. Some examples include: the structure of images seen through the lens of incremental blurring that led to the now ubiquitous wavelet representation of images, the minimal number of points and views to reconstruct a unique class of three-dimensional structures known as affine representations, the formal description of Alberti’s inventory of shapes from basic differential geometry principles, the careful description of the interplay between illumination and surface reflectance and texture, and many more. The approach of Jan Koenderink to systematically work in parallel on theoretical derivations and on psychophysical experimentation reminds us that behavioral results are uninterpretable without a theoretical framework, and that theoretical advances remain detached from reality without behavioral evidence.

Jan Koenderink trained in astronomy with Maarten Minnaert at the University of Utrecht in the Netherlands, and then in physics and mathematics. He earned his PhD in artificial intelligence and visual psychophysics with Maarten Bouman from Utrecht. He held faculty positions in Utrecht and Groningen in the Netherlands, and guest professorships from Delft University of Technology, MIT in the USA, Oxford in the UK, and KU Leuven in Belgium. Most significantly, he headed the “Physics of Man” department at the University of Utrecht for more than 30 years. Jan Koenderink has authored more than 700 original research articles and published two books of more than 700 pages each. He has received many honors, among them a Doctor Honoris Causa in Medicine from KU Leuven, the Azriel Rosenfeld Lifetime Achievement Award in Computer Vision, the Wolfgang Metzger Award, and the Alexander von Humboldt Prize, and he is a fellow of the Royal Netherlands Academy of Arts and Sciences.


2017 Student Workshops

There is no advanced sign-up for workshops. Workshops will be filled on a first-come first-served basis.

VSS Workshop for PhD Students and Postdocs:
Reviewing and Responding to Review

Sunday, May 21, 2017, 1:00 – 2:00 pm, Sabal/Sawgrass (Jacaranda Hall)
Moderator: Jeremy Wolfe
Panelists: David H. Foster, Isabel Gauthier, Cathleen Moore, Jeremy Wolfe

Peer review of papers and grants is far from perfect, but it is, nevertheless, a pillar of our sciences. Writing reviews and responding to reviews are important, time-consuming tasks. How can we do them better? How much is too much when it comes to review? Should I give the author the benefit of my biting wit? Do I need to respond to every point in the review? When is it OK to say that the reviewer is an idiot? The members of our panel will address these and other questions from the vantage point of their roles as journal editors, grant reviewers, and recipients of reviews. Bring your questions and war stories from the trenches of peer review.

David H. Foster, University of Manchester

David H. Foster is Professor of Vision Systems at the University of Manchester. His research interests are in human vision, mathematical and statistical modelling, and applications to machine and biological vision systems. He has served as journal editor for over thirty years, most recently as editor-in-chief of Vision Research. His book, A Concise Guide to Communication in Science & Engineering, which is based on courses given to graduate students and early-career researchers, is due to be published by Oxford University Press in 2017.

Isabel Gauthier, Vanderbilt University

Isabel Gauthier is David K Wilson Professor of Psychology at Vanderbilt University. She received her PhD from Yale in 1998 and is the recipient of several awards, including the Troland award from the National Academy of Sciences. She heads the Object Perception Laboratory, where investigators use behavioral and brain imaging methods to study perceptual expertise, object and face recognition, and individual differences in vision. She has served as associate editor at several journals, and is currently the outgoing Editor of the Journal of Experimental Psychology: General and the incoming Editor of the Journal of Experimental Psychology: Human Perception and Performance.

Cathleen Moore, University of Iowa

Cathleen Moore is a Professor of Psychology at the University of Iowa, where she heads up the Iowa Attention and Perception Lab. Her research focuses on visual attention and perceptual organization. She has been on the Governing Board of the Psychonomic Society since 2010, having served as Chair in 2016. She was Editor of Psychonomic Bulletin & Review from 2011-14, and Associate Editor of the same journal from 2002-05. She has written and read a lot of reviews over the years.

Jeremy Wolfe, Harvard Medical School

Jeremy Wolfe is Professor of Ophthalmology and Professor of Radiology at Harvard Medical School. He is Director of the Visual Attention Lab at Brigham and Women’s Hospital. His research focuses on visual search and visual attention with a particular interest in socially important search tasks in areas such as medical image perception (e.g. cancer screening), security (e.g. baggage screening), and intelligence. In the world of reviewing he has served as Editor of Attention, Perception, and Psychophysics and is the founding Editor of the Psychonomic Society’s new open-access journal, Cognitive Research: Principles and Implications. He will be moderating this session.

VSS Workshop for PhD Students and Postdocs:
Careers in Industry and Government

Sunday, May 21, 2017, 1:00 – 2:00 pm, Jasmine/Palm (Jacaranda Hall)
Moderator: David Brainard
Panelists: Kurt Debono, Kevin MacKenzie, Alex Smolyanskaya, Cheri Wiggs, David Brainard

Scientific training often focuses on preparation for careers in academia, in part because those providing the training are generally academics themselves and thus most familiar with the academic track. This workshop will provide an opportunity for students and post-docs to learn more about career opportunities for vision scientists outside of academia, specifically careers in industry and government. Panelists will provide brief introductory remarks touching on how their scientific training prepared them for their current career, how they obtained their position, and what they have found rewarding about their career path. This will be followed by an audience-driven discussion where panelists will respond to questions and speak to issues raised by audience members.

Kurt Debono, SR Research

Kurt works in eye tracking technology with SR Research Ltd in Brighton, UK. He earned his PhD in vision science at Giessen University and made his transition from academia five years ago.

Kevin J. MacKenzie, Oculus

Kevin J. MacKenzie is a research scientist at Oculus Research, a multi-disciplinary research team within Oculus. He conducted his PhD work in Laurie Wilcox’s lab at York University’s Centre for Vision Research and held a post-doctoral fellowship at Bangor University from 2008 through 2012 under the tutelage of Simon Watt. Prior to Oculus, he was part of the Microsoft HoloLens team, holding positions as a human factors engineer and user experience researcher.

Alex Smolyanskaya, Stitch Fix

Alex is a data scientist at Stitch Fix in San Francisco, where she works on forecasting demand and macro client behavior. She got her PhD in Neuroscience at Harvard and was a postdoc in Nicole Rust’s lab at the University of Pennsylvania. She made the transition from academia to data science two years ago via Insight Data Science, a post-doctoral fellowship program specifically designed to prepare scientists for interviews and careers in industry.

Cheri Wiggs, National Eye Institute

Cheri Wiggs serves as a Program Director at the National Eye Institute (of the National Institutes of Health). She oversees extramural funding through three programs — Perception & Psychophysics, Myopia & Refractive Errors, and Low Vision & Blindness Rehabilitation. She received her PhD from Georgetown University in 1991 and came to the NIH as a researcher in the Laboratory of Brain and Cognition. She made her jump to the administrative side of science in 1998 as a Scientific Review Officer. She currently represents the NEI on several NIH coordinating committees (including BRAIN, Behavioral and Social Sciences Research, Medical Rehabilitation Research) and was appointed to the NEI Director’s Audacious Goals Initiative Working Group.

David Brainard, University of Pennsylvania

David H. Brainard is the RRL Professor of Psychology at the University of Pennsylvania. He is a fellow of the Optical Society, ARVO and the Association for Psychological Science. At present, he directs Penn’s Vision Research Center, co-directs Penn’s Computational Neuroscience Initiative, co-directs Penn’s NSF funded certificate program in Complex Scene Perception, is on the Board of the Vision Sciences Society, and is a member of the editorial board of the Journal of Vision. His research interests focus on human color vision, which he studies both experimentally and through computational modeling of visual processing. He will be moderating this session.

2017 Young Investigator – Janneke F.M. Jehee

Vision Sciences Society is honored to present Janneke F.M. Jehee with the 2017 Young Investigator Award

Janneke F.M. Jehee

Principal Investigator at the Center for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, the Netherlands

Uncertainty and optimization in human vision

Dr. Jehee will talk during the Awards Session
Monday, May 22, 2017, 12:30 – 1:30 pm, Talk Room 2

We tend to trust our eyes, believing them to be reliable purveyors of information about our visual environment. In truth, however, the signals they produce from moment to moment are noisy and incomplete. How do we ‘decide’ what we see based on such limited and uncertain information? In this talk, I will present theoretical as well as experimental work to address this question. I will first discuss a computational model of predictive neural coding. The model suggests that the visual system may use top-down interactions between areas to reduce the degree of uncertainty in its perceptual representations. I will then present experimental findings on top-down attention and perceptual learning, and show that these processes reduce the uncertainty in the representation of stimulus features in visual cortex. Finally, I will present recent neuroimaging results indicating that the degree of uncertainty in cortical representations can be characterized on a trial-by-trial basis. This work shows that the fidelity of visual representations can be directly linked to the observer’s perceptual decisions.

Biography

Janneke F.M. Jehee is a tenured Principal Investigator at the Center for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behavior, Nijmegen, the Netherlands, where she directs the Visual Computation & Neuroimaging group. She received her Ph.D. in Psychology from the University of Amsterdam under the direction of Victor Lamme. She then moved on to postdoctoral work, first in computational neuroscience at the University of Rochester with Dana Ballard, and then in fMRI research at Vanderbilt University with Frank Tong. Dr. Jehee’s work has been supported by numerous grants and fellowships, including from the Netherlands Organization for Scientific Research and the European Research Council.

Dr. Jehee works on the fundamental problem of understanding how the brain represents the visual properties of the environment. Her contributions have used multiple approaches, including computational modeling, psychophysical experimentation and fMRI, to study the interaction between the bottom-up encoding of stimulus features and top-down influences, such as predictability, attention, and learning. She has developed a series of innovative and rigorous computational models of neural coding, and tested those models against data from single neurons and fMRI, as well as psychophysical observations. In her early work, which was focused on predictive neural coding, she developed models showing that predictive feedback could account for aspects of the tuning properties of cortical neurons, as well as the temporal response properties of neurons in the lateral geniculate nucleus. She also contributed to the development of a neural model of temporal coding based on timed circuits in the gamma frequency range.

In her fMRI research, Dr. Jehee has conducted important studies that have shed light on the neural mechanisms of spatial and feature-based attention, and the impact of perceptual learning on early visual cortical representations. In collaboration with her students and colleagues at the Donders Institute, she tackled an important conundrum regarding predictive neural coding, namely, why neural signals for predictable stimuli are typically suppressed relative to those for novel stimuli, while neural signals for attended stimuli are often enhanced. Jehee showed that while the strength of signals representing highly predictable stimuli may be suppressed, the precision of the neural representation of these stimuli is improved.

In more recent, ground-breaking work, Jehee and her lab developed a new technique that can estimate the neural uncertainty of visuocortical representations of stimuli on a moment-to-moment basis, directly linking neural uncertainty to perceptual decisions of the observer.

In addition to these stellar research accomplishments, Dr. Jehee has participated in the training of many graduate students and postdoctoral fellows, who attest to her creativity, courage, and unwavering dedication to both the work and the students she is training.


Bruce Bridgeman Memorial Symposium

Friday, May 19, 2017, 9:00 – 11:30 am, Pavilion

Organizer: Susana Martinez-Conde, State University of New York

Speakers: Stephen L. Macknik, Stanley A. Klein, Susana Martinez-Conde, Paul Dassonville, Cathy Reed, and Laura Thomas

Professor Emeritus of Psychology Bruce Bridgeman was tragically killed on July 10, 2016, after being struck by a bus in Taipei, Taiwan. Those who knew Bruce will remember him for his sharp intellect, genuine sense of humor, intellectual curiosity, thoughtful mentorship, gentle personality, musical talent, and committed peace, social justice, and environmental activism. This symposium will highlight some of Bruce’s many important contributions to perception and cognition, which included spatial vision, perception/action interactions, and the functions and neural basis of consciousness.

Please also visit the Bruce Bridgeman Tribute website.

A Small Piece of Bruce’s Legacy

Stephen L. Macknik, State University of New York

Consciousness and Cognition

Stanley A. Klein, UC Berkeley

Bruce Bridgeman’s Pioneering Work on Microsaccades

Susana Martinez-Conde, State University of New York

The Induced Roelofs Effect in Multisensory Perception and Action

Paul Dassonville, University of Oregon

Anything I Could Do Bruce Could Do Better

Cathy Reed, Claremont McKenna College

A Legacy of Action

Laura Thomas, North Dakota State University

2017 Student Travel Awards

Kamran Binaee
Rochester Institute of Technology
Advisor: Gabriel J. Diaz
Kathryn Bonnen
University of Texas at Austin
Advisors: Alexander C. Huk, Lawrence K. Cormack
Sasskia Brüers
Université de Toulouse Paul Sabatier
Advisor: Rufin VanRullen
Blaire Dube
University of Guelph
Advisor: Naseem Al-Aidroos
Mizuki Fujita
Osaka University
Advisors: Ichiro Fujita, Kaoru Amano, Hiroshi Ban
Christine Gamble
Brown University
Advisor: Joo-Hyun Song
Rinat Hilo
Tel-Aviv University
Advisor: Shlomit Yuval-Greenberg
Janis Intoy
Boston University
Advisor: Michele Rucci
Sha Li
University of Minnesota
Advisor: Yuhong Jiang
Matthew Lowe
University of Toronto
Advisors: Dirk Bernhardt-Walther, Susanne Ferber, Jonathan S. Cant
Long Luu
University of Pennsylvania
Advisor: Alan A. Stocker
Takuma Morimoto
University of Oxford
Advisor: Hannah Smithson
Joel Robitaille
Brock University
Advisor: Stephen M. Emrich
Richard Schweitzer
Humboldt-Universität zu Berlin
Advisor: Martin Rolfs
David Sutterer
University of Chicago
Advisor: Ed Awh
Diana Tonin
University of East Anglia
Advisor: Stephanie Rossit
Ruben van Bergen
Donders Institute for Brain, Cognition & Behavior
Advisor: Janneke Jehee
Greta Vilidaite
University of York
Advisor: Daniel H. Baker
Vy Vo
University of California, San Diego
Advisor: John Serences
Paul Zerr
Utrecht University
Advisors: Albert Postma, Stefan Van der Stigchel

2017 Davida Teller Award – Mary Hayhoe

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding woman vision scientist with a strong history of mentoring.

Vision Sciences Society is honored to present Dr. Mary Hayhoe with the 2017 Davida Teller Award

Mary Hayhoe

Professor of Psychology, Center for Perceptual Systems, University of Texas Austin

Vision in the context of natural behavior

Dr. Hayhoe will talk during the Awards Session
Monday, May 22, 2017, 12:30 – 1:30 pm, Talk Room 2

Investigation of vision in the context of ongoing behavior has contributed a number of insights by highlighting the importance of behavioral goals, and focusing attention on how vision and action play out in time. In this context, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information for making good decisions in order to achieve those goals. I will review the factors that control gaze in natural behavior, including evidence for the role of the task, which defines the immediate goals, the rewards and costs associated with those goals, uncertainty about the state of the world, and prior knowledge.

Biography

Mary Hayhoe is an outstanding scientist who has made a number of highly innovative and important contributions to our understanding of visual sensation, perception and cognition. She received her PhD in 1980 from UC San Diego and served on the faculty at the University of Rochester (1984 – 2005) and University of Texas at Austin (2006 – present). Her scientific career began with a long series of fundamental and elegant studies on visual sensitivity, adaptation and color vision. During this period, Mary was a well‐funded and internationally‐recognized leader in these areas of research; indeed, her work in these areas is still having an important influence.

She then made a dramatic shift in fields, leaving retinal and color psychophysics entirely. With this change, Mary Hayhoe and her colleagues became pioneers in developing a new research area that examines behavior in semi-naturalistic situations. Her research is not about the perceptual or motor system in isolation, but how these systems work together to generate behavior. At the time (the early 1990s), there had been very few attempts to understand visual and cognitive processing in natural visual tasks. Mary and her colleagues were really the first to develop research methods for rigorously studying visual memory, attention and eye movements in natural everyday tasks (making a sandwich, copying block patterns, walking in cluttered environments, etc.). Prior to this work most scientists believed that little of fundamental or general importance could come from working with such complex tasks, because so many neural and motor mechanisms are involved, and because of the difficulty of exerting sufficient experimental control. However, Mary recognized and beautifully exploited the potential of eye, head and body tracking technology, and of virtual‐reality technology, for rigorously addressing the problem of understanding perceptual and cognitive processing in natural tasks.

Mary Hayhoe is one of the founders and acknowledged leaders of a new field where there is much deserved emphasis on behavior in the real world. Her care and imagination are always evident, providing an admirable standard for young men and women alike. Her former graduate students and post‐doctoral researchers readily acknowledge that her mentoring, investment in their futures, and friendship played an important role in their development as scientists and critical thinkers.
