Prediction in perception and action

Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany
Presenters: Mary Hayhoe, Miriam Spering, Cristina de la Malla, Katja Fiehler, Kathleen Cullen

Symposium Description

Prediction is an essential mechanism enabling humans to prepare for future events. This is especially important in a dynamically changing world, which requires rapid and accurate responses to external stimuli. Predictive mechanisms work on different time scales and at various information processing stages. They allow us to anticipate the future state both of the environment and of ourselves. They are instrumental in compensating for noise and delays in the transmission of neural signals and allow us to distinguish external events from the sensory consequences of our own actions. While it is unquestionable that predictions play a fundamental role in perception and action, their underlying mechanisms and neural basis are still poorly understood. The goal of this symposium is to integrate recent findings from psychophysics, sensorimotor control, and electrophysiology to update our current understanding of predictive mechanisms in different sensory and motor systems. It brings together a group of leading scientists at different stages of their careers who have all made important contributions to this topic. Two prime examples of predictive processes are considered: interacting with moving stimuli and performing self-generated movements. The first two talks, from Hayhoe and Spering, will focus on the oculomotor system, which provides an excellent model for examining predictive behavior. They will show that smooth pursuit and saccadic eye movements contribute significantly to successful predictions of future visual events. Moreover, Hayhoe will provide examples of recent advances in the use of virtual reality (VR) techniques to study predictive eye movements in more naturalistic situations with unrestrained head and body movements. De la Malla will extend these findings to the hand movement system by examining interceptive manual movements. She will conclude that predictions are continuously updated and combined with online visual information to optimize behavior. The last two talks, from Fiehler and Cullen, will take a different perspective by considering predictions during self-generated movements. Such predictive mechanisms have been associated with a forward model that predicts the sensory consequences of our own actions and cancels the respective sensory reafferences. Fiehler will focus on such cancellation mechanisms and present recent findings on tactile suppression during hand movements. Based on electrophysiological studies of self-motion in monkeys, Cullen will finally address where and how the brain compares expected and actual sensory feedback. In sum, this symposium targets the general VSS audience and aims to provide a novel and comprehensive view of predictive mechanisms in perception and action, spanning from behavior to neurons and from strictly laboratory tasks to (virtual) real-world scenarios.

Presentations

Predictive eye movements in natural vision

Speaker: Mary Hayhoe, Center for Perceptual Systems, University of Texas Austin, USA

Natural behavior can be described as a sequence of sensory-motor decisions that serve behavioral goals. To make action decisions, the visual system must estimate the current world state. However, sensory-motor delays present a problem for a reactive organism in a dynamically changing environment. Consequently, it is advantageous to predict future state as well. This requires some kind of experience-based model of how the current state is likely to change over time. It is commonly accepted that the proprioceptive consequences of a planned movement are predicted ahead of time using stored internal models of the body’s dynamics. It is also commonly assumed that prediction is a fundamental aspect of visual perception, but the existence of visual prediction and the particular mechanisms underlying such prediction are unclear. Some of the best evidence for prediction in vision comes from the oculomotor system. In this case, both smooth pursuit and saccadic eye movements reveal prediction of the future visual stimulus. I will review evidence for prediction in interceptive actions in both real and virtual environments. Subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions and the head and body are unrestrained. These predictions appear to be used in common by both eye and arm movements. Predictive eye movements reveal that the observer’s best guess at the future state of the environment is based on image data in combination with representations that reflect learnt statistical properties of dynamic visual environments.

Smooth pursuit eye movements as a model of visual prediction

Speaker: Miriam Spering, Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada

Real-world movements, ranging from intercepting prey to hitting a ball, require rapid prediction of an object’s trajectory from a brief glance at its motion. The decision whether, when and where to intercept is based on the integration of current visual evidence, such as the perception of a ball’s direction, spin and speed. However, perception and decision-making are also strongly influenced by past sensory experience. We use smooth pursuit eye movements as a model system to investigate how the brain integrates sensory evidence with past experience. This type of eye movement provides a continuous read-out of information processing while humans look at a moving object and make decisions about whether and how to interact with it. I will present results from two different series of studies: the first utilizes anticipatory pursuit as a means to understand the temporal dynamics of prediction, and probes the modulatory role of expectations based on past experience. The other reveals the benefit of smooth pursuit itself, in tasks that require the prediction of object trajectories for perceptual estimation and manual interception. I will conclude that pursuit is both an excellent model system for prediction, and an important contributor to successful prediction of object motion.

Prediction in interceptive hand movements

Speaker: Cristina de la Malla, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands

Intercepting a moving target requires spatial and temporal precision: the target and the hand need to be at the same position at the same time. Since both the target and the hand move, we cannot simply aim for the target’s current position, but need to predict where the target will be by the time we reach it. We normally track targets continuously with our gaze, unless the characteristics of the task or of the target make it impossible to do so. In that case, we make saccades and direct our movements towards specific locations where we predict the target will be in the future. If the precise location at which one is to hit the target only becomes evident as the target approaches the interception area, the gaze, head and hand movements towards this area are delayed because the target’s future position cannot be predicted in advance. Predictions are continuously updated and combined with online visual information to optimize our actions: the less predictable the target’s motion, the more we have to rely on online visual information to guide our hand to intercept it. Updating predictions with online information allows us to correct for any mismatch between the predicted target position and the hand position during an ongoing movement, but any perceptual error that is still present at the last moment at which we can update our prediction will result in an equivalent interception error.

Somatosensory predictions in reaching

Speaker: Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany

Movement planning and execution lead to changes in somatosensory perception. For example, tactile stimuli on a moving limb are typically perceived as weaker and later in time than stimuli on a resting limb. This phenomenon is termed tactile suppression and has been linked to a forward-model mechanism which predicts the sensory consequences of the self-generated action and, as a result, discounts the respective sensory reafferences. As tactile suppression is also evident in passive hand movements, both predictive and postdictive mechanisms may be involved. However, its functional role is still largely unknown. It has been proposed that tactile suppression prevents sensory overload due to the large amount of afferent information generated during movement and therefore facilitates the processing of external sensory events. However, if tactile feedback from the moving limb is needed to gain information, e.g. at the fingers involved in grasping, tactile sensitivity is less strongly reduced. In this talk, I will present recent results from a series of psychophysical experiments showing that tactile sensitivity is dynamically modulated during the course of the reaching movement depending on the reach goal and the predicted movement consequences. These results provide first evidence that tactile suppression may indeed free capacities to process other, movement-relevant somatosensory signals. Moreover, the observed perceptual changes were associated with adjustments in the motor system, suggesting a close coupling of predictive mechanisms in perception and action.

Prediction during self-motion: the primate cerebellum selectively encodes unexpected vestibular information

Speaker: Kathleen Cullen, Department of Physiology, McGill University, Montréal, Québec, Canada

A prevailing view is that the cerebellum is the site of a forward model that predicts the expected sensory consequences of self-generated action. Changes in the motor apparatus and/or environment will cause a mismatch between the cerebellum’s prediction and the actual resulting sensory stimulation. This mismatch – the ‘sensory prediction error’ – is thought to be vital for updating both the forward model and the motor program during motor learning to ensure that sensory-motor pathways remain calibrated. However, where and how the brain compares expected and actual sensory feedback was unknown. In this talk, I will first review experiments that focused on a relatively simple sensory-motor pathway with a well-described organization to gain insight into the computations that drive motor learning. Specifically, the most medial of the deep cerebellar nuclei (the rostral fastigial nucleus) constitutes a major output target of the cerebellar cortex and in turn sends strong projections to the vestibular nuclei, reticular formation, and spinal cord to generate reflexes that ensure accurate posture and balance. Trial-by-trial analysis of these neurons in a motor learning task revealed the output of a computation in which the brain selectively encodes unexpected self-motion (vestibular information). This selectivity enables both i) the rapid suppression of descending reflexive commands during voluntary movements and ii) the rapid updating of motor programs in the face of changes to either the motor apparatus or the external environment. I will then consider the implications of these findings in light of our recent work on the thalamo-cortical processing of vestibular information.
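
The comparison described above can be sketched, in simplified form, as a subtraction of the forward model’s predicted reafference from the total vestibular signal, leaving only the unexpected (exafferent) component. The signals and variable names below are purely illustrative assumptions, not the recordings or the model from these experiments:

```python
import numpy as np

def prediction_error(actual_vestibular, predicted_reafference):
    """Sensory prediction error: the component of the vestibular signal not
    explained by the forward model's prediction of the sensory consequences
    of self-generated movement (illustrative sketch only)."""
    return actual_vestibular - predicted_reafference

# Toy example: an active head turn (fully predicted by the forward model)
# plus an unexpected external perturbation halfway through.
t = np.linspace(0.0, 1.0, 100)
active_motion = np.sin(2 * np.pi * t)        # self-generated head velocity
perturbation = 0.3 * (t > 0.5)               # unexpected passive motion
actual = active_motion + perturbation        # total vestibular afference
predicted = active_motion                    # forward-model prediction (reafference)

error = prediction_error(actual, predicted)  # ~= the perturbation alone
```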

Advances in temporal models of human visual cortex

Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 2
Organizer(s): Jonathan Winawer, Department of Psychology and Center for Neural Science, New York University. New York, NY
Presenters: Geoffrey K. Aguirre, Christopher J. Honey, Anthony Stigliani, Jingyang Zhou

Symposium Description

The nervous system extracts meaning from the distribution of light over space and time. Spatial vision has been a highly successful research area, and the spatial receptive field has served as a fundamental and unifying concept that spans perception, computation, and physiology. While there has also been considerable interest in temporal vision, the temporal domain has lagged behind the spatial domain in terms of quantitative models of how signals are transformed across the visual hierarchy (with the notable exception of motion processing). In this symposium, we address the question of how multiple areas in human visual cortex encode information distributed over time. Several groups have made important contributions in recent years to measuring and modeling temporal processing in human visual cortex. Some of this work shows parallels with spatial vision. For example, one important development has been the notion of a cortical hierarchy of increasingly long temporal windows, paralleling the hierarchy of spatial receptive fields (Hasson et al, 2009; Honey et al, 2012; Murray et al, 2014). A second type of study, from Geoff Aguirre’s lab, has combined the tradition of repetition suppression (Grill-Spector et al, 1999) with the notion of multiple time scales across the visual pathways to develop a computational model of how sequential stimuli are encoded in multiple visual areas (Mattar et al, 2016). Finally, several groups, including the Grill-Spector and Winawer labs, have extended the tools of population receptive field models from the spatial to the temporal domain, building models that predict how multiple cortical areas respond to arbitrary temporal sequences of visual stimulation (Horiguchi et al, 2009; Stigliani and Grill-Spector, 2017; Zhou et al 2017). Across the groups, there have been some common findings, such as the general tendency toward longer periods of temporal interaction in later visual areas. However, there are also a number of challenges in considering these recent developments together. For example, can (and should) we expect the same kinds of theories and models to account for temporal interactions both in early visual areas at the time-scale of tens of milliseconds and in later visual areas at the time-scale of seconds or minutes? How do the temporal properties of visual areas depend on spatial aspects of the stimuli? Should we expect principles of spatial computation, such as hierarchical pooling and normalization, to transfer analogously to the temporal domain? To what extent do temporal effects depend on task? Can temporal models at the scale of large neuronal populations (functional MRI, intracranial EEG) be explained in terms of the behavior of single neurons, and should this be a goal? Through this symposium, we aim to present an integrated view of the recent literature on temporal modeling of visual cortex, with each presenter both summarizing a recent topic and answering a common set of questions. The common questions posed to each presenter will be used to assess both the progress and the limits of recent work, with the goal of crystallizing where the field might go next in this important area.

Presentations

Variation in Temporal Stimulus Integration Across Visual Cortex

Speaker: Geoffrey K. Aguirre, Department of Neurology, Perelman School of Medicine, University of Pennsylvania
Additional Authors: Marcelo G. Mattar, Princeton Neuroscience Institute, Princeton University; David A. Kahn, Department of Neuroscience, University of Pennsylvania; Sharon L. Thompson-Schill, Department of Psychology, University of Pennsylvania

Object perception is shaped by the long-term average of experience as well as by immediate, comparative context. Measurements of brain activity have demonstrated corresponding neural mechanisms, including norm-based responses reflective of stored prototype representations, and adaptation induced by the immediately preceding stimulus. Our recent work examines the time-scale of integration of sensory information, and explicitly tests the idea that the apparently separate phenomena of norm-based coding and adaptation can arise from a single mechanism of sensory integration operating over varying timescales. We used functional MRI to measure neural responses from the fusiform gyrus while subjects observed a rapid stream of face stimuli. Neural activity at this cortical site was best explained by the integration of sensory experience over multiple sequential stimuli, following a decaying-exponential weighting function. While this neural activity could be mistaken for immediate neural adaptation or long-term, norm-based responses, it in fact reflected a timescale of integration intermediate to both. We then examined the timescale of sensory integration across the cortex. We found a gradient ranging from rapid sensory integration in early visual areas to long-term, stable representations towards higher-level, ventral-temporal cortex. These findings were replicated with a new set of face stimuli and subjects. Our results suggest that a cascade of visual areas integrates sensory experience, transforming highly adaptable responses at early stages into stable representations at higher levels.
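
As a rough illustration of the integration scheme described above, a decaying-exponential weighting of stimulus history can be sketched as follows; the time constant and the feature vectors are placeholders, not the fitted values from the study:

```python
import numpy as np

def integrated_response(stimulus_history, tau=3.0):
    """Integrate a stimulus stream with a decaying-exponential weighting
    function: the most recent stimulus is weighted most, and stimuli further
    in the past contribute exponentially less (illustrative sketch).
    stimulus_history: array of shape (n_stimuli, n_features), oldest first."""
    history = np.asarray(stimulus_history, dtype=float)
    lags = np.arange(len(history) - 1, -1, -1)   # lag 0 = most recent stimulus
    weights = np.exp(-lags / tau)
    weights /= weights.sum()
    return weights @ history

# Toy usage: a stream of face-feature vectors. A short time constant behaves
# like adaptation to the preceding stimulus; a very long one approaches a
# stable, norm-like average of long-term experience.
stream = np.random.randn(20, 5)
adaptation_like = integrated_response(stream, tau=1.0)
norm_like = integrated_response(stream, tau=50.0)
```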

Temporal Hierarchies in Human Cerebral Cortex

Speaker: Christopher J. Honey, Department of Psychological & Brain Sciences, Johns Hopkins University
Additional Authors: Hsiang-Yun Sherry Chien, Psychological and Brain Sciences, Johns Hopkins University; Kevin Himberger, Psychological and Brain Sciences, Johns Hopkins University

Our understanding of each moment of the visual world depends on the previous moment. We make use of temporal context to segregate objects, to accumulate visual evidence, to comprehend sequences of events, and to generate predictions. Temporal integration — the process of combining past and present information — appears not to be restricted to specialized subregions of the brain, but is widely distributed across the cerebral cortex. In addition, temporal integration processes appear to be systematically organized into a hierarchy, with gradually greater context dependence as one moves toward higher-order regions. What is the mechanistic basis of this temporal hierarchy? What are its implications for perception and learning, especially in determining the boundaries between visual events? How does temporal integration relate to the processes supporting working memory and episodic memory? After reviewing the evidence around each of these questions, I will describe a computational model of hierarchical temporal processing in the human cerebral cortex. Finally, I will describe our tests of the model’s predictions for brain and behavior, in settings where humans perceive and learn nested temporal structure.

Modeling the temporal dynamics of high-level visual cortex

Speaker: Anthony Stigliani, Department of Psychology, Stanford University
Additional Authors: Brianna Jeska, Department of Psychology, Stanford University; Kalanit Grill-Spector, Department of Psychology, Stanford University

How is temporal information processed in high-level visual cortex? To address this question, we measured cortical responses with fMRI (N = 12) to time-varying stimuli across three experiments, using stimuli that were transient, sustained, or contained both transient and sustained stimulation, and that ranged in duration from 33 ms to 20 s. We then implemented a novel temporal encoding model to test how different temporal channels contribute to responses in high-level visual cortex. Unlike the standard linear model, which predicts responses directly from the stimulus, the encoding approach first predicts neural responses to the stimulus with fine temporal precision and then derives fMRI responses from these neural predictions. Results show that the encoding model not only explains responses to time-varying stimuli in face- and body-selective regions, but also reveals differential temporal processing across high-level visual cortex. That is, we discovered that temporal processing differs both across anatomical locations and across regions that process different domains. Specifically, face- and body-selective regions in lateral temporal cortex (LTC) are dominated by transient responses, whereas face- and body-selective regions in lateral occipital cortex (LOC) and ventral temporal cortex (VTC) exhibit both sustained and transient responses. Additionally, the contribution of transient channels in body-selective regions is higher than in neighboring face-selective regions. Together, these results suggest that domain-specific regions are organized in parallel processing streams with differential temporal characteristics and provide evidence that the human visual system contains a separate lateral processing stream that is attuned to changing aspects of the visual input.
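
To make the contrast between the standard linear model and the encoding approach concrete, here is a minimal sketch of a two-channel (sustained plus transient) encoding model that first builds a millisecond-resolution neural prediction and only then convolves it with a hemodynamic response function. All filter shapes and parameter values are placeholders, not the fitted model from these experiments:

```python
import numpy as np

def predict_bold(stimulus, dt=0.001, w_sustained=1.0, w_transient=1.0):
    """Two-temporal-channel encoding sketch (illustrative parameters only).
    stimulus: vector sampled every dt seconds; 1 while the stimulus is on."""
    sustained = stimulus.astype(float)                  # tracks stimulus presence
    transient = np.abs(np.diff(stimulus, prepend=0.0))  # responds at onsets/offsets
    neural = w_sustained * sustained + w_transient * transient

    # Crude gamma-shaped hemodynamic response function (placeholder HRF).
    t = np.arange(0.0, 30.0, dt)
    hrf = t ** 5 * np.exp(-t)
    hrf /= hrf.sum()
    return np.convolve(neural, hrf)[: len(neural)]

# Toy usage: a 2 s stimulus presented 1 s into a 20 s window.
dt = 0.001
stim = np.zeros(int(20 / dt))
stim[int(1 / dt): int(3 / dt)] = 1.0
bold = predict_bold(stim, dt)
```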

Dynamics of temporal summation in human visual cortex

Speaker: Jingyang Zhou, Department of Psychology, New York University
Additional Authors: Noah C. Benson, Psychology, New York University; Kendrick N. Kay, Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Twin Cities; Jonathan Winawer, Psychology and Center for Neural Science, New York University

Later visual areas become increasingly tolerant to variations in image properties such as object size, location, and viewpoint. This phenomenon is often modeled by a cascade of repeated processing stages in which each stage involves pooling followed by a compressive nonlinearity. One result of this sequence is that stimulus-referred measurements show increasingly large receptive fields and stronger normalization. Here, we apply a similar approach to the temporal domain. Using fMRI and intracranial potentials (ECoG), we develop a population receptive field (pRF) model for temporal sequences of visual stimulation. The model consists of linear summation followed by a time-varying divisive normalization. The same model accurately accounts for both the ECoG broadband time course and fMRI amplitudes. The model parameters reveal several regularities about temporal encoding in cortex. First, higher visual areas accumulate stimulus information over a longer time period than earlier areas, analogous to the hierarchically organized spatial receptive fields. Second, we found that all visual areas sum sub-linearly in time: e.g., the response to a long stimulus is less than the response to two successive brief stimuli. Third, the degree of compression increases in later visual areas, analogous to spatial vision. Finally, based on published data, we show that our model can account for the time course of single units in macaque V1 and multiunits in humans. This indicates that, for both space and time, cortex uses a similar processing strategy to achieve higher-level and increasingly invariant representations of the visual world.
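
A compact sketch of the kind of computation just described (linear temporal summation followed by a time-varying divisive normalization) is given below. The filter forms and parameters are illustrative assumptions rather than the published, fitted model:

```python
import numpy as np

def temporal_prf_response(stimulus, dt=0.001, tau=0.05, tau_norm=0.5,
                          sigma=0.1, n=2.0):
    """Temporal pRF sketch: linear summation of the stimulus through a temporal
    impulse response, divided by a slowly accumulating normalization pool
    (illustrative form and parameters only).
    stimulus: nonnegative contrast time course sampled every dt seconds."""
    t = np.arange(0.0, 1.0, dt)
    irf = t * np.exp(-t / tau)                     # linear temporal summation filter
    irf /= irf.sum()
    linear = np.convolve(stimulus, irf)[: len(stimulus)]

    pool_irf = np.exp(-t / tau_norm)               # slower, low-pass normalization pool
    pool_irf /= pool_irf.sum()
    pool = np.convolve(linear, pool_irf)[: len(stimulus)]

    return linear ** n / (sigma ** n + pool ** n)  # time-varying divisive normalization
```

With dynamics of this form, the normalization pool builds up while the stimulus stays on, so a single long pulse produces a compressed, sub-additive response, whereas two brief pulses separated by a gap do not, matching the qualitative behavior described above.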

When seeing becomes knowing: Memory in the form perception pathway

Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 2
Organizer(s): Caitlin Mullin, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Presenters: Wilma Bainbridge, Timothy Brady, Gabriel Kreiman, Nicole Rust, Morgan Barense, Nicholas Turk-Browne

Symposium Description

Classic accounts of how the brain sees and remembers largely describe vision and memory as distinct systems, in which information about the content of a scene is processed in the ventral visual stream (VVS) and our memories of scenes past are processed by independent structures in the medial temporal lobe (MTL). However, more recent work has begun to challenge this view by demonstrating interactions and dependencies between visual perception and memory at nearly every stage of the visual processing hierarchy. In this symposium, we will present a series of cutting-edge behavioural and neuroscience studies that showcase an array of cross-methodological approaches (psychophysics, fMRI, MEG, single-unit recording in monkeys, human ECoG) to establish that perception and memory are part of a shared, bidirectional, interactive network. Our symposium will begin with Caitlin Mullin providing an overview of the contemporary problems associated with the traditional memory/perception framework. Next, Wilma Bainbridge will describe the factors that give rise to image memorability. Tim Brady will follow with a description of how the limits of encoding affect visual memory storage and retrieval. Gabriel Kreiman will focus on how our brains interpret visual images that we have never encountered before by drawing on memory systems. Nicole Rust will present evidence that a VVS brain area implicated in visual object recognition, monkey IT cortex, also reflects visual memory signals that are well aligned with behavioral reports of remembering and forgetting. Morgan Barense will describe the transformation between the neural coding of low-level perceptual and high-level conceptual features in one brain area that lies within the MTL, perirhinal cortex. Finally, Nick Turk-Browne will describe the role of the hippocampus in generating expectations that work in a top-down manner to influence our perceptions. Our symposium will culminate with a discussion focused on how we can develop an integrative framework that provides a full account of the interactions between vision and memory, including extending state-of-the-art computational models of visual processing to also incorporate visual memory, as well as understanding how dysfunction in the interactions between vision and memory systems leads to memory disorders. The findings and resulting discussions presented in this symposium will be targeted broadly and will reveal important considerations for anyone, at any stage of their career (student, postdoc, or faculty), interested in the interactions between visual perception and memory.

Presentations

Memorability – predicting memory from visual information, and measuring visual information from memory

Speaker: Wilma Bainbridge, National Institute of Mental Health

While much of memory research focuses on the memory behavior of individual participants, little memory work has examined the visual attributes of the stimulus that influence future memory. However, in recent work, we have found that there are surprising consistencies in the images people remember and forget, and that the stimulus ultimately plays a large part in predicting later memory behavior. This consistency in performance can then be measured as a perceptual property of any stimulus, which we call memorability. Memorability can be easily measured for the stimuli of any experiment, and thus can be used to determine the degree to which previously found effects could be explained by the stimulus. I will present an example in which we find separate neural patterns sensitive to stimulus memorability and individual memory performance, through re-analyzing the data and stimuli from a previously published fMRI memory retrieval experiment (Rissman et al., 2010). I will also show how memorability can easily be taken into account when designing experiments to ask fundamental questions about memory, such as: are there differences between the types of images people can recognize versus the types of images people can recall? I will present ways for experimenters to easily measure or control for memorability in their own experiments, as well as some new ways to quantify the visual information existing within a memory.

The impact of perceptual encoding on subsequent visual memory

Speaker: Timothy Brady, University of California San Diego

Memory systems are traditionally associated with the end stages of the visual processing sequence: attending to a perceived object allows for object recognition; information about this recognized object is stored in working memory; and eventually this information is encoded into an abstract long-term memory representation. In this talk, I will argue that memories are not truly abstract from perception: perceptual distinctions persist in memory, and our memories are impacted by the perceptual processing that is used to create them. In particular, I will talk about evidence that suggests that both visual working memory and visual long-term memory are limited by the quality and nature of their perceptual encoding, both in terms of the precision of the memories that are formed and their structure.

Rapid learning of meaningful image interpretation

Speaker: Gabriel Kreiman, Harvard University

A single event of visual exposure to new information may be sufficient for interpreting and remembering an image. This rapid form of visual learning stands in stark contrast with modern state-of-the-art deep convolutional networks for vision. Such models thrive in object classification after supervised learning with a large number of training examples. The neural mechanisms subserving rapid visual learning remain largely unknown. I will discuss efforts towards unraveling the neural circuits involved in rapid learning of meaningful image interpretation in the human brain. We studied single-neuron responses in human epilepsy patients during instances of single-shot learning using Mooney images. Mooney images render objects in binary black and white in such a way that they can be difficult to recognize. After exposure to the corresponding grayscale image (and without any type of supervision), it becomes easier to recognize the objects in the original Mooney image. We will demonstrate a single-unit signature of rapid learning in the human medial temporal lobe and provide initial steps towards understanding the mechanisms by which top-down inputs can rapidly orchestrate plastic changes in neuronal circuitry.

Beyond identification: how your brain signals whether you’ve seen it before

Speaker: Nicole Rust, University of Pennsylvania

Our visual memory percepts of whether we have encountered specific objects or scenes before are hypothesized to manifest as decrements in neural responses in inferotemporal cortex (IT) with stimulus repetition. To evaluate this proposal, we recorded IT neural responses as two monkeys performed variants of a single-exposure visual memory task designed to measure the rates of forgetting with time and the robustness of visual memory to a stimulus parameter known to also impact IT firing rates, image contrast. We found that a strict interpretation of the repetition suppression hypothesis could not account for the monkeys’ behavior; however, a weighted linear read-out of the IT population response accurately predicted forgetting rates, reaction time patterns, individual differences in task performance, and contrast invariance. Additionally, the linear weights were largely of the same sign and consistent with repetition suppression. These results suggest that behaviorally relevant memory information is in fact reflected via repetition suppression in IT, but only within an IT subpopulation.
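
The weighted linear read-out referred to above can be sketched as a weighted sum of population firing rates compared against a criterion; the weights, rates, and criterion below are hypothetical placeholders rather than values fit to the recorded data:

```python
import numpy as np

def reports_familiar(population_response, weights, criterion):
    """Linear read-out of an IT population response: report 'seen before' when
    the weighted sum of firing rates falls below a criterion, consistent with
    repetition suppression (illustrative sketch only)."""
    return float(population_response @ weights) < criterion

# Toy usage: 100 neurons with same-sign (positive) read-out weights.
rng = np.random.default_rng(0)
weights = rng.uniform(0.5, 1.5, size=100)
novel = rng.poisson(10, size=100).astype(float)   # responses to a novel image
repeated = 0.8 * novel                            # suppressed responses on repetition
criterion = 0.9 * (novel @ weights)               # placed between the two conditions

print(reports_familiar(novel, weights, criterion))     # False -> judged novel
print(reports_familiar(repeated, weights, criterion))  # True  -> judged familiar
```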

Understanding what we see: Integration of memory and perception in the ventral visual stream

Speaker: Morgan Barense, University of Toronto

A central assumption in most modern theories of memory is that memory and perception are functionally and anatomically segregated. For example, amnesia resulting from medial temporal lobe (MTL) lesions is traditionally considered to be a selective deficit in long-term declarative memory with no effect on perceptual processes. The work I will present offers a new perspective that supports the notion that memory and perception are inextricably intertwined, relying on shared neural representations and computational mechanisms. Specifically, we addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in the perirhinal cortex of the MTL. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully-specified object concepts through the integration of their visual and conceptual features.

Hippocampal contributions to visual learning

Speaker: Nicholas Turk-Browne, Yale University

Although the hippocampus is usually viewed as a dedicated memory system, its placement at the top of, and strong interactions with, the ventral visual pathway (and other sensory systems) suggest that it may play a role in perception. My lab has recently suggested one potential perceptual function of the hippocampus — to learn about regularities in the environment and then to generate expectations based on these regularities that get reinstated in visual cortex to influence processing. I will talk about several of our studies using high-resolution fMRI and multivariate methods to characterize such learning and prediction.

2018 Symposia

Clinical insights into basic visual processes

Organizer(s): Paul Gamlin, University of Alabama at Birmingham; Ann E. Elsner, Indiana University; Ronald Gregg, University of Louisville
Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 1

This year’s biennial ARVO at VSS symposium features insights into human visual processing at the retinal and cortical level arising from clinical and translational research. The speakers will present recent work based on a wide range of state-of-the-art techniques including adaptive optics, brain and retinal imaging, psychophysics and gene therapy. More…

Vision and Visualization: Inspiring novel research directions in vision science

Organizer(s): Christie Nothelfer, Northwestern University; Madison Elliott, UBC; Zoya Bylinskii, MIT; Cindy Xiong, Northwestern University; Danielle Albers Szafir, University of Colorado Boulder
Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 2

Visualization research seeks design guidelines for efficient visual displays of data. Vision science topics, such as pattern recognition, salience, shape perception, and color perception, all map directly to challenges encountered in visualization, raising new vision science questions and creating a space ripe for collaboration. Four speakers representing both vision science and visualization will discuss recent cross-disciplinary research, closing with a panel discussion about how the vision science and visualization communities can mutually benefit from deeper integration. This symposium will demonstrate that contextualizing vision science research in visualization can expose novel gaps in our knowledge of how perception and attention work. More…

Prediction in perception and action

Organizer(s): Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany
Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 1

Prediction is an essential mechanism enabling humans to prepare for future events. This is especially important in a dynamically changing world, which requires rapid and accurate responses to external stimuli. While it is unquestionable that predictions play a fundamental role in perception and action, their underlying mechanisms and neural basis are still poorly understood. The goal of this symposium is to integrate recent findings from psychophysics, sensorimotor control, and electrophysiology to provide a novel and comprehensive view on predictive mechanisms in perception and action spanning from behavior to neurons and from strictly laboratory tasks to (virtual) real world scenarios. More…

When seeing becomes knowing: Memory in the form perception pathway

Organizer(s): Caitlin Mullin, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 2

The established view of perception and memory is that they are dissociable processes that recruit distinct brain structures, with visual perception focused on the ventral visual stream and memory subserved by independent deep structures in the medial temporal lobe. Recent work in cognitive neuroscience has challenged this traditional view by demonstrating interactions and dependencies between perception and memory at nearly every stage of the visual hierarchy. In this symposium, we will present a series of cutting edge studies that showcase cross-methodological approaches to describe how visual perception and memory interact as part of a shared, bidirectional, interactive network. More…

Visual remapping: From behavior to neurons through computation

Organizer(s): James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT & Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany
Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 1

In this symposium we will discuss the neural substrates responsible for maintaining stable visual and attentional representations during active vision. Speakers from three complementary experimental disciplines, psychophysics, neurophysiology and computational modeling, will discuss recent advances in clarifying the role of spatial receptive field “remapping” in stabilizing sensory representations across saccadic eye movements. Participants will address new experimental and theoretical methods for characterizing spatiotemporal dynamics of visual and attentional remapping, both behavioral and physiological, during active vision, and relate these data to recent computational efforts towards modeling oculomotor and visual system interactions. More…

Advances in temporal models of human visual cortex

Organizer(s): Jonathan Winawer, Department of Psychology and Center for Neural Science, New York University. New York, NY
Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 2

How do multiple areas in the human visual cortex encode information distributed over time? We focus on recent advances in modeling the temporal dynamics in the human brain: First, cortical areas have been found to be organized in a temporal hierarchy, with increasingly long temporal windows from earlier to later visual areas. Second, responses in multiple areas can be accurately predicted with temporal population receptive field models. Third, quantitative models have been developed to predict how responses in different visual areas are affected by both the timing and content of the stimulus history (adaptation). More…

Bruce Bridgeman Memorial Symposium

Friday, May 19, 2017, 9:00 – 11:30 am, Pavilion

Organizer: Susana Martinez-Conde, State University of New York

Speakers: Stephen L. Macknik, Stanley A. Klein, Susana Martinez-Conde, Paul Dassonville, Cathy Reed, and Laura Thomas

Professor Emeritus of Psychology Bruce Bridgeman was tragically killed on July 10, 2016, after being struck by a bus in Taipei, Taiwan. Those who knew Bruce will remember him for his sharp intellect, genuine sense of humor, intellectual curiosity, thoughtful mentorship, gentle personality, musical talent, and committed peace, social justice, and environmental activism. This symposium will highlight some of Bruce’s many important contributions to perception and cognition, which included spatial vision, perception/action interactions, and the functions and neural basis of consciousness.

Please also visit the Bruce Bridgeman Tribute website.

A Small Piece of Bruce’s Legacy

Stephen L. Macknik,  State University of New York

Consciousness and Cognition

Stanley A. Klein, UC Berkeley

Bruce Bridgeman’s Pioneering Work on Microsaccades

Susana Martinez-Conde, State University of New York

The Induced Roelofs Effect in Multisensory Perception and Action

Paul Dassonville, University of Oregon

Anything I Could Do Bruce Could Do Better

Cathy Reed, Claremont Mckenna College

A Legacy of Action

Laura Thomas, North Dakota State University

In the Fondest Memory of Bosco Tjan (Memorial Symposium)

Friday, May 19, 2017, 9:00 – 11:30 am, Talk Room 2

Organizers: Zhong-lin Lu, The Ohio State University and Susana Chung, University of California, Berkeley

Speakers: Zhong-lin Lu, Gordon Legge, Irving Biederman, Anirvan Nandy, Rachel Millin, Zili Liu, and Susana Chung

Bosco Tjan: An ideal scientific role model

Zhong-Lin Lu, The Ohio State University

Professor Bosco S. Tjan was murdered at the pinnacle of a flourishing academic career on December 2, 2016. The vision science and cognitive neuroscience community lost a brilliant scientist and incisive commentator. I will briefly introduce Bosco’s life and career, and his contributions to vision science and cognitive neuroscience.

Bosco Tjan: A Mentor’s Perspective on Ideal Observers and an Ideal Student

Gordon Legge, University of Minnesota

I will share my perspective on Bosco’s early history in vision science, focusing on his interest in the theoretical framework of ideal observers. I will discuss examples from his work on 3D object recognition, letter recognition and reading.

Bosco Tjan: The Contributions to Our Understanding of Higher Level Vision Made by an Engineer in Psychologist’s Clothing

Irving Biederman, University of Southern California

Bosco maintained a long-standing interest in shape recognition. In an extensive series of collaborations, he provided invaluable input and guidance to research: a) assessing the nature of the representation of faces, b) applying ideal observer and reverse correlation methodologies to understanding face recognition, c) exploring what the defining operations for the localization of LOC, the region critical for shape recognition, were actually reflecting, and d) making key contributions to the design and functioning of USC’s Dornsife Imaging Center for Cognitive Neuroscience.

Bosco Tjan: A Beautiful Mind

Anirvan Nandy, Salk Institute for Biological Studies

Bosco was fascinated with the phenomenon of visual crowding – our striking inability to recognize objects in clutter, especially in the peripheral visual field. Bosco realized that the study of crowding provides a unique window into the study of object recognition, since crowding represents a “natural breakdown” of the object recognition system that we otherwise take for granted. I will talk about a parsimonious theory that Bosco and I proposed, which aims to unify several disparate aspects of crowding within a common framework.

Bosco’s insightful approach to fMRI

Rachel Millin, University of Washington

Bosco was both a brilliant vision scientist and a creative methodologist. Through his work using fMRI to study visual processing, he became interested in how we could apply our limited understanding of the fMRI signal to better understand our experimental results. I will discuss a model that Bosco and I developed to simulate fMRI in V1, which aims to distinguish neural from non-neural contributions to fMRI results in studies of visual perception.

BOLD-o-metric Function in Motion Discrimination

Zili Liu, UCLA

We investigated fMRI BOLD responses in random-dot motion direction discrimination, in both event-related and blocked designs. Behaviorally, we obtained the expected psychometric functions as the angular difference between the motion direction and reference direction was systematically varied. Surprisingly, however, we found little BOLD modulation in the visual cortex as the task demand varied. (In collaboration with Bosco Tjan, Ren Na, Taiyong Bi, and Fang Fang)

Bosco Tjan: The Translator

Susana Chung, University of California, Berkeley

Bosco was not a clinician, yet he had a strong interest in translating his knowledge and skills in basic science to issues relating to people with impaired vision. I will present some of my collaborative work with Bosco that has shed light on how the brain adapts to vision loss in patients with macular disease.

VSS@ARVO 2010

Understanding the Functional Mechanisms of Visual Performance

Time/Room: Wednesday, May 5, 2010, 12:00 – 1:30 pm, Broward County Convention Center, Fort Lauderdale, FL
Organizers: David R. Williams, Wilson S. Geisler
Speakers: David H. Brainard, Martin S. Banks, David J. Heeger

Every year, VSS and ARVO collaborate in a symposium – VSS at ARVO or ARVO at VSS – designed to highlight and present work from one society at the annual meeting of the other. This year’s symposium is at ARVO.

In recent years, considerable progress has been made in understanding the functional mechanisms underlying human visual performance. This progress has been achieved by attacking the same questions from different directions using a variety of rigorous approaches, including careful psychophysics, functional imaging, computational analysis, analysis of natural tasks and natural scene statistics, and the development of theories of optimal Bayesian performance. This symposium highlights some of the exciting recent progress that has been made by combining two or more of these approaches in addressing fundamental issues in color coding, distance coding and object recognition.

VSS@ARVO 2014

Cortical influences on eye movements, integrating work from human observers and non-human primates

Time/Room: Sunday, May 4, 2014, 1:30 – 3:00 pm
Organizers: Tony Norcia, Stanford University and Susana Chung, UC Berkeley
Speakers: Jeff Schall, Eileen Kowler, Bosco Tjan

The mechanisms responsible for guiding and controlling gaze shifts.

Speaker: Jeff Schall, Department of Psychology, Vanderbilt University

This presentation will survey the mechanisms responsible for guiding and controlling gaze shifts. Computational models provide a framework through which to understand how distinct populations of neurons select targets for gaze shifts, control the initiation of saccades and monitor the outcome of gaze behavior. Alternative computational models are evaluated based on fits to performance of macaque monkeys and humans guiding and controlling saccades during visual search and stopping tasks. The dynamics of model components are evaluated in relation to neurophysiological data collected from the frontal lobe and midbrain of macaque monkeys performing visual search and stopping tasks. The insights gained provide guidance on possible diagnosis and treatment of high level gaze disorders.

The role of prediction and expectations in the planning of smooth pursuit and saccadic eye movements.

Speaker: Eileen Kowler, Department of Psychology, Rutgers University

Eye movements – saccades or smooth pursuit – ensure that the line of sight remains near objects of interest, thus establishing the retinal conditions that support high quality vision. Effective control of eye movements relies on more than the analysis of sensory signals.  Eye movements must also be sensitive to high-level decisions about which regions of the environment deserve immediate attention and visual analysis.  One important high level signal that contributes to effective eye movements is the ability to generate predictions.  For example:  Anticipatory smooth pursuit eye movements in the direction of upcoming future target motion are elicited by symbolic cues that disclose the future path of moving targets, as well as (for self-moved targets) signals that represent our own motor plans.  These responses are automatic and require no learning or effort.  Anticipatory behavior is also seen in saccades, where subtle adjustments in fixation time are made on the basis of the expected difficulty of the visual discrimination.  By taking advantage of our ability to interpret the environment and monitor our own cognitive states, predictive eye movements serve a vital role in natural oculomotor behavior.  They reduce sensorimotor delays, reduce the load attached to processing sensory input, and allow a pattern of efficient decision-making that frees central resources for higher level aspects of the task.

Gaze Control without a Fovea

Speaker: Bosco Tjan

Form vision is an active process. With normal foveal vision, the oculomotor system continually brings targets of interest onto the fovea with saccadic eye movements. The loss of foveal vision means that these foveating saccades become counterproductive. Central field loss (CFL) patients often develop a preferred retinal locus (PRL) in their periphery for fixation (Crossland et al., 2005). This adjustment appears idiosyncratic and lengthy. Neither the time course of this adjustment nor the determining factors for the eventual location of a PRL are well understood. This is because it is nearly impossible to infer the conditions prior to the onset of CFL for any individual patient or to track a patient from CFL onset. To make progress, we studied PRL development in normally sighted individuals. We used a gaze-contingent display to simulate a visible circular central scotoma 5° or 6° in radius in two experiments. In one experiment, subjects were told to “look at” an object as it was randomly repositioned against a uniform background. This object was the target for a visual-search trial immediately following this observation period. In the other experiment, a different group of subjects used eye movements to control a highlighted ring, which marked the edge of the simulated scotoma, to make contact with a small target disc, which was randomly placed on the screen in each trial. In both experiments, a PRL emerged spontaneously within a few hours of experiment time (spread out over several days). Saccades were also re-referenced to the PRL, but at a slower rate. We found that the developed PRL was retained over weeks without additional practice. Furthermore, the PRL stayed at the same retinal location when tested with a different task or when using an invisible simulated scotoma. Losing the fovea replaces a unique locus on the retina with a set of equally probable peripheral loci. Rather than selecting the optimal retinal locus for every saccade, the oculomotor system opts for a minimal change in its control strategy by adopting a single retinal locus for all saccades. This leads to a speedy adjustment and refinement of the controller. The quality of the error signals (invisible natural scotoma vs. visible simulated scotoma) may explain why CFL patients appear to take much longer to develop a PRL than our normally sighted subjects.

VSS@ARVO 2012

Visual Rehabilitation

Time: Wednesday, May 9, 2012, 12:00 – 1:30 pm, Room 315 (Fort Lauderdale Convention Center)
Chair: Pascal Mamassian, University of Glasgow
Speakers:
Dennis Levi, School of Optometry, University of California, Berkeley
Krystel R. Huxlin, Flaum Eye Institute, University of Rochester
Arash Sahraie, College of Life Sciences and Medicine, University of Aberdeen

Every year, VSS and ARVO collaborate in a symposium – VSS at ARVO or ARVO at VSS – designed to highlight and present work from one society at the annual meeting of the other. This year’s symposium is at ARVO.

Experience-dependent plasticity is closely linked with the development of sensory function. However, there is also growing evidence for plasticity in the adult visual system. This symposium re-examines the notions of critical period and sensitive period for a variety of visual functions. One critical issue is the extent to which alternative neural structures are recruited to restore these visual functions. Recent experimental and clinical evidence will be discussed for the rehabilitation of amblyopia and blindsight.

VSS@ARVO 2017

Functional Brain Imaging in Development and Disorder

Tuesday, May 9, 1:00 – 2:30 pm at ARVO 2017, Baltimore, Maryland
Presenters: Geoffrey K. Aguirre, Jan Atkinson, Tessa M. Dekker, Deborah Giaschi

This symposium will feature four talks that apply functional brain imaging to the study of both visual development and visual disorders. Functional brain imaging, primarily fMRI, enables non-invasive and quantitative assessment of neural function in the human brain. The four talks in the symposium will cover topics that include the reorganization of visual cortex in blindness, studies of cortical response in children with amblyopia, the normal development of population receptive fields in visual cortex, and the effect of early cortical damage on visual development.

Post-retinal structure and function in human blindness

Speaker: Geoffrey K. Aguirre, Department of Neurology, University of Pennsylvania

Neuroimaging the typical and atypical developing visual brain: dorsal vulnerability and cerebral visual impairment

Speaker: Jan Atkinson, PhD, FMedSci, Acad. Europaea, FBA; Emeritus Professor of Psychology and Developmental Cognitive Neuroscience, University College London; Visiting Professor, University of Oxford

Development of retinotopic representations in visual cortex during childhood

Speaker: Tessa M. Dekker, Division of Psychology and Language Sciences & Institute of Ophthalmology, University College London

Neural correlates of motion perception deficits in amblyopia

Speaker: Deborah Giaschi, Department of Ophthalmology and Visual Science, University of British Columbia
