ARVO@VSS 2018

Clinical insights into basic visual processes

Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 1
Organizer(s): Paul Gamlin, University of Alabama at Birmingham; Ann E. Elsner, Indiana University; Ronald Gregg, University of Louisville
Presenters: Geunyoung Yoon, Artur Cideciyan, Ione Fine, MiYoung Kwon

Symposium Description

This year’s biennial ARVO at VSS symposium features insights into human visual processing at the retinal and cortical level arising from clinical and translational research. The speakers will present recent work based on a wide range of state-of-the-art techniques including adaptive optics, brain and retinal imaging, psychophysics, and gene therapy.

Presentations

Neural mechanisms of long-term adaptation to the eye’s habitual aberration

Speaker: Geunyoung Yoon, Flaum Eye Institute, Center for Visual Science, The Institute of Optics, University of Rochester

Understanding the limits of human vision requires fundamental insights into both optical and neural factors in vision. Although the eye’s optics are far from perfect, the contributions of these optical factors to neural processing are largely underappreciated. Specifically, how neural processing of images formed on the retina is altered by long-term visual experience with habitual optical blur has remained unexplored. With technological advances in an adaptive optics vision simulator, it is now possible to manipulate ocular optics precisely. I will highlight our recent investigations into the underlying mechanisms of long-term neural adaptation to the optics of the eye and its impact on spatial vision in the normally developed adult visual system.

Human Melanopic Circuit in Isolation from Photoreceptor Input: Light Sensitivity and Temporal Profile

Speaker: Artur Cideciyan, Scheie Eye Institute, Perelman School of Medicine, University of Pennsylvania

Leber congenital amaurosis refers to a group of severe early-onset inherited retinopathies. There are more than 20 causative genes with varied pathophysiological mechanisms resulting in vision loss at the level of the photoreceptors. Some eyes retain near-normal photoreceptor and inner retinal structure despite the severe retina-wide loss of photoreceptor function. High-luminance stimuli allow recording of pupillary responses driven directly by melanopsin-expressing intrinsically photosensitive retinal ganglion cells. Analyses of these pupillary responses help clarify the fidelity of transmission of light signals from the retina to the brain for patients with no light perception undergoing early-phase clinical treatment trials. In addition, these responses serve to define the sensitivity and temporal profile of the human melanopic circuit in isolation from photoreceptor input.

Vision in the blind

Speaker: Ione Fine, Department of Psychology, University of Washington

Individuals who are blind early in life show cross-modal plasticity – responses to auditory and tactile stimuli within regions of occipital cortex that are purely visual in the normally sighted. If vision is restored later in life, as occurs in a small number of sight recovery individuals, this cross-modal plasticity persists, even while some visual responsiveness is regained. Here I describe the relationship between cross-modal responses and persisting residual vision. Our results suggest the intriguing possibility that the dramatic changes in function that are observed as a result of early blindness are implemented in the absence of major changes in neuroanatomy at either the micro or macro scale: analogous to reformatting a Windows computer to Linux.

Impact of retinal ganglion cell loss on human pattern recognition

Speaker: MiYoung Kwon, Department of Ophthalmology, University of Alabama at Birmingham

Human pattern detection and recognition require integrating visual information across space. In the human visual system, the retinal ganglion cells (RGCs) are the output neurons of the retina, and human pattern recognition is built from the neural representation of the RGCs. Here I will present our recent work demonstrating how a loss of RGCs, due to either normal aging or pathological conditions such as glaucoma, undermines pattern recognition and alters spatial integration properties. I will further highlight the role of the RGCs in determining the spatial extent over which visual inputs are combined. Our findings suggest that understanding the structural and functional integrity of RGCs would help not only to better characterize the visual deficits associated with eye disorders, but also to understand the front-end sensory requirements for human pattern recognition.

Prediction in perception and action

Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany
Presenters: Mary Hayhoe, Miriam Spering, Cristina de la Malla, Katja Fiehler, Kathleen Cullen

Symposium Description

Prediction is an essential mechanism enabling humans to prepare for future events. This is especially important in a dynamically changing world, which requires rapid and accurate responses to external stimuli. Predictive mechanisms work on different time scales and at various information processing stages. They allow us to anticipate the future state both of the environment and of ourselves. They are instrumental in compensating for noise and delays in the transmission of neural signals and allow us to distinguish external events from the sensory consequences of our own actions. While it is unquestionable that predictions play a fundamental role in perception and action, their underlying mechanisms and neural basis are still poorly understood. The goal of this symposium is to integrate recent findings from psychophysics, sensorimotor control, and electrophysiology to update our current understanding of predictive mechanisms in different sensory and motor systems. It brings together a group of leading scientists at different stages of their careers who have all made important contributions to this topic. Two prime examples of predictive processes are considered: prediction when interacting with moving stimuli and prediction during self-generated movements. The first two talks, from Hayhoe and Spering, will focus on the oculomotor system, which provides an excellent model for examining predictive behavior. They will show that smooth pursuit and saccadic eye movements contribute significantly to successful predictions of future visual events. Moreover, Hayhoe will provide examples of recent advances in the use of virtual reality (VR) techniques to study predictive eye movements in more naturalistic situations with unrestrained head and body movements. De la Malla will extend these findings to the hand movement system by examining interceptive manual movements. She will conclude that predictions are continuously updated and combined with online visual information to optimize behavior. The last two talks, from Fiehler and Cullen, will take a different perspective by considering predictions during self-generated movements. Such predictive mechanisms have been associated with a forward model that predicts the sensory consequences of our own actions and cancels the respective sensory reafferences. Fiehler will focus on such cancellation mechanisms and present recent findings on tactile suppression during hand movements. Based on electrophysiological studies of self-motion in monkeys, Cullen will finally answer where and how the brain compares expected and actual sensory feedback. In sum, this symposium targets the general VSS audience and aims to provide a novel and comprehensive view of predictive mechanisms in perception and action, spanning from behavior to neurons and from strictly laboratory tasks to (virtual) real-world scenarios.

Presentations

Predictive eye movements in natural vision

Speaker: Mary Hayhoe, Center for Perceptual Systems, University of Texas Austin, USA

Natural behavior can be described as a sequence of sensory-motor decisions that serve behavioral goals. To make action decisions, the visual system must estimate the current world state. However, sensory-motor delays present a problem to a reactive organism in a dynamically changing environment. Consequently, it is advantageous to predict future state as well. This requires some kind of experience-based model of how the current state is likely to change over time. It is commonly accepted that the proprioceptive consequences of a planned movement are predicted ahead of time using stored internal models of the body’s dynamics. It is also commonly assumed that prediction is a fundamental aspect of visual perception, but the existence of visual prediction and the particular mechanisms underlying such prediction are unclear. Some of the best evidence for prediction in vision comes from the oculomotor system. In this case, both smooth pursuit and saccadic eye movements reveal prediction of the future visual stimulus. I will review evidence for prediction in interception actions in both real and virtual environments. Subjects make accurate predictions of visual target motion, even when targets follow trajectories determined by the complex dynamics of physical interactions, and the head and body are unrestrained. These predictions appear to be used in common by both eye and arm movements. Predictive eye movements reveal that the observer’s best guess at the future state of the environment is based on image data in combination with representations that reflect learnt statistical properties of dynamic visual environments.

Smooth pursuit eye movements as a model of visual prediction

Speaker: Miriam Spering, Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada

Real-world movements, ranging from intercepting prey to hitting a ball, require rapid prediction of an object’s trajectory from a brief glance at its motion. The decision whether, when and where to intercept is based on the integration of current visual evidence, such as the perception of a ball’s direction, spin and speed. However, perception and decision-making are also strongly influenced by past sensory experience. We use smooth pursuit eye movements as a model system to investigate how the brain integrates sensory evidence with past experience. This type of eye movement provides a continuous read-out of information processing while humans look at a moving object and make decisions about whether and how to interact with it. I will present results from two different series of studies: the first utilizes anticipatory pursuit as a means to understand the temporal dynamics of prediction, and probes the modulatory role of expectations based on past experience. The other reveals the benefit of smooth pursuit itself, in tasks that require the prediction of object trajectories for perceptual estimation and manual interception. I will conclude that pursuit is both an excellent model system for prediction, and an important contributor to successful prediction of object motion.

Prediction in interceptive hand movements

Speaker: Cristina de la Malla, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands

Intercepting a moving target requires spatial and temporal precision: the target and the hand need to be at the same position at the same time. Since both the target and the hand move, we cannot simply aim for the target’s current position, but need to predict where the target will be by the time we reach it. We normally track targets continuously with our gaze, unless the characteristics of the task or of the target make it impossible to do so. In that case, we make saccades and direct our movements towards specific locations where we predict the target will be in the future. If the precise location at which one is to hit the target only becomes evident as the target approaches the interception area, the gaze, head, and hand movements towards this area are delayed because the target’s future position cannot be predicted in advance. Predictions are continuously updated and combined with online visual information to optimize our actions: the less predictable the target’s motion, the more we have to rely on online visual information to guide our hand to intercept it. Updating predictions with online information allows us to correct for any mismatch between the predicted target position and the hand position during an ongoing movement, but any perceptual error that is still present at the last moment at which we can update our prediction will result in an equivalent interception error.

Somatosensory predictions in reaching

Speaker: Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany

Movement planning and execution lead to changes in somatosensory perception. For example, tactile stimuli on a moving limb are typically perceived as weaker and later in time than stimuli on a resting limb. This phenomenon is termed tactile suppression and has been linked to a forward-model mechanism that predicts the sensory consequences of the self-generated action and, as a result, discounts the respective sensory reafferences. As tactile suppression is also evident in passive hand movements, both predictive and postdictive mechanisms may be involved. However, its functional role is still largely unknown. It has been proposed that tactile suppression prevents sensory overload due to the large amount of afferent information generated during movement and therefore facilitates processing of external sensory events. However, if tactile feedback from the moving limb is needed to gain information, e.g. at the fingers involved in grasping, tactile sensitivity is less strongly reduced. In the talk, I will present recent results from a series of psychophysical experiments showing that tactile sensitivity is dynamically modulated during the course of the reaching movement, depending on the reach goal and the predicted movement consequences. These results provide the first evidence that tactile suppression may indeed free capacities to process other, movement-relevant somatosensory signals. Moreover, the observed perceptual changes were associated with adjustments in the motor system, suggesting a close coupling of predictive mechanisms in perception and action.

Prediction during self-motion: the primate cerebellum selectively encodes unexpected vestibular information

Speaker: Kathleen Cullen, Department of Physiology, McGill University, Montréal, Québec, Canada

A prevailing view is that the cerebellum is the site of a forward model that predicts the expected sensory consequences of self-generated action. Changes in the motor apparatus and/or environment will cause a mismatch between the cerebellum’s prediction and the actual resulting sensory stimulation. This mismatch – the ‘sensory prediction error’ – is thought to be vital for updating both the forward model and the motor program during motor learning to ensure that sensory-motor pathways remain calibrated. However, where and how the brain compares expected and actual sensory feedback was unknown. In this talk, I will first review experiments that focused on a relatively simple sensory-motor pathway with a well-described organization to gain insight into the computations that drive motor learning. Specifically, the most medial of the deep cerebellar nuclei (the rostral fastigial nucleus) constitutes a major output target of the cerebellar cortex and in turn sends strong projections to the vestibular nuclei, reticular formation, and spinal cord to generate reflexes that ensure accurate posture and balance. Trial-by-trial analysis of these neurons in a motor learning task revealed the output of a computation in which the brain selectively encodes unexpected self-motion (vestibular information). This selectivity enables both (i) the rapid suppression of descending reflexive commands during voluntary movements and (ii) the rapid updating of motor programs in the face of changes to either the motor apparatus or the external environment. I will then consider the implications of these findings in the context of our recent work on the thalamo-cortical processing of vestibular information.

Advances in temporal models of human visual cortex

Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 2
Organizer(s): Jonathan Winawer, Department of Psychology and Center for Neural Science, New York University, New York, NY
Presenters: Geoffrey K. Aguirre, Christopher J. Honey, Anthony Stigliani, Jingyang Zhou

Symposium Description

The nervous system extracts meaning from the distribution of light over space and time. Spatial vision has been a highly successful research area, and the spatial receptive field has served as a fundamental and unifying concept that spans perception, computation, and physiology. While there has also been considerable interest in temporal vision, the temporal domain has lagged behind the spatial domain in terms of quantitative models of how signals are transformed across the visual hierarchy (with the notable exception of motion processing). In this symposium, we address the question of how multiple areas in human visual cortex encode information distributed over time. In recent years, several groups have made important contributions to measuring and modeling temporal processing in human visual cortex. Some of this work shows parallels with spatial vision. For example, one important development has been the notion of a cortical hierarchy of increasingly long temporal windows, paralleling the hierarchy of spatial receptive fields (Hasson et al., 2009; Honey et al., 2012; Murray et al., 2014). A second type of study, from Geoff Aguirre’s lab, has combined the tradition of repetition suppression (Grill-Spector et al., 1999) with the notion of multiple time scales across the visual pathways to develop a computational model of how sequential stimuli are encoded in multiple visual areas (Mattar et al., 2016). Finally, several groups, including the Grill-Spector and Winawer labs, have extended the tools of population receptive field models from the spatial to the temporal domain, building models that predict how multiple cortical areas respond to arbitrary temporal sequences of visual stimulation (Horiguchi et al., 2009; Stigliani and Grill-Spector, 2017; Zhou et al., 2017). Across the groups, there have been some common findings, such as the general tendency toward longer periods of temporal interactions in later visual areas. However, there are also a number of challenges in considering these recent developments together. For example, can (and should) we expect the same kinds of theories and models to account for temporal interactions both in early visual areas at the time scale of tens of milliseconds and in later visual areas at the time scale of seconds or minutes? How do the temporal properties of visual areas depend on spatial aspects of the stimuli? Should we expect principles of spatial computation, such as hierarchical pooling and normalization, to transfer analogously to the temporal domain? To what extent do temporal effects depend on task? Can temporal models at the scale of large neuronal populations (functional MRI, intracranial EEG) be explained in terms of the behavior of single neurons, and should this be a goal? Through this symposium, we aim to present an integrated view of the recent literature on temporal modeling of visual cortex, with each presenter both summarizing a recent topic and answering a common set of questions. The common questions posed to each presenter will be used to assess both the progress and the limits of recent work, with the goal of crystallizing where the field might go next in this important area.

Presentations

Variation in Temporal Stimulus Integration Across Visual Cortex

Speaker: Geoffrey K. Aguirre, Department of Neurology, Perelman School of Medicine, University of Pennsylvania
Additional Authors: Marcelo G. Mattar, Princeton Neuroscience Institute, Princeton University; David A. Kahn, Department of Neuroscience, University of Pennsylvania; Sharon L. Thompson-Schill, Department of Psychology, University of Pennsylvania

The perception of an object is shaped by the long-term average of experience as well as by immediate, comparative context. Measurements of brain activity have demonstrated corresponding neural mechanisms, including norm-based responses reflective of stored prototype representations, and adaptation induced by the immediately preceding stimulus. Our recent work examines the timescale of integration of sensory information, and explicitly tests the idea that the apparently separate phenomena of norm-based coding and adaptation can arise from a single mechanism of sensory integration operating over varying timescales. We used functional MRI to measure neural responses from the fusiform gyrus while subjects observed a rapid stream of face stimuli. Neural activity at this cortical site was best explained by the integration of sensory experience over multiple sequential stimuli, following a decaying-exponential weighting function. While this neural activity could be mistaken for immediate neural adaptation or long-term, norm-based responses, it in fact reflected a timescale of integration intermediate to both. We then examined the timescale of sensory integration across the cortex. We found a gradient that ranged from rapid sensory integration in early visual areas to long-term, stable representations towards higher-level, ventral-temporal cortex. These findings were replicated with a new set of face stimuli and subjects. Our results suggest that a cascade of visual areas integrates sensory experience, transforming highly adaptable responses at early stages into stable representations at higher levels.
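
As a minimal illustration only (not the authors' implementation), the decaying-exponential weighting described above can be sketched in a few lines of Python; the inter-stimulus interval, the time constant, and the one-dimensional "face feature" values are hypothetical.

import numpy as np

def integrate_history(features, isi_s, tau_s):
    """Exponentially weighted average of the feature values of a stimulus
    stream presented at a fixed inter-stimulus interval (isi_s, seconds),
    with decay time constant tau_s (seconds); the last entry is the most
    recent stimulus."""
    elapsed = isi_s * np.arange(len(features) - 1, -1, -1)  # time since each stimulus
    weights = np.exp(-elapsed / tau_s)
    weights /= weights.sum()
    return float(np.dot(weights, features))

# A short tau behaves like adaptation to the immediately preceding stimulus;
# a very long tau approaches a stable, norm-like long-term average.
faces = np.array([0.2, 0.8, 0.5, 0.9, 0.4])   # hypothetical feature values
print(integrate_history(faces, isi_s=1.0, tau_s=3.0))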

Temporal Hierarchies in Human Cerebral Cortex

Speaker: Christopher J. Honey, Department of Psychological & Brain Sciences, Johns Hopkins University
Additional Authors: Hsiang-Yun Sherry Chien, Psychological and Brain Sciences, Johns Hopkins University; Kevin Himberger, Psychological and Brain Sciences, Johns Hopkins University

Our understanding of each moment of the visual world depends on the previous moment. We make use of temporal context to segregate objects, to accumulate visual evidence, to comprehend sequences of events, and to generate predictions. Temporal integration – the process of combining past and present information – appears not to be restricted to specialized subregions of the brain, but is widely distributed across the cerebral cortex. In addition, temporal integration processes appear to be systematically organized into a hierarchy, with gradually greater context dependence as one moves toward higher-order regions. What is the mechanistic basis of this temporal hierarchy? What are its implications for perception and learning, especially in determining the boundaries between visual events? How does temporal integration relate to the processes supporting working memory and episodic memory? After reviewing the evidence around each of these questions, I will describe a computational model of hierarchical temporal processing in the human cerebral cortex. Finally, I will describe our tests of the predictions of this model for brain and behavior, in settings where humans perceive and learn nested temporal structure.

Modeling the temporal dynamics of high-level visual cortex

Speaker: Anthony Stigliani, Department of Psychology, Stanford University
Additional Authors: Brianna Jeska, Department of Psychology, Stanford University; Kalanit Grill-Spector, Department of Psychology, Stanford University

How is temporal information processed in high-level visual cortex? To address this question, we measured cortical responses with fMRI (N = 12) to time-varying stimuli across three experiments, using stimuli that were either transient, sustained, or contained both transient and sustained stimulation, and that ranged in duration from 33 ms to 20 s. We then implemented a novel temporal encoding model to test how different temporal channels contribute to responses in high-level visual cortex. Unlike the standard linear model, which predicts responses directly from the stimulus, the encoding approach first predicts neural responses to the stimulus with fine temporal precision and then derives fMRI responses from these neural predictions. Results show that an encoding model not only explains responses to time-varying stimuli in face- and body-selective regions, but also reveals differential temporal processing across high-level visual cortex. That is, we discovered that temporal processing differs both across anatomical locations and across regions that process different domains. Specifically, face- and body-selective regions in lateral temporal cortex (LTC) are dominated by transient responses, whereas face- and body-selective regions in lateral occipital cortex (LOC) and ventral temporal cortex (VTC) exhibit both sustained and transient responses. Additionally, the contribution of transient channels in body-selective regions is higher than in neighboring face-selective regions. Together, these results suggest that domain-specific regions are organized in parallel processing streams with differential temporal characteristics and provide evidence that the human visual system contains a separate lateral processing stream that is attuned to changing aspects of the visual input.
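
To make the two-stage logic of such an encoding approach concrete, here is a minimal sketch (not the authors' code): a sustained channel that follows the stimulus time course and a transient channel that responds at onsets and offsets are combined at fine temporal resolution and then convolved with a hemodynamic response function. The channel definitions, the weights, and the gamma-shaped HRF below are simplifying assumptions.

import numpy as np

def predict_bold(stimulus, dt, w_sustained=1.0, w_transient=1.0):
    """stimulus: binary vector (1 = stimulus on) sampled every dt seconds."""
    sustained = stimulus.astype(float)                   # follows the stimulus time course
    transient = np.abs(np.diff(stimulus, prepend=0.0))   # responds at onsets and offsets
    neural = w_sustained * sustained + w_transient * transient
    t = np.arange(0, 30, dt)
    hrf = (t / 5.0) ** 2 * np.exp(-t / 5.0)              # simplified gamma-shaped HRF (assumption)
    hrf /= hrf.sum()
    return np.convolve(neural, hrf)[: len(neural)]       # predicted fMRI time course

dt = 0.033                               # ~33 ms temporal resolution
stim = np.zeros(1000)
stim[100:400] = 1                        # a single sustained presentation
bold = predict_bold(stim, dt)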

Dynamics of temporal summation in human visual cortex

Speaker: Jingyang Zhou, Department of Psychology, New York University
Additional Authors: Noah C. Benson, Psychology, New York University; Kendrick N. Kay, Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Twin Cities; Jonathan Winawer, Psychology and Center for Neural Science, New York University

Later visual areas become increasingly tolerant to variations in image properties such as object size, location, viewpoint, and so on. This phenomenon is often modeled by a cascade of repeated processing stages in which each stage involves pooling followed by a compressive nonlinearity. One result of this sequence is that stimulus-referred measurements show increasingly large receptive fields and stronger normalization. Here, we apply a similar approach to the temporal domain. Using fMRI and intracranial potentials (ECoG), we develop a population receptive field (pRF) model for temporal sequences of visual stimulation. The model consists of linear summation followed by a time-varying divisive normalization. The same model accurately accounts for both ECoG broadband time courses and fMRI amplitudes. The model parameters reveal several regularities of temporal encoding in cortex. First, higher visual areas accumulate stimulus information over a longer time period than earlier areas, analogous to the hierarchically organized spatial receptive fields. Second, we found that all visual areas sum sub-linearly in time: e.g., the response to a long stimulus is less than the response to two successive brief stimuli. Third, the degree of compression increases in later visual areas, analogous to spatial vision. Finally, based on published data, we show that our model can account for the time course of single units in macaque V1 and of multiunits in humans. This indicates that, for both space and time, cortex uses a similar processing strategy to achieve higher-level and increasingly invariant representations of the visual world.
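
As an illustrative sketch of one way such a model can be written (not the authors' exact formulation; the exponential impulse responses, exponent, and semi-saturation constant below are arbitrary assumptions), a linear summation stage can be followed by a divisive normalization whose pool is a delayed, low-pass-filtered copy of the linear response:

import numpy as np

def temporal_dn(stimulus, dt, tau=0.05, tau_norm=0.1, sigma=0.1, n=2.0):
    """Linear temporal summation followed by time-varying divisive normalization.
    stimulus: binary vector sampled every dt seconds."""
    t = np.arange(0, 1.0, dt)
    irf = np.exp(-t / tau)
    irf /= irf.sum()                                       # linear summation filter
    linear = np.convolve(stimulus, irf)[: len(stimulus)]
    pool_irf = np.exp(-t / tau_norm)
    pool_irf /= pool_irf.sum()
    pool = np.convolve(linear, pool_irf)[: len(stimulus)]  # delayed normalization pool
    return linear ** n / (sigma ** n + pool ** n)

dt = 0.001
one_long = np.zeros(1500)
one_long[0:200] = 1                       # one 200 ms pulse
two_brief = np.zeros(1500)
two_brief[0:100] = 1
two_brief[300:400] = 1                    # two 100 ms pulses separated by a gap
# Compare the summed responses to equal total stimulus durations presented
# as one long pulse versus two brief pulses.
print(temporal_dn(one_long, dt).sum(), temporal_dn(two_brief, dt).sum())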

2018 Symposia

Clinical insights into basic visual processes

Organizer(s): Paul Gamlin, University of Alabama at Birmingham; Ann E. Elsner, Indiana University; Ronald Gregg, University of Louisville
Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 1

This year’s biennial ARVO at VSS symposium features insights into human visual processing at the retinal and cortical level arising from clinical and translational research. The speakers will present recent work based on a wide range of state-of-the-art techniques including adaptive optics, brain and retinal imaging, psychophysics, and gene therapy. More…

Vision and Visualization: Inspiring novel research directions in vision science

Organizer(s): Christie Nothelfer, Northwestern University; Madison Elliott, UBC; Zoya Bylinskii, MIT; Cindy Xiong, Northwestern University; & Danielle Albers Szafir, University of Colorado Boulder
Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 2

Visualization research seeks design guidelines for efficient visual displays of data. Vision science topics, such as pattern recognition, salience, shape perception, and color perception, all map directly to challenges encountered in visualization, raising new vision science questions and creating a space ripe for collaboration. Four speakers representing both vision science and visualization will discuss recent cross-disciplinary research, closing with a panel discussion about how the vision science and visualization communities can mutually benefit from deeper integration. This symposium will demonstrate that contextualizing vision science research in visualization can expose novel gaps in our knowledge of how perception and attention work. More…

Prediction in perception and action

Organizer(s): Katja Fiehler, Department of Psychology and Sports Science, Giessen University, Giessen, Germany
Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 1

Prediction is an essential mechanism enabling humans to prepare for future events. This is especially important in a dynamically changing world, which requires rapid and accurate responses to external stimuli. While it is unquestionable that predictions play a fundamental role in perception and action, their underlying mechanisms and neural basis are still poorly understood. The goal of this symposium is to integrate recent findings from psychophysics, sensorimotor control, and electrophysiology to provide a novel and comprehensive view on predictive mechanisms in perception and action spanning from behavior to neurons and from strictly laboratory tasks to (virtual) real world scenarios. More…

When seeing becomes knowing: Memory in the form perception pathway

Organizer(s): Caitlin Mullin, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Time/Room: Friday, May 18, 2018, 2:30 – 4:30 pm, Talk Room 2

The established view of perception and memory is that they are dissociable processes that recruit distinct brain structures, with visual perception focused on the ventral visual stream and memory subserved by independent deep structures in the medial temporal lobe. Recent work in cognitive neuroscience has challenged this traditional view by demonstrating interactions and dependencies between perception and memory at nearly every stage of the visual hierarchy. In this symposium, we will present a series of cutting edge studies that showcase cross-methodological approaches to describe how visual perception and memory interact as part of a shared, bidirectional, interactive network. More…

Visual remapping: From behavior to neurons through computation

Organizer(s): James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT & Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany
Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 1

In this symposium we will discuss the neural substrates responsible for maintaining stable visual and attentional representations during active vision. Speakers from three complementary experimental disciplines, psychophysics, neurophysiology, and computational modeling, will discuss recent advances in clarifying the role of spatial receptive field “remapping” in stabilizing sensory representations across saccadic eye movements. Participants will address new experimental and theoretical methods for characterizing the spatiotemporal dynamics of visual and attentional remapping, both behavioral and physiological, during active vision, and relate these data to recent computational efforts towards modeling oculomotor and visual system interactions. More…

Advances in temporal models of human visual cortex

Organizer(s): Jonathan Winawer, Department of Psychology and Center for Neural Science, New York University, New York, NY
Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 2

How do multiple areas in the human visual cortex encode information distributed over time? We focus on recent advances in modeling the temporal dynamics in the human brain: First, cortical areas have been found to be organized in a temporal hierarchy, with increasingly long temporal windows from earlier to later visual areas. Second, responses in multiple areas can be accurately predicted with temporal population receptive field models. Third, quantitative models have been developed to predict how responses in different visual areas are affected by both the timing and content of the stimulus history (adaptation). More…

2018 Keynote – Kenneth C. Catania

Kenneth C. Catania

Stevenson Professor of Biological Sciences
Vanderbilt University
Department of Biological Sciences

More than meets the eye: the extraordinary brains and behaviors of specialized predators.

Saturday, May 19, 2018, 7:15 pm, Talk Room 1-2

Predator-prey interactions are high stakes for both participants and have resulted in the evolution of high-acuity senses and dramatic attack and escape behaviors.  I will describe the neurobiology and behavior of some extreme predators, including star-nosed moles, tentacled snakes, and electric eels.  Each species has evolved special senses and each provides unique perspectives on the evolution of brains and behavior.

Biography

A neuroscientist by training, Ken Catania has spent much of his career investigating the unusual brains and behaviors of specialized animals.  These have included star-nosed moles, tentacled snakes, water shrews, alligators, crocodiles, and most recently electric eels. His studies often focus on predators that have evolved special senses and weapons to find and overcome elusive prey.  He is considered an expert in extreme animal behaviors and studies specialized species to reveal general principles about brain organization and sensory systems. Catania was named a MacArthur Fellow in 2006, a Guggenheim Fellow in 2014, and in 2013 he received the Pradel Research Award in Neurosciences from the National Academy of Sciences.  Catania received a BS in zoology from the University of Maryland (1989), a Ph.D. (1994) in neurosciences from the University of California, San Diego, and is currently a Stevenson Professor of Biological Sciences at Vanderbilt University.
