2010 Keynote – Carla Shatz

Carla Shatz

Professor of Biology and Neurobiology; Director, Bio-X, Stanford University

Audio and slides from the 2010 Keynote Address are available on the Cambridge Research Systems website.

Releasing the Brake on Ocular Dominance Plasticity

Saturday, May 8, 2010, 7:45 pm, Royal Palm Ballroom 4-5

Connections in the adult visual system are highly precise, but they do not start out that way. Precision emerges during critical periods of development as synaptic connections remodel, a process requiring neural activity and involving the regression of some synapses and the strengthening and stabilization of others. Activity also regulates neuronal genes; in an unbiased PCR-based differential screen, we discovered unexpectedly that MHC Class I genes are expressed in neurons and are regulated by spontaneous activity and visual experience (Corriveau et al., 1998; Goddard et al., 2007). To assess requirements for MHCI in the CNS, we examined mice lacking expression of specific MHCI genes: synapse regression in the developing visual system did not occur, synaptic strengthening was greater than normal in adult hippocampus, and ocular dominance (OD) plasticity in visual cortex was enhanced (Huh et al., 2000; Datwani et al., 2009). We then searched for receptors that could interact with neuronal MHCI and carry out these activity-dependent processes. mRNA for PirB, an innate immune receptor, was found highly expressed in neurons in many regions of the mouse CNS. We generated mutant mice lacking PirB function and discovered that OD plasticity is also enhanced (Syken et al., 2006), as is hippocampal LTP. Thus, MHCI ligands signaling via the PirB receptor may function to “brake” activity-dependent synaptic plasticity. Together, these results imply that molecules thought previously to function only in the immune system may also act at neuronal synapses to limit how much, or perhaps how quickly, synapse strength changes in response to new experience. These molecules may be crucial for controlling circuit excitability and stability in the developing as well as the adult brain, and changes in their function may contribute to developmental disorders such as autism, dyslexia and even schizophrenia.

Supported by NIH Grants EY02858, MH071666, the Mathers Charitable Foundation and the Dana Foundation

Biography

Carla Shatz is professor of biology and neurobiology and director of Bio-X at Stanford University. Dr. Shatz’s research focuses on the development of the mammalian visual system, with the overall goal of better understanding critical periods of brain wiring, developmental disorders such as autism, dyslexia and schizophrenia, and how the nervous and immune systems interact. Dr. Shatz graduated from Radcliffe College in 1969 with a B.A. in Chemistry. She was honored with a Marshall Scholarship to study at University College London, where she received an M.Phil. in Physiology in 1971. In 1976, she received a Ph.D. in Neurobiology from Harvard Medical School, where she studied with Nobel Laureates David Hubel and Torsten Wiesel. During this period, she was appointed a Harvard Junior Fellow. From 1976 to 1978 she received postdoctoral training with Dr. Pasko Rakic in the Department of Neuroscience, Harvard Medical School. In 1978, Dr. Shatz moved to Stanford University, where she attained the rank of Professor of Neurobiology in 1989. In 1992, she moved her laboratory to the University of California, Berkeley, where she was Professor of Neurobiology and an Investigator of the Howard Hughes Medical Institute. In 2000, she assumed the Chair of the Department of Neurobiology at Harvard Medical School as the Nathan Marsh Pusey Professor of Neurobiology. Dr. Shatz received the Society for Neuroscience Young Investigator Award in 1985, the Silvio Conte Award from the National Foundation for Brain Research in 1993, the Charles A. Dana Award for Pioneering Achievement in Health and Education in 1995, the Alcon Award for Outstanding Contributions to Vision Research in 1997, the Bernard Sachs Award from the Child Neurology Society in 1999, the Weizmann Institute Women and Science Award in 2000, and the Gill Prize in Neuroscience in 2006. In 1992, she was elected to the American Academy of Arts and Sciences; in 1995, to the National Academy of Sciences; in 1997, to the American Philosophical Society; and in 1999, to the Institute of Medicine. In 2009 she received the Salpeter Lifetime Achievement Award from the Society for Neuroscience.

2011 Keynote – Daniel M. Wolpert

Daniel M. Wolpert

Professor of Engineering, University of Cambridge

Audio and slides from the 2011 Keynote Address are available on the Cambridge Research Systems website.

Probabilistic models of human sensorimotor control

Saturday, May 7, 2011, 7:00 – 8:15 pm, Royal Palm Ballroom 4-5

The effortless ease with which we move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. While computers can now beat grandmasters at chess, no computer can yet control a robot to manipulate a chess piece with the dexterity of a six-year-old child. I will review our recent work on how humans learn to make skilled movements, covering probabilistic models of learning (including Bayesian and structural learning), how the brain makes and uses motor predictions, and the interaction between decision making and sensorimotor control.
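To give a flavor of the Bayesian framework, the toy sketch below combines a Gaussian prior over a hidden movement perturbation with a single noisy visual observation, in the spirit of Körding & Wolpert (2004). This is an illustration, not code from the talk; all names and parameter values are assumptions. With Gaussian prior and likelihood, the optimal estimate is an inverse-variance weighted average of the prior mean and the observation, and it outperforms trusting the observation alone.

```python
import numpy as np

# Minimal sketch of Bayesian estimation in a sensorimotor task.
# A hidden lateral shift is drawn from a Gaussian prior; the actor sees one
# noisy observation and must estimate the shift. With Gaussian prior and
# likelihood, the posterior mean is an inverse-variance weighted average.
# All parameter values here are illustrative assumptions, not data.

rng = np.random.default_rng(0)

prior_mean, prior_sd = 1.0, 0.5   # prior over the shift (e.g., cm)
obs_sd = 1.0                      # sensory (visual) noise

true_shift = rng.normal(prior_mean, prior_sd, size=10_000)
observation = true_shift + rng.normal(0.0, obs_sd, size=true_shift.shape)

# Posterior mean: weights proportional to 1/variance.
w_prior = 1 / prior_sd**2
w_obs = 1 / obs_sd**2
posterior_mean = (w_prior * prior_mean + w_obs * observation) / (w_prior + w_obs)

# The Bayesian estimate beats using the raw observation alone.
mse_bayes = np.mean((posterior_mean - true_shift) ** 2)
mse_obs = np.mean((observation - true_shift) ** 2)
print(f"MSE (Bayesian): {mse_bayes:.3f}  MSE (observation only): {mse_obs:.3f}")
```

The same reliability weighting captures the empirical finding that people lean more on vision when visual feedback is crisp and more on their prior expectations when it is degraded.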

Biography

Daniel Wolpert is Professor of Engineering at the University of Cambridge and a Fellow of Trinity College. Daniel’s research focuses on computational and experimental approaches to human sensorimotor control. Daniel read medical sciences at Cambridge and clinical medicine at Oxford. After working as a medical doctor for a year, he completed a D.Phil. in the Physiology Department in Oxford. He then worked as a postdoctoral fellow and Fulbright Scholar at MIT, before moving to the Institute of Neurology, UCL. In 2005 he took up his current post in Cambridge. He was elected a Fellow of the Academy of Medical Sciences in 2004, was awarded the Royal Society Francis Crick Prize Lecture in 2005, and gave the Fred Kavli Distinguished International Scientist Lecture at the Society for Neuroscience in 2009. Further details can be found at www.wolpertlab.com.

2012 Keynote – Ranulfo Romo

Ranulfo Romo, M.D., D.Sc.

Professor of Neuroscience at the Institute of Cellular Physiology, National Autonomous University of Mexico (UNAM)

Audio and slides from the 2012 Keynote Address are available on the Cambridge Research Systems website.

Conversion of sensory signals into perceptual decisions

Saturday, May 12, 2012, 7:00 pm, Royal Ballroom 4-5

Most perceptual tasks require sequential steps to be carried out. This must be the case, for example, when subjects discriminate the difference in frequency between two mechanical vibrations applied sequentially to their fingertips. This perceptual task can be understood as a chain of neural operations: encoding the two consecutive stimulus frequencies, maintaining the first stimulus in working memory, comparing the second stimulus to the memory trace left by the first stimulus, and communicating the result of the comparison to the motor apparatus. Where and how in the brain are these cognitive operations executed? We addressed this problem by recording single neurons from several cortical areas while trained monkeys executed the vibrotactile discrimination task. We found that primary somatosensory cortex (S1) drives higher cortical areas where past and current sensory information are combined, such that a comparison of the two evolves into a decision. Consistent with this result, direct activation of the S1 can trigger quantifiable percepts in this task. These findings provide a fairly complete panorama of the neural dynamics that underlies the transformation of sensory information into an action and emphasize the importance of studying multiple cortical areas during the same behavioral task.
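One way to make this chain of operations concrete is a toy observer model (my illustration, not Romo’s analysis; all noise levels are assumptions): f1 is encoded, held in working memory where the trace accumulates noise, and then compared against the incoming f2, so that accuracy falls as the frequency difference shrinks.

```python
import numpy as np

# Toy observer for the vibrotactile discrimination task: encode f1, hold it
# in working memory (where the trace accumulates noise), then compare the
# incoming f2 against the remembered f1. Illustrative assumptions throughout.

rng = np.random.default_rng(1)

f1 = 20.0                                  # base frequency (Hz)
deltas = [-8, -4, -2, 2, 4, 8]             # f2 - f1 (Hz)
enc_sd, mem_sd = 1.0, 2.0                  # encoding and working-memory noise
n_trials = 5_000

for d in deltas:
    f2 = f1 + d
    memory_of_f1 = (f1 + rng.normal(0, enc_sd, n_trials)
                       + rng.normal(0, mem_sd, n_trials))
    sensed_f2 = f2 + rng.normal(0, enc_sd, n_trials)
    says_f2_higher = sensed_f2 > memory_of_f1   # the comparison/decision step
    correct = says_f2_higher == (d > 0)
    print(f"f2 - f1 = {d:+3d} Hz  ->  {correct.mean():.0%} correct")
```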

Biography

Ranulfo Romo is Professor of Neuroscience at the Institute of Cellular Physiology of the National Autonomous University of Mexico (UNAM). He received his M.D. degree from UNAM and a D.Sc. in neuroscience from the University of Paris in France. His postdoctoral work was done with Wolfram Schultz at the University of Fribourg in Switzerland and with Vernon Mountcastle at The Johns Hopkins University in Baltimore. Romo has received the Demuth Prize in Neuroscience from the Demuth Foundation, the National Prize on Sciences and Arts from the Mexican government, and the Prize in Basic Medical Sciences from the Academy of Sciences for the Developing World (TWAS). He is a member of the Mexican Academy of Sciences and of the Neurosciences Research Program headed by Nobel laureate Gerald Edelman, and a Foreign Associate of the US National Academy of Sciences. Romo has been a Howard Hughes International Research Scholar since 1991 and was recently elected a member of El Colegio Nacional.

2013 Keynote – Dora Angelaki

Dora Angelaki, Ph.D.

Department of Neuroscience, Baylor College of Medicine

Audio and slides from the 2013 Keynote Address are available on the Cambridge Research Systems website.

Optimal integration of sensory evidence: Building blocks and canonical computations

Saturday, May 11, 2013, 7:00 pm, Royal Ballroom 4-5

A fundamental aspect of our sensory experience is that information from different modalities is often seamlessly integrated into a unified percept. Recent computational and behavioral studies have shown that humans combine sensory cues according to a statistically optimal scheme derived from Bayesian probability theory: they perform better when two sensory cues are combined. We have explored multisensory cue integration for self-motion (heading) perception based on visual (optic flow) and vestibular (linear acceleration) signals. Neural correlates of optimal cue integration during a multimodal heading discrimination task are found in the activity of single neurons in macaque visual cortex. Neurons with congruent heading preferences for visual and vestibular stimuli (‘congruent cells’) show improved sensitivity under cue combination. In contrast, neurons with opposite heading preferences (‘opposite cells’) show diminished sensitivity under cue combination. Responses of congruent neurons also reflect trial-by-trial re-weighting of visual and vestibular cues, as expected from optimal integration, and population responses can predict the main features of perceptual cue weighting that have been observed many times in humans. The trial-by-trial re-weighting can be simulated using a divisive normalization model extended to multisensory integration. Deficits in behavior after reversible chemical inactivation provide further support for the hypothesis that extrastriate visual cortex mediates multisensory integration for self-motion perception.
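For two independent Gaussian cues, the statistically optimal scheme has a simple closed form: weight each cue by its inverse variance, which predicts a combined discrimination threshold below either single-cue threshold. The sketch below checks that prediction with illustrative numbers (assumed values, not the recorded data).

```python
import numpy as np

# Optimal (maximum-likelihood) integration of two independent Gaussian cues:
# heading estimate = w_vis * visual + w_ves * vestibular, with weights
# proportional to each cue's inverse variance. Predicted combined sigma:
#   sigma_comb^2 = (sigma_vis^2 * sigma_ves^2) / (sigma_vis^2 + sigma_ves^2)
# Values below are illustrative assumptions, not the recorded data.

rng = np.random.default_rng(2)

true_heading = 5.0              # degrees
sd_vis, sd_ves = 2.0, 3.0       # single-cue noise levels
n = 100_000

vis = true_heading + rng.normal(0, sd_vis, n)
ves = true_heading + rng.normal(0, sd_ves, n)

w_vis = (1 / sd_vis**2) / (1 / sd_vis**2 + 1 / sd_ves**2)
combined = w_vis * vis + (1 - w_vis) * ves

pred_sd = np.sqrt(sd_vis**2 * sd_ves**2 / (sd_vis**2 + sd_ves**2))
print(f"visual weight: {w_vis:.2f}")
print(f"combined sd: {combined.std():.3f}  (predicted {pred_sd:.3f}, "
      f"vs single cues {sd_vis} and {sd_ves})")
```

Trial-by-trial re-weighting follows directly: degrading one cue (raising its sigma) shifts weight onto the other, which is the behavior the recorded congruent cells mirror.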

However, objects that move through the environment can distort optic flow and bias perceptual estimates of heading. In biologically constrained simulations, we show that decoding a mixed population of congruent and opposite cells according to their vestibular heading preferences allows estimates of heading to be dissociated from object motion. These theoretical predictions are further supported by perceptual and neural responses: (1) combined visual and vestibular stimulation reduces perceptual biases during object and heading discrimination tasks; (2) as predicted by model simulations, visual/vestibular integration creates a more robust representation of heading in congruent cells and a more robust representation of object motion in opposite cells.

In summary, these findings provide direct evidence for a biological basis of the benefits of multisensory integration, both for improving sensitivity and for resolving sensory ambiguities. The studies summarized here identify both the computations and the neuronal mechanisms that may form the basis for cue integration. Disorders such as autism spectrum disorder may involve deficits in one or more of these canonical computations, which are fundamental in helping merge our senses to interpret and interact with the world.

Biography

Dr. Angelaki is the Wilhelmina Robertson Professor & Chair of the Department of Neuroscience, Baylor College of Medicine, with a joint appointment in the Departments of Electrical & Computer Engineering and Psychology, Rice University. She holds Diploma and Ph.D. degrees in Electrical and Biomedical Engineering from the National Technical University of Athens and the University of Minnesota. Her general area of interest is computational, cognitive and systems neuroscience. Within this broad field, she specializes in the neural mechanisms of spatial orientation and navigation, using humans and non-human primates as models. She is interested in neural coding and in how complex, cognitive behavior is produced by neuronal populations. She has received many honors and awards, including the inaugural Pradel Research Award in Neuroscience from the National Academy of Sciences (2012), the Grass Lectureship from the Society for Neuroscience (2011), the Hallpike-Nylén Medal from the Bárány Society (2006) and the Presidential Early Career Award for Scientists and Engineers (1996). Dr. Angelaki maintains a very active research laboratory funded primarily by the National Institutes of Health and a strong presence in the Society for Neuroscience and other international organizations.

2014 Keynote – Mandyam V. Srinivasan

Mandyam V. Srinivasan, Ph.D.

Queensland Brain Institute and School of Information Technology and Electrical Engineering, University of Queensland

Audio and slides from the 2014 Keynote Address are available on the Cambridge Research Systems website.

MORE THAN A HONEY MACHINE: Vision and Navigation in Honeybees and Applications to Robotics

Saturday, May 17, 2014, 7:15 pm, Talk Room 1-2

Flying insects are remarkably adept at seeing and perceiving the world and navigating effectively in it, despite possessing a brain that weighs less than a milligram and carries fewer than 0.01% as many neurons as ours does. Although most insects lack stereo vision, they use a number of ingenious strategies for perceiving their world in three dimensions and navigating successfully in it.

The talk will describe how honeybees use their vision to stabilize and control their flight, and navigate to food sources. Bees and birds negotiate narrow gaps safely by balancing the apparent speeds of the images in the two eyes. Flight speed is regulated by holding constant the average image velocity as seen by both eyes. Visual cues based on motion are also used to compensate for crosswinds, and to avoid collisions with other flying insects. Bees landing on a surface hold constant the magnitude of the optic flow that they experience as they approach the surface, thus automatically ensuring that flight speed decreases to zero at touchdown. Foraging bees gauge distance flown by integrating optic flow: they possess a visually-driven “odometer” that is robust to variations in wind, body weight, energy expenditure, and the properties of the visual environment. Mid-air collisions are avoided by sensing cues derived from visual parallax, and using appropriate flight control maneuvers.
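The landing strategy in particular reduces to a clean control law: the ground’s image angular velocity is roughly ω = v/h (forward speed over height), so holding ω constant forces v = ω·h to fall in proportion to height and to approach zero at touchdown. Below is a minimal simulation of that law (my sketch, with an assumed fixed glide angle and flow setpoint, not the bees’ measured parameters).

```python
# Constant-optic-flow landing: the image angular velocity of the ground is
# roughly omega = v / h (forward speed over height). Holding omega fixed
# forces v = omega * h, so speed shrinks in proportion to height and
# smoothly approaches zero at touchdown. Tying descent rate to forward
# speed (a fixed glide angle) is an illustrative assumption.

omega = 2.0          # target image velocity (rad/s), assumed setpoint
glide = 0.2          # descent rate as a fraction of forward speed (assumed)
h, x, t, dt = 10.0, 0.0, 0.0, 0.01

while h > 0.01:
    v = omega * h            # control law: keep v / h = omega constant
    x += v * dt              # forward progress
    h -= glide * v * dt      # descend at a fixed fraction of forward speed
    t += dt

print(f"touchdown after {t:.1f} s and {x:.1f} m of forward flight")
print(f"final forward speed: {omega * h:.3f} m/s (near zero at the surface)")
```

The same quantity, integrated over time, yields the visual odometer: total image motion depends on distance flown relative to the scene, not on absolute speed, which is why the odometer is robust to wind and energy expenditure.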

Some of the insect-based strategies described above are being used to design, implement and test biologically-inspired algorithms for the guidance of autonomous terrestrial and aerial vehicles. Application to manoeuvres such as attitude stabilization, terrain following, obstacle avoidance, automated landing, and the execution of extreme aerobatic manoeuvres will be described.
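As one sketch of how the gap-negotiation strategy maps onto vehicle guidance (assumed first-order dynamics and gains, not the authors’ implementations), a vehicle moving down a corridor at lateral offset y sees the two walls slide past at image speeds of roughly v/(W − y) and v/(W + y), and steering against their difference re-centers it:

```python
# Corridor centering by balancing lateral optic flow, as a toy control loop.
# For forward speed v and lateral offset y in a corridor of half-width W,
# the left/right walls slide past at roughly v/(W - y) and v/(W + y).
# Steering against the difference drives y back toward zero. The gain,
# speed, and first-order lateral dynamics are illustrative assumptions.

v, W = 1.0, 1.0          # forward speed (m/s), corridor half-width (m)
gain, dt = 0.5, 0.01
y = 0.6                  # start well off-center (m)

for step in range(2000):
    flow_left = v / (W - y)
    flow_right = v / (W + y)
    y += -gain * (flow_left - flow_right) * dt   # steer toward the weaker flow

print(f"lateral offset after {2000 * dt:.0f} s: {y:.4f} m (centered)")
```

Because the flow difference grows as the vehicle nears either wall, the correction is automatically strongest exactly when it is most needed, with no need to measure distance explicitly.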

This research was supported by ARC Centre of Excellence in Vision Science Grant CE0561903, ARC Discovery Grant DP0559306, and by a Queensland Smart State Premier’s Fellowship.

Biography

Srinivasan’s research focuses on the principles of visual processing, perception and cognition in simple natural systems, and on the application of these principles to machine vision and robotics.
He holds an undergraduate degree in Electrical Engineering from Bangalore University, a Master’s degree in Electronics from the Indian Institute of Science, a Ph.D. in Engineering and Applied Science from Yale University, a D.Sc. in Neuroethology from the Australian National University, and an Honorary Doctorate from the University of Zurich. Srinivasan is presently Professor of Visual Neuroscience at the Queensland Brain Institute and the School of Information Technology and Electrical Engineering of the University of Queensland. Among his awards are Fellowships of the Australian Academy of Science, the Royal Society of London, and the Academy of Sciences for the Developing World; the 2006 Australian Prime Minister’s Science Prize; the 2008 U.K. Rank Prize for Optoelectronics; the 2009 Distinguished Alumni Award of the Indian Institute of Science; and appointment as a Member of the Order of Australia (AM) in 2012.

2015 Keynote – Bruno Olshausen

Bruno Olshausen, Ph.D.

Professor, Helen Wills Neuroscience Institute and School of Optometry, UC Berkeley; Director, Redwood Center for Theoretical Neuroscience

Audio and slides from the 2015 Keynote Address are available on the Cambridge Research Systems website.

Vision in brains and machines

Saturday, May 16, 2015, 7:15 pm, Talk Room 1-2

The past twenty years have seen important advances both in our understanding of visual representation in brains and in the development of algorithms that enable machines to ‘see.’ What is perhaps most remarkable about these advances is how they emerged from the confluence of ideas from different disciplines: findings from signal analysis and statistics shed new light on the possible coding principles underlying image representations in visual cortex, and cortical models in turn inspired the development of multilayer neural network architectures which are now achieving breakthrough performance on object recognition tasks (deep learning). Here I shall review these developments and discuss what further insights stand to be gained from this cross-fertilization of ideas.
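One concrete example of such a coding principle, and the one most associated with the speaker’s own work, is sparse coding (Olshausen & Field, 1996): represent an image patch as a linear combination of dictionary elements with as few active coefficients as possible. The sketch below runs a minimal ISTA inference loop on random stand-in data; it is an illustration with assumed sizes and penalty, not the original model.

```python
import numpy as np

# Sparse coding inference: approximate a patch x as D @ a while penalizing
# the number of active coefficients, via the lasso objective
#   0.5 * ||x - D a||^2 + lam * ||a||_1
# solved here with ISTA (gradient step + soft threshold). The dictionary D
# and "patch" x are random stand-ins for a learned basis and natural images.

rng = np.random.default_rng(3)

n_pix, n_basis = 64, 128              # 8x8 patches, 2x overcomplete code
D = rng.normal(size=(n_pix, n_basis))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary elements
x = rng.normal(size=n_pix)            # stand-in "image patch"

lam = 0.5
step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz const of grad
a = np.zeros(n_basis)

for _ in range(200):
    grad = D.T @ (D @ a - x)
    a = a - step * grad
    a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold

print(f"active coefficients: {(a != 0).sum()} of {n_basis}")
print(f"reconstruction error: {np.linalg.norm(x - D @ a):.3f}")
```

Learned on natural images, the dictionary elements of such a model come to resemble the localized, oriented receptive fields of V1 simple cells, which is the sense in which a statistical principle "sheds light" on cortical representation.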

Biography

Bruno Olshausen received B.S. and M.S. degrees in electrical engineering from Stanford University and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. From 1996 to 2005 he was Assistant and subsequently Associate Professor in the Departments of Psychology and Neurobiology, Physiology and Behavior at UC Davis. Since 2005 he has been at UC Berkeley, where he is currently Professor in the Helen Wills Neuroscience Institute and School of Optometry.

He also serves as Director of the Redwood Center for Theoretical Neuroscience, an interdisciplinary research group focusing on mathematical and computational models of brain function. Olshausen’s research aims to understand the information processing strategies employed by the brain for doing tasks such as object recognition and scene analysis.
