2012 Keynote – Ranulfo Romo

Ranulfo Romo, M.D., D.Sc.

Professor of Neuroscience at the Institute of Cellular Physiology, National Autonomous University of Mexico (UNAM)

Audio and slides from the 2012 Keynote Address are available on the Cambridge Research Systems website.

Conversion of sensory signals into perceptual decisions

Saturday, May 12, 2012, 7:00 pm, Royal Ballroom 4-5

Most perceptual tasks require sequential steps to be carried out. This must be the case, for example, when subjects discriminate the difference in frequency between two mechanical vibrations applied sequentially to their fingertips. This perceptual task can be understood as a chain of neural operations: encoding the two consecutive stimulus frequencies, maintaining the first stimulus in working memory, comparing the second stimulus to the memory trace left by the first stimulus, and communicating the result of the comparison to the motor apparatus. Where and how in the brain are these cognitive operations executed? We addressed this problem by recording single neurons from several cortical areas while trained monkeys executed the vibrotactile discrimination task. We found that primary somatosensory cortex (S1) drives higher cortical areas where past and current sensory information are combined, such that a comparison of the two evolves into a decision. Consistent with this result, direct activation of S1 can trigger quantifiable percepts in this task. These findings provide a fairly complete panorama of the neural dynamics that underlie the transformation of sensory information into an action, and they emphasize the importance of studying multiple cortical areas during the same behavioral task.
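As a rough illustration of that chain of operations, here is a toy Python sketch (our illustration, not Romo's model) that treats each stage as a noisy step: encode f1, hold it in working memory across the delay, compare f2 against the memory trace, and report the choice. All noise parameters are arbitrary assumptions.

```python
# Toy sketch of the vibrotactile discrimination task as a chain of noisy
# operations (illustrative parameters, not fitted to any data).
import random

def discriminate_trial(f1_hz, f2_hz, encoding_noise=0.5, memory_noise=1.0):
    """Return True if the simulated subject reports f2 > f1."""
    # 1. Sensory encoding of the first vibration (S1-like stage).
    encoded_f1 = f1_hz + random.gauss(0, encoding_noise)
    # 2. Working memory: the trace degrades over the delay period.
    memory_trace = encoded_f1 + random.gauss(0, memory_noise)
    # 3. Encoding of the second vibration and comparison against the trace.
    encoded_f2 = f2_hz + random.gauss(0, encoding_noise)
    # 4. The comparison result is communicated to the motor stage.
    return encoded_f2 > memory_trace

# Proportion of "f2 > f1" reports for a 2 Hz difference over many trials.
trials = [discriminate_trial(20.0, 22.0) for _ in range(10_000)]
print(f"P(report f2 > f1) = {sum(trials) / len(trials):.3f}")
```

Even this caricature reproduces the task's key property: accuracy depends on the stimulus difference relative to the noise accumulated across encoding and the memory delay.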

Biography

Ranulfo Romo is Professor of Neuroscience at the Institute of Cellular Physiology of the National Autonomous University of Mexico (UNAM). He received his M.D. degree from UNAM and a D.Sc. in neuroscience from the University of Paris in France. His postdoctoral work was done with Wolfram Schultz at the University of Fribourg in Switzerland and with Vernon Mountcastle at The Johns Hopkins University in Baltimore. Romo has received the Demuth Prize in Neuroscience from the Demuth Foundation, the National Prize on Sciences and Arts from the Mexican government, and the Prize in Basic Medical Sciences from the Academy of Sciences for the Developing World (TWAS). He is a member of the Mexican Academy of Sciences and of the Neurosciences Research Program headed by Nobel laureate Gerald Edelman, and a Foreign Associate of the US National Academy of Sciences. Romo has been a Howard Hughes International Research Scholar since 1991 and was recently elected a member of El Colegio Nacional.

2013 Keynote – Dora Angelaki

Dora Angelaki, Ph.D.

Dept of Neuroscience, Baylor College of Medicine

Audio and slides from the 2013 Keynote Address are available on the Cambridge Research Systems website.

Optimal integration of sensory evidence: Building blocks and canonical computations

Saturday, May 11, 2013, 7:00 pm, Royal Ballroom 4-5

A fundamental aspect of our sensory experience is that information from different modalities is often seamlessly integrated into a unified percept. Recent computational and behavioral studies have shown that humans combine sensory cues according to a statistically optimal scheme derived from Bayesian probability theory, performing better when two sensory cues are combined than with either cue alone. We have explored multisensory cue integration for self-motion (heading) perception based on visual (optic flow) and vestibular (linear acceleration) signals. Neural correlates of optimal cue integration during a multimodal heading discrimination task are found in the activity of single neurons in the macaque visual cortex. Neurons with congruent heading preferences for visual and vestibular stimuli (‘congruent cells’) show improved sensitivity under cue combination. In contrast, neurons with opposite heading preferences (‘opposite cells’) show diminished sensitivity under cue combination. Responses of congruent neurons also reflect trial-by-trial re-weighting of visual and vestibular cues, as expected from optimal integration, and population responses can predict the main features of perceptual cue weighting that have been observed many times in humans. The trial-by-trial re-weighting can be simulated using a divisive normalization model extended to multisensory integration. Deficits in behavior after reversible chemical inactivation provide further support for the hypothesis that extrastriate visual cortex mediates multisensory integration for self-motion perception.
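For readers unfamiliar with the optimal scheme, the sketch below shows the standard maximum-likelihood rule for fusing two independent Gaussian cues: each cue is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone, which is why combined-cue performance improves. The function and numbers are illustrative assumptions, not values from the study.

```python
# Minimal sketch of reliability-weighted (maximum-likelihood) cue fusion
# for heading, assuming independent Gaussian visual and vestibular cues.
def combine_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Fuse two Gaussian estimates; weights are inverse variances."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu_comb = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var_comb = 1 / (1 / var_vis + 1 / var_vest)  # < min(var_vis, var_vest)
    return mu_comb, var_comb

# A reliable vestibular cue dominates an unreliable visual one.
mu, var = combine_cues(mu_vis=5.0, var_vis=4.0, mu_vest=3.0, var_vest=1.0)
print(f"combined heading = {mu:.2f} deg, variance = {var:.2f}")  # 3.40, 0.80
```

Trial-by-trial re-weighting then amounts to var_vis and var_vest tracking the current reliability of each cue (for example, optic-flow coherence), shifting the weights exactly as described above.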

However, objects that move through the environment can distort optic flow and bias perceptual estimates of heading. In biologically constrained simulations, we show that decoding a mixed population of congruent and opposite cells according to their vestibular heading preferences can allow estimates of heading to be dissociated from object motion. These theoretical predictions are further supported by perceptual and neural responses: (1) Combined visual and vestibular stimulation reduces perceptual biases during object and heading discrimination tasks. (2) As predicted by model simulations, visual/vestibular integration creates a more robust representation of heading in congruent cells and a more robust representation of object motion in opposite cells.
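A deliberately minimal linear toy (our simplification, not the biologically constrained simulations above) shows why reading out congruent and opposite cells by their vestibular preferences can separate heading from object motion: object motion perturbs only the visual signal, and that perturbation enters the two cell types with opposite sign.

```python
# Linear toy: a congruent cell sums the visual and vestibular heading
# signals; an opposite cell takes their difference. Object motion
# distorts only the visual (optic flow) signal. Numbers are arbitrary.
heading = 10.0      # true self-motion direction (deg)
object_bias = 4.0   # optic-flow distortion caused by a moving object (deg)

visual = heading + object_bias   # visual heading estimate, biased
vestibular = heading             # inertial heading estimate, unbiased

congruent = visual + vestibular  # matched visual/vestibular preference
opposite = vestibular - visual   # opposed visual/vestibular preference

heading_est = (congruent + opposite) / 2               # = vestibular: bias cancels
object_est = (congruent - opposite) / 2 - heading_est  # = visual - vestibular
print(heading_est, object_est)   # 10.0 4.0
```

The real population readout is nonlinear and noisy, but the sign structure above is the intuition for why opposite cells end up carrying the more robust representation of object motion.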

In summary, these findings provide direct evidence for a biological basis of the benefits of multisensory integration, both for improving sensitivity and for resolving sensory ambiguities. The studies we summarize identify both the computations and the neuronal mechanisms that may form the basis for cue integration. Individuals with conditions such as autism spectrum disorders might suffer from deficits in one or more of these canonical computations, which are fundamental in helping merge our senses to interpret and interact with the world.

Biography

Dr. Angelaki is the Wilhelmina Robertson Professor & Chair of the Department of Neuroscience, Baylor College of Medicine, with a joint appointment in the Departments of Electrical & Computer Engineering and Psychology, Rice University. She holds a Diploma in Electrical Engineering from the National Technical University of Athens and a Ph.D. in Biomedical Engineering from the University of Minnesota. Her general area of interest is computational, cognitive and systems neuroscience. Within this broad field, she specializes in the neural mechanisms of spatial orientation and navigation, using humans and non-human primates as models. She is interested in neural coding and in how complex, cognitive behavior is produced by neuronal populations. She has received many honors and awards, including the inaugural Pradel Award in Neuroscience from the National Academy of Sciences (2012), the Grass Lectureship from the Society for Neuroscience (2011), the Hallpike-Nylen Medal from the Barany Society (2006), and the Presidential Early Career Award for Scientists and Engineers (1996). Dr. Angelaki maintains a very active research laboratory funded primarily by the National Institutes of Health and a strong presence in the Society for Neuroscience and other international organizations.

2014 Keynote – Mandyam V. Srinivasan

Mandyam V. Srinivasan, Ph.D.

Queensland Brain Institute and School of Information Technology and Electrical Engineering, University of Queensland

Audio and slides from the 2014 Keynote Address are available on the Cambridge Research Systems website.

MORE THAN A HONEY MACHINE: Vision and Navigation in Honeybees and Applications to Robotics

Saturday, May 17, 2014, 7:15 pm, Talk Room 1-2

Flying insects are remarkably adept at seeing and perceiving the world and navigating effectively in it, despite possessing a brain that weighs less than a milligram and carries fewer than 0.01% as many neurons as ours does. Although most insects lack stereo vision, they use a number of ingenious strategies for perceiving their world in three dimensions and navigating successfully in it.

The talk will describe how honeybees use their vision to stabilize and control their flight, and navigate to food sources. Bees and birds negotiate narrow gaps safely by balancing the apparent speeds of the images in the two eyes. Flight speed is regulated by holding constant the average image velocity as seen by both eyes. Visual cues based on motion are also used to compensate for crosswinds, and to avoid collisions with other flying insects. Bees landing on a surface hold constant the magnitude of the optic flow that they experience as they approach the surface, thus automatically ensuring that flight speed decreases to zero at touchdown. Foraging bees gauge distance flown by integrating optic flow: they possess a visually-driven “odometer” that is robust to variations in wind, body weight, energy expenditure, and the properties of the visual environment. Mid-air collisions are avoided by sensing cues derived from visual parallax, and using appropriate flight control maneuvers.
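The landing strategy has a particularly clean consequence: if descent speed is servoed so that optic flow (speed divided by height) stays constant, height decays exponentially and speed reaches zero exactly at touchdown, with no explicit knowledge of distance required. The sketch below simulates this one-dimensional case; the set point, time step, and touchdown threshold are illustrative assumptions.

```python
# Constant-optic-flow landing in one dimension: hold speed/height fixed,
# and speed automatically falls to zero at contact. Parameters are
# illustrative, not measured bee values.
def landing_profile(height_m=2.0, flow_setpoint=1.0, dt=0.01):
    """Descend while holding optic flow (speed / height) at a set value."""
    t = 0.0
    speed = flow_setpoint * height_m
    while height_m > 0.01:                 # 1 cm touchdown threshold
        speed = flow_setpoint * height_m   # servo speed to hold flow constant
        height_m -= speed * dt             # descend at that speed
        t += dt
    return t, speed

t, v = landing_profile()
print(f"touchdown after {t:.2f} s at {v:.3f} m/s")  # speed is ~0 at contact
```

The same optic-flow signal, held at different set points, also underlies the speed-regulation behaviour described above.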

Some of the insect-based strategies described above are being used to design, implement and test biologically-inspired algorithms for the guidance of autonomous terrestrial and aerial vehicles. Applications to manoeuvres such as attitude stabilization, terrain following, obstacle avoidance, automated landing, and the execution of extreme aerobatic manoeuvres will be described.

This research was supported by ARC Centre of Excellence in Vision Science Grant CE0561903, ARC Discovery Grant DP0559306, and by a Queensland Smart State Premier’s Fellowship.

Biography

Srinivasan’s research focuses on the principles of visual processing, perception and cognition in simple natural systems, and on the application of these principles to machine vision and robotics.
He holds an undergraduate degree in Electrical Engineering from Bangalore University, a Master’s degree in Electronics from the Indian Institute of Science, a Ph.D. in Engineering and Applied Science from Yale University, a D.Sc. in Neuroethology from the Australian National University, and an Honorary Doctorate from the University of Zurich. Srinivasan is presently Professor of Visual Neuroscience at the Queensland Brain Institute and the School of Information Technology and Electrical Engineering of the University of Queensland. Among his awards are Fellowships of the Australian Academy of Science, the Royal Society of London, and the Academy of Sciences for the Developing World; the 2006 Australian Prime Minister’s Prize for Science; the 2008 U.K. Rank Prize for Optoelectronics; the 2009 Distinguished Alumni Award of the Indian Institute of Science; and appointment as a Member of the Order of Australia (AM) in 2012.

2015 Keynote – Bruno Olshausen

Bruno Olshausen, Ph.D.

Professor, Helen Wills Neuroscience Institute and School of Optometry, UC Berkeley; Director, Redwood Center for Theoretical Neuroscience

Audio and slides from the 2015 Keynote Address are available on the Cambridge Research Systems website.

Vision in brains and machines

Saturday, May 16, 2015, 7:15 pm, Talk Room 1-2

The past twenty years have seen important advances both in our understanding of visual representation in brains and in the development of algorithms that enable machines to ‘see.’ What is perhaps most remarkable about these advances is how they emerged from the confluence of ideas from different disciplines: findings from signal analysis and statistics shed new light on the possible coding principles underlying image representations in visual cortex, and cortical models in turn inspired the development of multilayer neural network architectures which are now achieving breakthrough performance at object recognition tasks (deep learning). Here I shall review these developments, and I shall discuss what further insights stand to be gained from this cross-fertilization of ideas.
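One concrete instance of those coding principles is sparse coding (Olshausen & Field, 1996), in which an image patch is described by a few active elements of an overcomplete dictionary. The sketch below runs plain ISTA inference on a random stand-in dictionary; it is a minimal illustration of the idea, not the original model or its learning rule, and all sizes and parameters are arbitrary.

```python
# Sparse coding inference via ISTA: find sparse coefficients a such that
# the patch x is approximated by D @ a. D is a random stand-in dictionary.
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_code(x, D, lam=0.1, n_steps=200):
    """Minimize 0.5 * ||x - D a||^2 + lam * ||a||_1 by iterative
    shrinkage-thresholding (ISTA)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of grad
    a = np.zeros(D.shape[1])
    for _ in range(n_steps):
        grad = D.T @ (D @ a - x)             # gradient of the quadratic term
        a = soft_threshold(a - step * grad, step * lam)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))           # 64-dim patches, 128 atoms
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
true_a = rng.standard_normal(128) * (rng.random(128) < 0.05)  # sparse cause
x = D @ true_a                               # synthesize a patch
a = sparse_code(x, D)
print(f"{np.count_nonzero(a)} of {a.size} coefficients active")
```

Trained on natural images rather than random data, dictionaries learned under this objective develop localized, oriented, bandpass atoms resembling V1 simple-cell receptive fields, which is one of the links between statistics and cortical representation mentioned above.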

Biography

Bruno Olshausen received B.S. and M.S. degrees in electrical engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. From 1996 to 2005 he was Assistant and subsequently Associate Professor in the Departments of Psychology and Neurobiology, Physiology and Behavior at UC Davis. Since 2005 he has been at UC Berkeley, where he is currently Professor in the Helen Wills Neuroscience Institute and School of Optometry.

He also serves as Director of the Redwood Center for Theoretical Neuroscience, an interdisciplinary research group focusing on mathematical and computational models of brain function. Olshausen’s research aims to understand the information processing strategies employed by the brain for doing tasks such as object recognition and scene analysis.
