No Keynote was presented at the V-VSS 2020 meeting.
Suzana Herculano-Houzel, Ph.D.
Associate Professor of Psychology and
Associate Director for Communications, Vanderbilt Brain Institute
Suzana Herculano-Houzel, Ph.D., is a biologist and neuroscientist at Vanderbilt University, where she is Associate Professor in the Departments of Psychology and Biological Sciences. Her research focuses on what different brains are made of; why that matters in terms of cognition, energy cost, and longevity; and how the human brain is remarkable, but not special, in its makeup. She is the author of The Human Advantage (MIT Press, 2016), in which she tells the story of her discoveries on how many neurons different species have—and how the number of neurons in the cerebral cortex of humans is the largest of them all, thanks to the calories amassed with a very early technology developed by our ancestors: cooking. She spoke at TEDGlobal 2013 and TEDxNashville 2018 and is an avid communicator of science to the general public.
To learn more about Professor Herculano-Houzel and her research, please visit her website.
Whatever works: Celebrating diversity in brain scaling and evolution
Saturday, May 22, 2021, 1:00 pm EDT
Animals come in many sizes and shapes, and one would be hard-pressed to say that any one is better than another, because all of them have passed the test of evolution: they’re here, so they have obviously been good enough. Still, what weighs on the trade-off scale when animals and their brains vary in size? What can be said about scaling of the visual system, in particular? What does it cost to have more neurons? Is it even necessary for larger animals to have more neurons? This talk will tackle the old topic of scaling in a new light that celebrates diversity, rather than assuming that biology is improved through natural selection.
William T. Freeman
Thomas and Gerd Perkins Professor of Electrical Engineering
and Computer Science, Massachusetts Institute of Technology, Google Research
William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there. He was the Associate Department Head from 2011 to 2014.
Dr. Freeman’s current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and test-of-time awards for papers from 1990, 1995 and 2005. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops.
He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005, and for CVPR 2013.
To learn more about Professor Freeman and his research, please visit his website.
Visualizations of imperceptible visual signals
Saturday, May 18, 2019, 7:15 pm, Talk Room 1-2
Many useful visual stimuli are below the threshold of perception. By amplifying tiny motions and small photometric changes we can reveal a rich world of sub-threshold imagery.
Using an image representation modeled after features of V1, we have developed a “motion microscope” that rerenders a video with the small motions amplified. I’ll show motion magnified videos of singers, dancers, bridges, robots, and pipes, revealing properties that are otherwise hidden. Small photometric changes can also be measured and amplified. This can reveal the human pulse on skin, or people moving in an adjacent room.
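The principle behind this kind of amplification can be sketched as a per-pixel temporal bandpass filter whose output is scaled and added back to the video (a simplified Eulerian-style magnification, not the V1-based representation described in the talk; the function name, parameters, and synthetic data below are illustrative only):

```python
import numpy as np

def magnify_temporal(frames, fps, f_lo, f_hi, alpha):
    """Amplify temporal intensity variation within a frequency band.

    frames: float array of shape (T, H, W); returns the same shape with
    the band-passed component scaled by alpha and added back in.
    """
    n = frames.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)    # temporal frequencies (Hz)
    spectrum = np.fft.rfft(frames, axis=0)     # per-pixel temporal FFT
    keep = (freqs >= f_lo) & (freqs <= f_hi)   # ideal bandpass mask
    variation = np.fft.irfft(spectrum * keep[:, None, None], n=n, axis=0)
    return frames + alpha * variation

# Demo: a 1 Hz flicker of amplitude 0.001 -- far too small to notice --
# becomes obvious after 100x amplification, much like a pulse on skin.
fps, n = 30, 90
t = np.arange(n) / fps
flicker = 0.001 * np.sin(2 * np.pi * 1.0 * t)
frames = 0.5 + flicker[:, None, None] * np.ones((n, 4, 4))
out = magnify_temporal(frames, fps, f_lo=0.5, f_hi=2.0, alpha=100.0)
```

The bandpass step matters: amplifying all temporal variation would also magnify noise, so the band is tuned to the signal of interest (roughly 0.5–2 Hz for a resting heartbeat).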
Unseen intensity changes also occur when an occluder modulates light from a scene, creating an “accidental camera”. I’ll describe the invisible signals caused by corners and plants, and show how they can reveal imagery that is otherwise out of view.
I’ll close by describing my white whale, the Earth selfie. This is an effort to photograph the Earth from space with ground-based equipment by using the Moon as a camera. I’ll explain why this project matters, and will summarize recent progress.
Kenneth C. Catania
Stevenson Professor of Biological Sciences
Department of Biological Sciences, Vanderbilt University
More than meets the eye: the extraordinary brains and behaviors of specialized predators
Saturday, May 19, 2018, 7:15 pm, Talk Room 1-2
Predator-prey interactions are high stakes for both participants and have resulted in the evolution of high-acuity senses and dramatic attack and escape behaviors. I will describe the neurobiology and behavior of some extreme predators, including star-nosed moles, tentacled snakes, and electric eels. Each species has evolved special senses and each provides unique perspectives on the evolution of brains and behavior.
A neuroscientist by training, Ken Catania has spent much of his career investigating the unusual brains and behaviors of specialized animals. These have included star-nosed moles, tentacled snakes, water shrews, alligators, crocodiles, and most recently electric eels. His studies often focus on predators that have evolved special senses and weapons to find and overcome elusive prey. He is considered an expert in extreme animal behaviors and studies specialized species to reveal general principles about brain organization and sensory systems. Catania was named a MacArthur Fellow in 2006, a Guggenheim Fellow in 2014, and in 2013 he received the Pradel Research Award in Neurosciences from the National Academy of Sciences. Catania received a BS in zoology from the University of Maryland (1989), a Ph.D. (1994) in neurosciences from the University of California, San Diego, and is currently a Stevenson Professor of Biological Sciences at Vanderbilt University.
Katherine J. Kuchenbecker
Director of the new Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
Associate Professor (on leave), Mechanical Engineering and Applied Mechanics Department, University of Pennsylvania, Philadelphia, USA
Haptography: Capturing and Displaying Touch
Saturday, May 20, 2017, 7:15 pm, Talk Room 1-2
When you touch objects in your surroundings, you can discern each item’s physical properties from the rich array of haptic cues that you feel, including both the tactile sensations in your skin and the kinesthetic cues from your muscles and joints. Although physical interaction with the world is at the core of human experience, very few robotic and computer interfaces provide the user with high-fidelity touch feedback, limiting their intuitiveness. By way of two detailed examples, this talk will describe the approach of haptography, which uses biomimetic sensors and signal processing to capture tactile sensations, plus novel algorithms and actuation systems to display realistic touch cues to the user. First, we invented a novel way to map deformations and vibrations sensed by a robotic fingertip to the actuation of a fingertip tactile display in real time. We then demonstrated the striking utility of such cues in a simulated tissue palpation task through integration with a da Vinci surgical robot. Second, we created the world’s most realistic haptic virtual surfaces by recording and modeling what a user feels when touching real objects with an instrumented stylus. The perceptual effects of displaying the resulting data-driven friction forces, tapping transients, and texture vibrations were quantified by having users compare the original surfaces to their virtual versions. While much work remains to be done, we are starting to see the tantalizing potential of systems that leverage tactile cues to allow a user to interact with distant or virtual environments as though they were real and within reach.
Katherine J. Kuchenbecker is Director of the new Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She is currently on leave from her appointment as Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, where she held the Class of 1940 Bicentennial Endowed Term Chair and a secondary appointment in Computer and Information Science. Kuchenbecker earned a PhD (2006) in Mechanical Engineering at Stanford University and was a postdoctoral fellow at the Johns Hopkins University before joining the faculty at Penn in 2007. Her research centers on haptic interfaces, which enable a user to touch virtual and distant objects as though they were real and within reach, as well as haptic sensing systems, which allow robots to physically interact with and feel real objects. She delivered a widely viewed TEDYouth talk on haptics in 2012, and she has received several honors including a 2009 NSF CAREER Award, the 2012 IEEE Robotics and Automation Society Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, and many best paper and best demonstration awards.
Sabine Kastner, Ph.D.
Professor of Neuroscience and Psychology in the Princeton Neuroscience Institute and Department of Psychology
Neural dynamics of the primate attention network
Saturday, May 14, 2016, 7:15 pm, Talk Room 1-2
The selection of information from our cluttered sensory environments is one of the most fundamental cognitive operations performed by the primate brain. In the visual domain, the selection process is thought to be mediated by a static spatial mechanism – a ‘spotlight’ that can be flexibly shifted around the visual scene. This spatial search mechanism has been associated with a large-scale network that consists of multiple nodes distributed across all major cortical lobes and also includes subcortical regions. Identifying the specific functions of each network node and their functional interactions is a major goal for the field of cognitive neuroscience. In my lecture, I will challenge two common notions of attention research. First, I will show behavioral and neural evidence that the attentional spotlight is neither stationary nor unitary. In the appropriate behavioral context, even when spatial attention is sustained at a given location, additional spatial mechanisms operate flexibly and automatically in parallel to monitor the visual environment. Second, spatial attention is assumed to be under ‘top-down’ control of higher order cortex. In contrast, I will provide neural evidence indicating that attentional control is exerted through thalamo-cortical interactions. Together, this evidence indicates the need for major revisions of traditional attention accounts.
Sabine Kastner is a Professor of Neuroscience and Psychology in the Princeton Neuroscience Institute and Department of Psychology. She also serves as the Scientific Director of Princeton’s neuroimaging facility and heads the Neuroscience of Attention and Perception Laboratory. Kastner earned an M.D. (1993) and a Ph.D. (1994) and received postdoctoral training at the Max Planck Institute for Biophysical Chemistry and NIMH before joining the faculty at Princeton University in 2000. She studies the neural basis of visual perception, attention, and awareness in the primate brain; she has published more than 100 articles in journals and books and co-edited the ‘Handbook of Attention’ (OUP), published in 2013. Kastner serves on several editorial boards and is currently an editor at eLife. Kastner enjoys a number of outreach activities such as fostering the careers of young women in science (Young Women’s Science Fair, Synapse project), promoting neuroscience in schools (Saturday Science lectures, science projects in elementary schools, chief editor of the Understanding Neuroscience section of Frontiers for Young Minds), and exploring intersections of neuroscience and art (events at The Kitchen and the Rubin Museum in NYC).
Robert H. Wurtz
Laboratory of Sensorimotor Research, National Eye Institute, NIH, Bethesda, MD
Audio and slides from the 2009 Keynote Address are available on the Cambridge Research Systems website.
Brain Circuits for Stable Visual Perception
Saturday, May 9, 2009, 7:30 pm, Royal Palm Ballroom
In the 19th century von Helmholtz detailed the need for signals in the brain that provide information about each impending eye movement. He argued that such signals could interact with the visual input from the eye to preserve stable visual perception in spite of the incessant saccadic eye movements that continually displace the image of the visual world on the retina. In the 20th century, Sperry as well as von Holst and Mittelstaedt provided experimental evidence in fish and flies for such signals for the internal monitoring of movement, signals they termed corollary discharge or efference copy, respectively. Experiments in the last decade (reviewed by Sommer and Wurtz, 2008) have established a corollary discharge pathway in the monkey brain that accompanies saccadic eye movements. This corollary activity originates in the superior colliculus and is transmitted to frontal cortex through the major thalamic nucleus related to frontal cortex, the medial dorsal nucleus. The corollary discharge has been demonstrated to contribute to the programming of saccades when visual guidance is not available. It might also provide the internal movement signal invoked by Helmholtz to produce stable visual perception. A specific neuronal mechanism for such stability was proposed by Duhamel, Colby, and Goldberg (1992) based upon their observation that neurons in monkey parietal cortex shifted the location of their maximal sensitivity with each impending saccade. Such shifting receptive fields must depend on input from a corollary discharge, and this is just the input to frontal cortex recently identified. Inactivating the corollary discharge to frontal cortex at its thalamic relay produced a reduction in the shift.
This dependence of the shifting receptive fields on an identified corollary discharge provides direct experimental evidence for modulation of visual processing by a signal within the brain related to the generation of movement – an interaction proposed by Helmholtz for maintaining stable visual perception.
Robert H. Wurtz is an NIH Distinguished Scientist and Chief of the Section on Visuomotor Integration at the National Eye Institute. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and has received many awards. His work is centered on the visual and oculomotor system of the primate brain that controls the generation of rapid or saccadic eye movements, and the use of the monkey as a model of human visual perception and the control of movement. His recent work has concentrated on the inputs to the cerebral cortex that underlie visual attention and the stability of visual perception.
Carla J. Shatz
Professor of Biology and Neurobiology and Director of Bio-X, Stanford University
Audio and slides from the 2010 Keynote Address are available on the Cambridge Research Systems website.
Releasing the Brake on Ocular Dominance Plasticity
Saturday, May 8, 2010, 7:45 pm, Royal Palm Ballroom 4-5
Connections in the adult visual system are highly precise, but they do not start out that way. Precision emerges during critical periods of development as synaptic connections remodel, a process requiring neural activity and involving regression of some synapses and strengthening and stabilization of others. Activity also regulates neuronal genes; in an unbiased PCR-based differential screen, we discovered unexpectedly that MHC Class I genes are expressed in neurons and are regulated by spontaneous activity and visual experience (Corriveau et al, 1998; Goddard et al, 2007). To assess requirements for MHCI in the CNS, mice lacking expression of specific MHCI genes were examined. Synapse regression in the developing visual system did not occur, synaptic strengthening was greater than normal in adult hippocampus, and ocular dominance (OD) plasticity in visual cortex was enhanced (Huh et al, 2000; Datwani et al, 2009). We searched for receptors that could interact with neuronal MHCI and carry out these activity-dependent processes. mRNA for PirB, an innate immune receptor, was found highly expressed in neurons in many regions of mouse CNS. We generated mutant mice lacking PirB function and discovered that OD plasticity is also enhanced (Syken et al., 2006), as is hippocampal LTP. Thus, MHCI ligands signaling via the PirB receptor may function to “brake” activity-dependent synaptic plasticity. Together, the results imply that these molecules, thought previously to function only in the immune system, may also act at neuronal synapses to limit how much, or perhaps how quickly, synapse strength changes in response to new experience. These molecules may be crucial for controlling circuit excitability and stability in the developing as well as the adult brain, and changes in their function may contribute to developmental disorders such as autism, dyslexia, and even schizophrenia.
Supported by NIH Grants EY02858, MH071666, the Mathers Charitable Foundation and the Dana Foundation
Carla Shatz is professor of biology and neurobiology and director of Bio-X at Stanford University. Dr. Shatz’s research focuses on the development of the mammalian visual system, with an overall goal of better understanding critical periods of brain wiring and developmental disorders such as autism, dyslexia and schizophrenia, and also for understanding how the nervous and immune systems interact. Dr. Shatz graduated from Radcliffe College in 1969 with a B.A. in Chemistry. She was honored with a Marshall Scholarship to study at University College London, where she received an M.Phil. in Physiology in 1971. In 1976, she received a Ph.D. in Neurobiology from Harvard Medical School, where she studied with Nobel Laureates David Hubel and Torsten Wiesel. During this period, she was appointed as a Harvard Junior Fellow. From 1976 to 1978 she obtained postdoctoral training with Dr. Pasko Rakic in the Department of Neuroscience, Harvard Medical School. In 1978, Dr. Shatz moved to Stanford University, where she attained the rank of Professor of Neurobiology in 1989. In 1992, she moved her laboratory to the University of California, Berkeley, where she was Professor of Neurobiology and an Investigator of the Howard Hughes Medical Institute. In 2000, she assumed the Chair of the Department of Neurobiology at Harvard Medical School as the Nathan Marsh Pusey Professor of Neurobiology. Dr. Shatz received the Society for Neuroscience Young Investigator Award in 1985, the Silvo Conte Award from the National Foundation for Brain Research in 1993, the Charles A. Dana Award for Pioneering Achievement in Health and Education in 1995, the Alcon Award for Outstanding Contributions to Vision Research in 1997, the Bernard Sachs Award from the Child Neurology Society in 1999, the Weizmann Institute Women and Science Award in 2000 and the Gill Prize in Neuroscience in 2006. 
In 1992, she was elected to the American Academy of Arts and Sciences, in 1995 to the National Academy of Sciences, in 1997 to the American Philosophical Society, and in 1999 to the Institute of Medicine. In 2009 she received the Salpeter Lifetime achievement award from the Society for Neuroscience.
Daniel M. Wolpert
Professor of Engineering, University of Cambridge
Audio and slides from the 2011 Keynote Address are available on the Cambridge Research Systems website.
Probabilistic models of human sensorimotor control
Saturday, May 7, 2011, 7:00 – 8:15 pm, Royal Palm Ballroom 4-5
The effortless ease with which we move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. While computers can now beat grandmasters at chess, no computer can yet control a robot to manipulate a chess piece with the dexterity of a six-year-old child. I will review our recent work on how humans learn to make skilled movements, covering probabilistic models of learning, including Bayesian and structural learning, how the brain makes and uses motor predictions, and the interaction between decision making and sensorimotor control.
Daniel Wolpert is Professor of Engineering at the University of Cambridge and a Fellow of Trinity College. Daniel’s research focuses on computational and experimental approaches to human sensorimotor control. Daniel read medical sciences at Cambridge and clinical medicine at Oxford. After working as a medical doctor for a year he completed a D. Phil. in the Physiology Department in Oxford. He then worked as a postdoctoral fellow and Fulbright Scholar at MIT, before moving to the Institute of Neurology, UCL. In 2005 he took up his current post in Cambridge. He was elected a Fellow of the Academy of Medical Sciences in 2004 and was awarded the Royal Society Francis Crick Prize Lecture (2005) and has given the Fred Kavli Distinguished International Scientist Lecture at the Society for Neuroscience (2009). Further details can be found on www.wolpertlab.com.
Ranulfo Romo, M.D., D.Sc.
Professor of Neuroscience at the Institute of Cellular Physiology, National Autonomous University of Mexico (UNAM)
Audio and slides from the 2012 Keynote Address are available on the Cambridge Research Systems website.
Conversion of sensory signals into perceptual decisions
Saturday, May 12, 2012, 7:00 pm, Royal Ballroom 4-5
Most perceptual tasks require sequential steps to be carried out. This must be the case, for example, when subjects discriminate the difference in frequency between two mechanical vibrations applied sequentially to their fingertips. This perceptual task can be understood as a chain of neural operations: encoding the two consecutive stimulus frequencies, maintaining the first stimulus in working memory, comparing the second stimulus to the memory trace left by the first stimulus, and communicating the result of the comparison to the motor apparatus. Where and how in the brain are these cognitive operations executed? We addressed this problem by recording single neurons from several cortical areas while trained monkeys executed the vibrotactile discrimination task. We found that primary somatosensory cortex (S1) drives higher cortical areas where past and current sensory information are combined, such that a comparison of the two evolves into a decision. Consistent with this result, direct activation of S1 can trigger quantifiable percepts in this task. These findings provide a fairly complete panorama of the neural dynamics that underlie the transformation of sensory information into an action and emphasize the importance of studying multiple cortical areas during the same behavioral task.
Ranulfo Romo is Professor of Neuroscience at the Institute of Cellular Physiology of the National Autonomous University of Mexico (UNAM). He received his M.D. degree from UNAM and a D.Sc. in the field of neuroscience from the University of Paris in France. His postdoctoral work was done with Wolfram Schultz at the University of Fribourg in Switzerland and Vernon Mountcastle at The Johns Hopkins University in Baltimore. Romo has received the Demuth Prize in Neuroscience from the Demuth Foundation, the National Prize on Sciences and Arts from the Mexican government, and the Prize in Basic Medical Sciences from the Academy of Sciences for the Developing World (TWAS). He is a member of the Mexican Academy of Sciences, the Neurosciences Research Program headed by Nobel laureate Gerald Edelman, and a Foreign Associate of the US National Academy of Sciences. Since 1991, Romo has been a Howard Hughes International Research Scholar, and he was recently elected a member of El Colegio Nacional.