2024 Keynote – Dora Biro

Dora Biro

Beverly Petterson Bishop and Charles W. Bishop Professor of Brain and Cognitive Sciences at the University of Rochester.

Dora Biro received her undergraduate and PhD degrees from the University of Oxford and subsequently held a JSPS postdoctoral research fellowship and a visiting professorship at the Primate Research Institute of Kyoto University, Japan, before returning to Oxford as a Royal Society University Research Fellow and later Professor of Animal Behaviour. She is the recipient of a L’Oreal-UNESCO “For Women in Science” fellowship, with research interests centered on animal cognition and collective animal behavior, including navigation, tool use, culture, and collective decision-making.

To learn more about Professor Dora Biro and her research, please visit her website.

Eye in the sky: visually-guided navigation in birds

Saturday, May 18, 2024, 7:15 – 8:15 pm, Talk Room 1-2

Vision is critically important to many aspects of a bird’s life, from finding food to avoiding predators. Correspondingly, birds have evolved the largest eyes relative to body size in the Animal Kingdom, and avian vision benefits from a range of adaptations including tetrachromacy, dual foveas, wide fields of view, and high visual acuity. My research focuses on the role of visual landmarks in avian navigation through familiar landscapes: how do birds perceive and map space using visual information and how do flocks of birds combine their individually acquired knowledge of a complex visual landscape to arrive at directional decisions as a group? I explore these questions using biologging technologies that allow us to track free-flying birds’ travel paths (through on-board miniature GPS) as well as strategies for visually scanning the environment (through head-mounted inertial measurement units), as they navigate home from distant sites either solo or in groups of various sizes and compositions. With these data, we are able to experimentally address a range of questions related to basic processes of perception and cognition (learning and memory), as well as more complex collective outcomes such as collective problem-solving, conflict resolution, collective vigilance, the ‘wisdom of the crowd’, and the cultural accumulation of collective knowledge.

2023 Keynote – Hany Farid

Hany Farid

Electrical Engineering & Computer Sciences and the School of Information, University of California Berkeley

Hany Farid is a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. His research focuses on digital forensics, forensic science, misinformation, and human perception. Dr. Farid received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, a M.S. in Computer Science from SUNY Albany in 1992, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, Hany Farid joined the faculty at Dartmouth College in 1999 where he remained until 2019. Dr. Farid is the recipient of an Alfred P. Sloan Fellowship, a John Simon Guggenheim Fellowship, and is a Fellow of the National Academy of Inventors.

To learn more about Professor Hany Farid and his research, please visit his website.

Creating, (Mis)using, and Detecting Deep Fakes

Saturday, May 20, 2023, 7:15 – 8:15 pm, Talk Room 1-2

Synthetic media – so-called deep fakes – have captured the imagination of some and struck fear in others. These stunningly realistic images, audio, and videos are the product of AI-powered synthesis tools. Although just the latest in a long line of techniques used to manipulate reality, deep fakes pose new opportunities and risks due to their ease of use and their democratized accessibility. I will describe how deep fakes are created, how they are being used and misused, and if and how they can be perceptually and computationally distinguished from reality.

2022 Keynote – Geoffrey Hinton

Coordinate frames and shape perception in neural nets

Wednesday, June 1, 2022, 10:30 – 11:30 am EDT

Geoffrey Hinton

University Professor Emeritus at the University of Toronto; Engineering Fellow at Google Research; and Chief Scientific Adviser at (and co-founder of) the Vector Institute for Artificial Intelligence in Toronto

Geoffrey Hinton received his PhD in Artificial Intelligence from the University of Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon University, he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. He is also a VP and Engineering Fellow at Google and Chief Scientific Adviser at the Vector Institute.

Geoffrey Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification.

Geoffrey Hinton is a fellow of the UK Royal Society and a foreign member of the US National Academy of Engineering and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart prize, the IJCAI award for research excellence, the Killam prize for Engineering, the IEEE Frank Rosenblatt medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold medal, the NEC C&C award, the BBVA award, the Honda Prize and the Turing Award.

To learn more about Professor Geoffrey Hinton and his research, please visit his website.

2021 Keynote – Suzana Herculano-Houzel

Suzana Herculano-Houzel

Associate Professor of Psychology and
Associate Director for Communications, Vanderbilt Brain Institute

Suzana Herculano-Houzel, Ph.D., is a biologist and neuroscientist at Vanderbilt University, where she is Associate Professor in the Departments of Psychology and Biological Sciences. Her research focuses on what different brains are made of; why that matters in terms of cognition, energy cost, and longevity; and how the human brain is remarkable, but not special, in its makeup. She is the author of The Human Advantage (MIT Press, 2016), in which she tells the story of her discoveries on how many neurons different species have—and how the number of neurons in the cerebral cortex of humans is the largest of them all, thanks to the calories amassed with a very early technology developed by our ancestors: cooking. She spoke at TEDGlobal 2013 and TEDxNashville 2018 and is an avid communicator of science to the general public.

To learn more about Professor Herculano-Houzel and her research, please visit her website.

Whatever works: Celebrating diversity in brain scaling and evolution

Saturday, May 22, 2021, 1:00 pm EDT

Animals come in many sizes and shapes, and one would be hard-pressed to say that any one is better than the other, because all of them have passed the test of evolution: they’re here, so they have obviously been good enough. Still, what weighs on the trade-off scale when animals and their brains vary in size? What can be said about scaling of the visual system, in particular? What does it cost to have more neurons? Is it even necessary for larger animals to have more neurons? This talk will tackle the old topic of scaling in a new light that celebrates diversity, rather than assuming that biology is improved through natural selection.

2019 Keynote – William T. Freeman

William T. Freeman

Thomas and Gerd Perkins Professor of Electrical Engineering
and Computer Science, Massachusetts Institute of Technology, Google Research

William T. Freeman is the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) there. He was the Associate Department Head from 2011 to 2014.

Dr. Freeman’s current research interests include machine learning applied to computer vision, Bayesian models of visual perception, and computational photography. He received outstanding paper awards at computer vision or machine learning conferences in 1997, 2006, 2009 and 2012, and test-of-time awards for papers from 1990, 1995 and 2005. Previous research topics include steerable filters and pyramids, orientation histograms, the generic viewpoint assumption, color constancy, computer vision for computer games, and belief propagation in networks with loops.

He is active in the program or organizing committees of computer vision, graphics, and machine learning conferences. He was the program co-chair for ICCV 2005, and for CVPR 2013.

To learn more about Professor Freeman and his research, please visit his website.

Visualizations of imperceptible visual signals

Saturday, May 18, 2019, 7:15 pm, Talk Room 1-2

Many useful visual stimuli are below the threshold of perception.  By amplifying tiny motions and small photometric changes we can reveal a rich world of sub-threshold imagery.

Using an image representation modeled after features of V1, we have developed a “motion microscope” that re-renders a video with the small motions amplified.  I’ll show motion-magnified videos of singers, dancers, bridges, robots, and pipes, revealing properties that are otherwise hidden.  Small photometric changes can also be measured and amplified.  This can reveal the human pulse on skin, or people moving in an adjacent room.

Unseen intensity changes also occur when an occluder modulates light from a scene, creating an “accidental camera”.  I’ll describe the invisible signals caused by corners and plants, and show how they can reveal imagery that is otherwise out of view.

I’ll close by describing my white whale, the Earth selfie.  This is an effort to photograph the Earth from space with ground-based equipment by using the Moon as a camera.  I’ll explain why this project matters, and will summarize recent progress.

2018 Keynote – Kenneth C. Catania

Kenneth C. Catania

Stevenson Professor of Biological Sciences
Vanderbilt University
Department of Biological Sciences

More than meets the eye: the extraordinary brains and behaviors of specialized predators

Saturday, May 19, 2018, 7:15 pm, Talk Room 1-2

Predator-prey interactions are high stakes for both participants and have resulted in the evolution of high-acuity senses and dramatic attack and escape behaviors.  I will describe the neurobiology and behavior of some extreme predators, including star-nosed moles, tentacled snakes, and electric eels.  Each species has evolved special senses and each provides unique perspectives on the evolution of brains and behavior.


A neuroscientist by training, Ken Catania has spent much of his career investigating the unusual brains and behaviors of specialized animals.  These have included star-nosed moles, tentacled snakes, water shrews, alligators, crocodiles, and most recently electric eels. His studies often focus on predators that have evolved special senses and weapons to find and overcome elusive prey.  He is considered an expert in extreme animal behaviors and studies specialized species to reveal general principles about brain organization and sensory systems. Catania was named a MacArthur Fellow in 2006, a Guggenheim Fellow in 2014, and in 2013 he received the Pradel Research Award in Neurosciences from the National Academy of Sciences.  Catania received a BS in zoology from the University of Maryland (1989), a Ph.D. (1994) in neurosciences from the University of California, San Diego, and is currently a Stevenson Professor of Biological Sciences at Vanderbilt University.

2017 Keynote – Katherine J. Kuchenbecker

Katherine J. Kuchenbecker

Director of the new Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany

Associate Professor (on leave), Mechanical Engineering and Applied Mechanics Department, University of Pennsylvania, Philadelphia, USA

Haptography: Capturing and Displaying Touch

Saturday, May 20, 2017, 7:15 pm, Talk Room 1-2

When you touch objects in your surroundings, you can discern each item’s physical properties from the rich array of haptic cues that you feel, including both the tactile sensations in your skin and the kinesthetic cues from your muscles and joints. Although physical interaction with the world is at the core of human experience, very few robotic and computer interfaces provide the user with high-fidelity touch feedback, limiting their intuitiveness. By way of two detailed examples, this talk will describe the approach of haptography, which uses biomimetic sensors and signal processing to capture tactile sensations, plus novel algorithms and actuation systems to display realistic touch cues to the user. First, we invented a novel way to map deformations and vibrations sensed by a robotic fingertip to the actuation of a fingertip tactile display in real time. We then demonstrated the striking utility of such cues in a simulated tissue palpation task through integration with a da Vinci surgical robot. Second, we created the world’s most realistic haptic virtual surfaces by recording and modeling what a user feels when touching real objects with an instrumented stylus. The perceptual effects of displaying the resulting data-driven friction forces, tapping transients, and texture vibrations were quantified by having users compare the original surfaces to their virtual versions. While much work remains to be done, we are starting to see the tantalizing potential of systems that leverage tactile cues to allow a user to interact with distant or virtual environments as though they were real and within reach.


Katherine J. Kuchenbecker is Director of the new Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She is currently on leave from her appointment as Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, where she held the Class of 1940 Bicentennial Endowed Term Chair and a secondary appointment in Computer and Information Science. Kuchenbecker earned a PhD (2006) in Mechanical Engineering at Stanford University and was a postdoctoral fellow at the Johns Hopkins University before joining the faculty at Penn in 2007. Her research centers on haptic interfaces, which enable a user to touch virtual and distant objects as though they were real and within reach, as well as haptic sensing systems, which allow robots to physically interact with and feel real objects. She delivered a widely viewed TEDYouth talk on haptics in 2012, and she has received several honors including a 2009 NSF CAREER Award, the 2012 IEEE Robotics and Automation Society Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, and many best paper and best demonstration awards.

2016 Keynote – Sabine Kastner

Sabine Kastner, Ph.D.

Professor of Neuroscience and Psychology in the Princeton Neuroscience Institute and Department of Psychology

Neural dynamics of the primate attention network

Saturday, May 14, 2016, 7:15 pm, Talk Room 1-2

The selection of information from our cluttered sensory environments is one of the most fundamental cognitive operations performed by the primate brain. In the visual domain, the selection process is thought to be mediated by a static spatial mechanism – a ‘spotlight’ that can be flexibly shifted around the visual scene. This spatial search mechanism has been associated with a large-scale network that consists of multiple nodes distributed across all major cortical lobes and also includes subcortical regions.  Identifying the specific functions of each network node and their functional interactions is a major goal for the field of cognitive neuroscience.  In my lecture, I will challenge two common notions of attention research.  First, I will show behavioral and neural evidence that the attentional spotlight is neither stationary nor unitary. In the appropriate behavioral context, even when spatial attention is sustained at a given location, additional spatial mechanisms operate flexibly and automatically in parallel to monitor the visual environment. Second, spatial attention is assumed to be under ‘top-down’ control of higher-order cortex. In contrast, I will provide neural evidence indicating that attentional control is exerted through thalamo-cortical interactions.  Together, this evidence indicates the need for major revisions of traditional accounts of attention.


Sabine Kastner is a Professor of Neuroscience and Psychology in the Princeton Neuroscience Institute and Department of Psychology. She also serves as the Scientific Director of Princeton’s neuroimaging facility and heads the Neuroscience of Attention and Perception Laboratory. Kastner earned M.D. (1993) and Ph.D. (1994) degrees and received postdoctoral training at the Max Planck Institute for Biophysical Chemistry and NIMH before joining the faculty at Princeton University in 2000. She studies the neural basis of visual perception, attention, and awareness in the primate brain; has published more than 100 articles in journals and books; and co-edited the ‘Handbook of Attention’ (OUP), published in 2013. Kastner serves on several editorial boards and is currently an editor at eLife. Kastner enjoys a number of outreach activities, such as fostering the careers of young women in science (Young Women’s Science Fair, Synapse project), promoting neuroscience in schools (Saturday Science lectures, science projects in elementary schools, chief editor of the Understanding Neuroscience section of Frontiers for Young Minds), and exploring intersections of neuroscience and art (events at The Kitchen and the Rubin Museum in NYC).

2009 Keynote – Robert H. Wurtz

Robert H. Wurtz


Laboratory of Sensorimotor Research, National Eye Institute, NIH, Bethesda, MD
NIH Distinguished Scientist and Chief of the Section on Visuomotor Integration at the National Eye Institute

Audio and slides from the 2009 Keynote Address are available on the Cambridge Research Systems website.

Brain Circuits for Stable Visual Perception

Saturday, May 9, 2009, 7:30 pm, Royal Palm Ballroom

In the 19th century von Helmholtz detailed the need for signals in the brain that provide information about each impending eye movement.  He argued that such signals could interact with the visual input from the eye to preserve stable visual perception in spite of the incessant saccadic eye movements that continually displace the image of the visual world on the retina.  In the 20th century, Sperry as well as von Holst and Mittelstaedt provided experimental evidence in fish and flies for such signals for the internal monitoring of movement, signals they termed corollary discharge or efference copy, respectively.  Experiments in the last decade (reviewed by Sommer and Wurtz, 2008) have established a corollary discharge pathway in the monkey brain that accompanies saccadic eye movements.  This corollary activity originates in the superior colliculus and is transmitted to frontal cortex through the major thalamic nucleus related to frontal cortex, the medial dorsal nucleus.  The corollary discharge has been demonstrated to contribute to the programming of saccades when visual guidance is not available. It might also provide the internal movement signal invoked by Helmholtz to produce stable visual perception.  A specific neuronal mechanism for such stability was proposed by Duhamel, Colby, and Goldberg (1992) based upon their observation that neurons in monkey frontal cortex shifted the location of their maximal sensitivity with each impending saccade.  Such shifting receptive fields must depend on input from a corollary discharge, and this is just the input to frontal cortex recently identified.  Inactivating the corollary discharge to frontal cortex at its thalamic relay produced a reduction in the shift.  
This dependence of the shifting receptive fields on an identified corollary discharge provides direct experimental evidence for modulation of visual processing by a signal within the brain related to the generation of movement – an interaction proposed by Helmholtz for maintaining stable visual perception.


Robert H. Wurtz is a NIH Distinguished Scientist and Chief of the Section on Visuomotor Integration at the National Eye Institute. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and has received many awards. His work is centered on the visual and oculomotor system of the primate brain that controls the generation of rapid or saccadic eye movements, and the use of the monkey as a model of human visual perception and the control of movement. His recent work has concentrated on the inputs to the cerebral cortex that underlie visual attention and the stability of visual perception.

Vision Sciences Society