Time: Saturday, May 16, 7:15 pm, Talk Room 1-2
Vision in brains and machines
The past twenty years have seen important advances both in our understanding of visual representation in brains and in the development of algorithms that enable machines to 'see.' What is perhaps most remarkable about these advances is how they emerged from the confluence of ideas from different disciplines: findings from signal analysis and statistics shed new light on the possible coding principles underlying image representations in visual cortex, and cortical models in turn inspired the development of multilayer neural network architectures that are now achieving breakthrough performance on object recognition tasks (deep learning). Here I shall review these developments, and I shall discuss what further insights stand to be gained from this cross-fertilization of ideas.
Bruno Olshausen received B.S. and M.S. degrees in electrical engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. From 1996 to 2005 he was Assistant and subsequently Associate Professor in the Departments of Psychology and Neurobiology, Physiology and Behavior at UC Davis. Since 2005 he has been at UC Berkeley, where he is currently Professor in the Helen Wills Neuroscience Institute and School of Optometry.
He also serves as Director of the Redwood Center for Theoretical Neuroscience, an interdisciplinary research group focusing on mathematical and computational models of brain function. Olshausen's research aims to understand the information processing strategies employed by the brain for doing tasks such as object recognition and scene analysis.