2019 Davida Teller Award – Barbara Dosher

The Vision Sciences Society is honored to present Dr. Barbara Dosher with the 2019 Davida Teller Award.

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding female vision scientist in recognition of her exceptional, lasting contributions to the field of vision science.

Barbara Dosher

Distinguished Professor, University of California, Irvine

Barbara Dosher is a researcher in the areas of visual attention and learning. She received her PhD in 1977 from the University of Oregon and served on the faculty at Columbia University (1977–1992) and the University of California, Irvine (1992–present). Early in her career, she investigated the temporal properties of retrieval from long-term and working memory, and of priming, using pioneering speed-accuracy tradeoff methods. She then transitioned to work largely in vision, bringing concepts of cue combination from memory research to initiate work on combining cues in visual perception. This was followed by the development of observer models based on external noise methods, which became the basis for proposing that changing templates, stimulus amplification, and noise filtering are the primary functions of attention. This and related work then constrained and motivated new generative network models of visual perceptual learning, which have been used to understand the roles of feedback in unsupervised and supervised learning, the induction of bias in perception, and the central contribution of reweighting evidence for a decision in visual learning.

Barbara Dosher is an elected member of the Society of Experimental Psychologists and the National Academy of Sciences, and is a recipient of the Howard Crosby Warren Medal (2013) and the Atkinson Prize (2018).

Learning and Attention in Visual Perception

Dr. Dosher will speak during the Awards session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2.

Visual perception operates within a dynamic system that is shaped by experience and by top-down goals and strategies. Both learning and attention can improve perception that is limited by the noisiness of internal visual processes and by noise in the environment. This brief talk will present several examples of how learning and attention improve how well we see by amplifying relevant stimuli while filtering out others, and of how important it is to model the coding or transformation of early features when developing truly generative quantitative models of perceptual performance.