Monday, May 18, 2026, 4:00 – 5:30 pm, Banyan Breezeway
Students and postdocs are invited to the 11th annual “Meet the Professors” event. This year’s event will follow a similar format to last year’s. There will be up to five short, 15-minute meetings in small groups. Chat about science, VSS, career issues, work/life balance, or whatever comes up. Or just connect with a new VSS colleague.
Space will be limited and assigned on a first-come, first-served basis. Each student/postdoc will meet with five professors. See below for this year’s professors.
If you would like to attend Meet the Professors, please complete this Registration Form. Registration will close on April 17, 2026, or when all spaces are filled.
Participating Professors
David Brainard (RRL Professor of Psychology, University of Pennsylvania) Human physiological optics, retinal imaging, color vision, psychophysical performance, and models thereof.
Angela Brown (Professor, The Ohio State University) Prof. Angela Brown and Prof. Delwin Lindsey are a married couple who work collaboratively on a program of research on color cognition. They are currently using fMRI to study neurophysiological correlates of color appearance. They are also studying how humans understand color and communicate about color, using behavioral, cross-cultural, and computational approaches. New projects compare the naming and classification of color to other surface properties, such as texture, to reach a more general understanding of how people perceive, classify, and name the material qualities of objects.
Marisa Carrasco (Julius Silver Professor of Psychology and Neural Science at NYU) investigates psychological and neural mechanisms of visual perception and attention. Her laboratory integrates human psychophysics, neuroimaging, neurostimulation, and computational modeling to characterize how these processes modulate visual performance.
Brad Duchaine (Professor, Dartmouth College) My research focuses on face perception. I’m interested in cognitive, neural, and developmental questions. Testing neuropsychological participants with prosopagnosia and prosopometamorphopsia has been my primary method, but I also run studies in typical participants that use psychophysics, fMRI, and eye-tracking.
Miguel Eckstein (Professor of Psychological & Brain Sciences, University of California, Santa Barbara) studies attention, search, eye movements, learning, and face and medical image perception using psychophysics, Bayesian computational modeling, deep neural networks, and EEG/fMRI techniques. He worked at Cedars-Sinai Medical Center and NASA Ames before joining UC Santa Barbara.
Sabrina Hansmann-Roth (Assistant Professor, University of Iceland) Sabrina is an Assistant Professor at the Icelandic Vision Lab at the University of Iceland in Reykjavik. Her lab uses behavioural methods and computational modelling to investigate mechanisms of visual memory, material perception, and peripheral vision, with clinical applications.
Krystel Huxlin (James V. Aquavella Professor of Ophthalmology, University of Rochester) Dr. Huxlin’s research seeks to understand how visual functions can be restored after cortical damage in adulthood. She studies human patients and animal models of visual cortical damage using tools that include psychophysics, fMRI, and cell and molecular biology.
Rachel Jack (Professor, University of Glasgow) I study the perception of dynamic facial expressions within and across cultures using an interdisciplinary approach combining psychophysics, social psychology, dynamic 3D computer graphics, and communication/information theory. My work has challenged the dominant view that six basic facial expressions are culturally universal by revealing cultural specificities in facial expressions, showing that four, not six, expressive patterns are common across cultures, and demonstrating that facial expressions transmit information in a hierarchical structure over time. My work now informs the design of artificial agents.
Daniel Kaiser (Professor for Neural Computation, Justus Liebig University Giessen) My research investigates how our brain processes natural visual information contained in complex scenes or videos. I am interested in (a) how the brain organizes object information in complex scenes, (b) how feedforward and feedback information flows contribute to scene processing, (c) how individuals perceive natural inputs in different ways, and (d) how brain activity relates to the subjective liking of visual inputs. In my research, I use multivariate analyses of fMRI and EEG data, neurostimulation, and deep neural network models.
Kohitij Kar (Assistant Professor, York University) Dr. Kohitij Kar, an Assistant Professor in Biology at York University and Canada Research Chair in Visual Neuroscience, explores the intersection of visual intelligence and artificial intelligence. Named a Future Leader in Canadian Brain Research in 2022, Dr. Kar previously worked at MIT’s McGovern Institute with Dr. James DiCarlo. His research integrates neurophysiological studies of non-human primates with computational models to uncover visual processing mechanisms. Dr. Kar is also developing a non-human primate model of autism to advance neuroscience and AI applications.
Delwin Lindsey (Professor, The Ohio State University) Dr. Delwin Lindsey and Dr. Angela Brown are a married couple who work collaboratively on a program of research focused on color cognition. They use neurophysiological and behavioral approaches to study color appearance, and they use cross-cultural and computational approaches to study how humans understand and communicate about color. Currently, they are studying fMRI correlates of color appearance, and the naming and classification of material properties other than color, such as object surface texture, to reach a more general understanding of how people perceive, classify, and name the material qualities of objects.
Kristina Nielsen (Associate Professor, Johns Hopkins University) My lab works on the development and function of higher visual cortex. In terms of development, we focus on questions like when certain visual functions develop and how that development is organized across multiple visual areas. We are now also beginning to investigate developmental disorders like amblyopia. In adults, we focus on recovery from visual stroke. All of this work is done in animals, using tools like extracellular recordings, two-photon imaging and behavior.
Jennifer O’Brien (Associate Professor, University of South Florida) My research focuses on human attention and factors that impact available attentional resources. Early in my career, these “factors” fell into the category of motivation and rewards/punishments. Over more recent years, my focus has shifted to declines in attention during healthy and abnormal aging and how training attentional mechanisms may slow or ameliorate decline. I am the PI of a multi-site, NIH-funded clinical trial evaluating the effectiveness of computerized cognitive (attention) training on reducing the incidence of mild cognitive impairment or dementia in healthy older adults.
Philippe Schyns (Professor, University of Glasgow) My lab investigates face, object, and scene recognition to uncover how the brain perceives and categorizes the world. Leveraging generative modeling to control the features of faces, objects, and scenes, we reveal where, when, and how the brain represents and computes these features during perception and categorization tasks. A key strength of our approach is testing the alignment between brain computations and Deep Neural Networks (DNNs) across three levels: response equivalence (same outputs), feature equivalence (same processed features), and algorithmic equivalence (same computations).
Aaron Seitz (Professor, Northeastern University) Seitz’s research program aims to understand mechanisms of learning and to apply this knowledge for public benefit. His research has led to new insights regarding the roles of reinforcement, attention, multisensory interactions, and different brain systems in learning, computational approaches to learning, translational neuroscience and perceptual/cognitive enhancement, among others.
Sarah Shomstein (Professor, George Washington University) My research addresses two main questions about attentional selection. The first question concerns the representations, or units, from which selection occurs, and this line of research focuses primarily on the behavioral and neural correlates of attentional selection. The second question concerns the computations involved in selection per se, and this research investigates the neural source of the attentional signal and the impact this signal exerts on the neural trace of the sensory stimulus before and after it has been attentionally selected. To explore these issues, I employ multiple methodologies (psychophysics, neuroimaging, eye-tracking, etc.).
Caglar Tas (Assistant Professor, University of Tennessee, Knoxville) My lab studies perceptual and memory processes across saccadic eye movements with the aim of understanding how transsaccadic visual stability is achieved.
Maryam Vaziri-Pashkam (Assistant Professor, University of Delaware) I am a cognitive neuroscientist interested in the intersection of visual cognition and action. My research aims to advance our understanding of the computational and neural mechanisms that enable real-time interaction with objects and people. To do this, I combine multiple methodologies, including body movement tracking, collection and analysis of large datasets of human behavior in naturalistic settings, neuroimaging, and computational methods. My studies bridge traditional field boundaries and link cognitive, social, and motor neuroscience.
Jonathan Victor (Professor, Weill Cornell Medical College) Our lab uses psychophysical and mathematical approaches to study spatial vision, especially visual texture, and the structure of perceptual spaces. Collaborative work with Michele Rucci centers on active sensation in vision; collaborations in the Odor2Action group focus on active sensation in olfaction.
Jeremy Wilmer (Professor of Psychology, Wellesley College) I have spent the past 17 years teaching and conducting research at Wellesley College, an undergraduate-only liberal arts college, and I’m always glad to discuss the distinct joys and opportunities of such an institution. My research probes individual differences in vision and cognition and seeks to establish and disseminate best-practices in visual data communication. I am the founder of showmydata.org, a co-leader of testmybrain.org, and my graph interpretation research was covered here: https://www.sciencefriday.com/segments/bar-graph. A particular focus of my lab over time has been the creation and validation of new measures. A driving thesis of our data visualization work is that the concreteness of individual data points adds accessibility and impact to graphs, even for non-expert audiences.
Benjamin Wolfe (Assistant Professor, University of Toronto Mississauga) I am the Director of the Applied Perception and Psychophysics Lab (www.applylab.org), and my work is use-inspired vision science; I’m interested in how we use vision in the world, particularly peripheral vision, scene perception, eye movements, and visual attention. Most of the lab’s work focuses on driving and how drivers learn about the road environment, and on readability, or how the appearance of text can change to help each of us read more efficiently. I was originally trained as a psychophysicist, and I now do a mixture of human factors work with engineers and fundamental work in vision science.
Jeremy Wolfe (Professor, Brigham & Women’s Hospital / Harvard Medical School) I run the Visual Attention Lab at Brigham and Women’s Hospital. My expertise is in vision and visual attention. My research focuses on visual search with a particular interest in socially important search tasks in areas such as medical image perception. How do you find what you are looking for and how can you miss what is right in front of your eyes? For present purposes, it may be relevant that I have been a journal editor (APP & CRPI) and I have ‘survived’ on soft money (grant funding) for 30+ years.
Li Zhaoping (Professor, University of Tuebingen, and Max Planck Institute for Biological Cybernetics) My research focuses on understanding the how’s and why’s of vision using Systems Vision Science approaches. Since the early 1990s, I have been trying to combine computational principles, neural mechanisms, and visual behavior to understand vision, starting from the retina, then V1, and currently V1 and beyond. In the late 1990s, I proposed the V1 Saliency Hypothesis (V1SH) to understand V1’s functional role, and progress on V1SH has led to the Central-Peripheral Dichotomy theory starting in the 2010s. These theories have provided insights into, or accounts of, existing neural and behavioral data, as well as new discoveries through theoretical predictions. I am also interested in teaching Systems Vision Science: I have written a textbook, “Understanding Vision: Theory, Models, and Data,” I offer free online courses on vision, and I lead the organization of an annual summer school on systems vision science. Here are some of my video seminars and here is a very short 3-minute summary.
Registration
Please use our online Meet the Professors Registration Form. Online registration closes on April 17, 2026.