ARVO@VSS 2016

Information processing in a simple network: What the humble retina tells the brain

Time/Room: Friday, May 13, 2016, 5:00 – 7:00 pm, Talk Room 1-2
Organizers: Scott Nawy, PhD, University of Nebraska Medical Center and Anthony Norcia, Stanford University
Presenters: Greg Field, Michael Crair, William Guido, Wei Wei


This year’s biennial ARVO at VSS symposium features a selection of recent work on circuit-level analyses of retinal, thalamic and collicular systems that are relevant to understanding the cortical mechanisms of vision. The speakers deploy a range of state-of-the-art methods that bring an unprecedented level of precision to dissecting these important visual circuits.

Circuitry and computation in the mammalian retina.

Speaker: Greg Field; USC

The mammalian retina is composed of ~80 distinct neuronal cell types. These neurons work in concert to parcel visual information into ~30 different retinal ganglion cell (RGC) types, each of which transmits a different message about the visual scene to the brain. I will describe ongoing work in my lab to define the functional role of different cell types in the mammalian retina via the combination of large-scale multi-electrode array recordings and chemogenetic manipulation of genetically defined cell types. This combination of approaches is revealing the specialized roles played by different cell types in encoding visual scenes for perception and behavior.

Retinal activity guides visual circuit development prior to sensory experience

Speaker: Michael C. Crair; Yale

Classic models emphasize an important role of sensory experience in the development of visual circuits in the mammalian brain. However, recent evidence indicates that fundamental features of visual circuits in the thalamus, cortex and superior colliculus emerge before the onset of form vision. I will summarize our latest experiments that use in vivo optical imaging techniques and molecular-genetic manipulations in mice to demonstrate that spontaneous retinal activity, generated prior to vision, plays an essential role in sculpting the development of visual circuits in the mammalian brain.

Dissecting circuits in the mouse visual thalamus.

Speaker: William Guido; University of Louisville

The contemporary view of the dorsal lateral geniculate nucleus (dLGN) of thalamus is that of a visual relay, where the gain of signal transmission is modulated by a diverse set of inputs that arise from non-retinal sources. I will highlight our recent studies in the mouse, an animal model that provides unprecedented access into the circuitry underlying these operations.

Neural mechanisms of direction selectivity in the retina

Speaker: Wei Wei; Department of Neurobiology, The University of Chicago
Authors: Qiang Chen, David Koren and Wei Wei, Department of Neurobiology, The University of Chicago

The direction selective circuit in the retina computes motion directions and conveys this information to higher brain areas via the spiking activity of direction selective ganglion cells. While multiple synaptic mechanisms have been implicated in the generation of direction selectivity in the retina, it is unclear how each individual mechanism modulates the firing patterns of direction selective ganglion cells. Here, we aim to unambiguously differentiate the contributions of distinct circuit components to direction selectivity by loss-of-function studies using genetic, electrophysiological and functional imaging methods. Our results highlight the concerted actions of synaptic and cell-intrinsic mechanisms required for robust direction selectivity in the retina, and provide critical insights into how patterned excitation and inhibition collectively implement sensory processing in the brain.


VSS Logos

VSS_Logo_300.jpg
300 pixel JPG image, 30 KB; color, with shadow, white background
1100 pixel JPG image, 110 KB; color, with shadow, white background

VSS_logo_color.png
1100 pixel PNG image, 110 KB; color, no shadow, transparent background

VSS_logo_color_white.png
1100 pixel PNG image, 110 KB; color, white text, no shadow, transparent background

VSS_logo.png
1100 pixel PNG image, 110 KB; no color, no shadow, transparent background

VSS_logo.ai
Adobe Illustrator file, 1.5 MB; color, with shadow, white background

VSS_logo_white.ai
Adobe Illustrator file, 1.0 MB; color, no shadow, white text

VSS_logo.eps
Encapsulated PostScript file, 1.4 MB; color, with shadow, white background

VSS_logo.pdf
Adobe Portable Document Format file, 1.1 MB; color, with shadow, white background

2016 Keynote – Sabine Kastner

Sabine Kastner, Ph.D.

Professor of Neuroscience and Psychology in the Princeton Neuroscience Institute and Department of Psychology
Website

Neural dynamics of the primate attention network

Saturday, May 14, 2016, 7:15 pm, Talk Room 1-2

The selection of information from our cluttered sensory environments is one of the most fundamental cognitive operations performed by the primate brain. In the visual domain, the selection process is thought to be mediated by a static spatial mechanism – a ‘spotlight’ that can be flexibly shifted around the visual scene. This spatial search mechanism has been associated with a large-scale network that consists of multiple nodes distributed across all major cortical lobes and also includes subcortical regions. Identifying the specific functions of each network node and their functional interactions is a major goal for the field of cognitive neuroscience. In my lecture, I will challenge two common notions of attention research. First, I will show behavioral and neural evidence that the attentional spotlight is neither stationary nor unitary. In the appropriate behavioral context, even when spatial attention is sustained at a given location, additional spatial mechanisms operate flexibly and automatically in parallel to monitor the visual environment. Second, spatial attention is commonly assumed to be under ‘top-down’ control of higher-order cortex. In contrast, I will provide neural evidence indicating that attentional control is exerted through thalamo-cortical interactions. Together, this evidence indicates the need for major revisions of traditional attention accounts.

Biography

Sabine Kastner is a Professor of Neuroscience and Psychology in the Princeton Neuroscience Institute and Department of Psychology. She also serves as the Scientific Director of Princeton’s neuroimaging facility and heads the Neuroscience of Attention and Perception Laboratory. Kastner earned M.D. (1993) and Ph.D. (1994) degrees and received postdoctoral training at the Max Planck Institute for Biophysical Chemistry and NIMH before joining the faculty at Princeton University in 2000. She studies the neural basis of visual perception, attention, and awareness in the primate brain, has published more than 100 articles in journals and books, and has co-edited the ‘Handbook of Attention’ (OUP), published in 2013. Kastner serves on several editorial boards and is currently an editor at eLife. Kastner enjoys a number of outreach activities, such as fostering the careers of young women in science (Young Women’s Science Fair, Synapse project), promoting neuroscience in schools (Saturday Science lectures, science projects in elementary schools, chief editor of the understanding neuroscience section of Frontiers for Young Minds) and exploring intersections of neuroscience and art (events at The Kitchen and the Rubin Museum in NYC).

Recipient of the 2013 Davida Teller Award

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding woman vision scientist with a strong history of mentoring.

Vision Sciences Society is honored to announce Dr. Eileen Kowler as the inaugural recipient of the 2013 Davida Teller Award.

Eileen Kowler

Department of Psychology, Rutgers University

Dr. Eileen Kowler, Professor at Rutgers University, is the inaugural winner of the VSS Davida Teller Award. Eileen transformed the field of eye movement research by showing that eye movements are not reflexive visuomotor responses, but are driven by and tightly linked to attention, prediction, and cognition.

Perhaps the most significant scientific contribution by Eileen was the demonstration that saccadic eye movements and visual perception share attentional resources. This seminal paper has become the starting point for hundreds of subsequent studies about vision and eye movements. By convincingly demonstrating that the preparation of eye movements shares resources with the allocation of visual attention, this paper also established the validity of using eye movements as a powerful tool for investigating the mechanisms of visual attention and perception, which provides a precision and reliability that is otherwise difficult, if not impossible, to achieve. This work forms the basis of most of the work on eye movements that is presented at VSS every year!

Before her landmark studies on saccades and attention, Eileen made a major contribution by showing that cognitive expectations exert strong influences on smooth pursuit eye movements. At that time smooth pursuit eye movements were thought to be driven in a machine-like fashion by retinal error signals. Eileen’s wonderfully creative experiments (e.g., pursuit targets moving through Y-shaped tubes) convinced the field that smooth pursuit is guided in part by higher-level visual processes related to expectations, memory, and cognition.

Anticipatory behavior of human eye movements

Monday, May 13, 1:00 pm, Royal Palm 4-5

The planning and control of eye movements is one of the most important tasks accomplished by the brain because of the close connection between eye movements and visual function.   Classical approaches assumed that eye movements are solely or primarily reactions to one or another type of sensory cue, but we now know that eye movements also display anticipatory responses to predicted signals or events. This talk will illustrate several examples of anticipatory behavior of both smooth pursuit eye movements and saccades.   These anticipatory responses are automatic and effortless, depend on the decoding of symbolic environmental cues and on memory for recent events, and can be found in typical individuals and in those with autism spectrum disorder.   Anticipatory responses show that oculomotor control is driven by internal models that take into account both the capacity limits of the motor system and the states of the surrounding visual environment.


What do deep neural networks tell us about biological vision?

Time/Room: Friday, May 13, 2016, 2:30 – 4:30 pm, Talk Room 1-2
Organizer(s): Radoslaw Martin Cichy; Department of Psychology and Education, Free University Berlin, Berlin, Germany
Presenters: Kendrick Kay, Seyed-Mahdi Khaligh-Razavi, Daniel Yamins, Radoslaw Martin Cichy, Tomoyasu Horikawa, Kandan Ramakrishnan


Symposium Description

Visual cognition in humans is mediated by complex, hierarchical, multi-stage processing of visual information, propagated rapidly as neural activity in a distributed network of cortical regions. Understanding visual cognition in cortex thus requires a predictive and quantitative model that captures the complexity of the underlying spatio-temporal dynamics and explains human behavior. Very recently, brain-inspired deep neural networks (DNNs) have taken center stage as an artificial computational model for understanding human visual cognition. A major reason for their emerging dominance is that DNNs achieve near human-level performance on tasks such as object recognition (Russakovsky et al., 2014). While DNNs were initially developed by computer scientists to solve engineering problems, research comparing visual representations in DNNs and primate brains has found a striking correspondence, creating excitement in vision research (Kriegeskorte, 2015, Annual Review of Vision Science; Bruno Olshausen’s VSS 2014 keynote; Jones, 2014, Nature). The aim of this symposium is three-fold. The first aim is to describe cutting-edge research efforts that use DNNs to understand human visual cognition. A second aim is to establish which results reproduce across studies and thus create common ground for further research. A third aim is to provide a venue for critical discussion of the theoretical implications of the results. To introduce and frame the debate for a wide audience, Kendrick Kay will begin with a thorough introduction to the DNN approach and formulate questions and challenges to which the individual speakers will respond in their talks. The individual talks will report on recent DNN-related biological vision research. The talks will cover a wide range of results: brain data recorded in different species (human, monkey), with different techniques (electrophysiology, fMRI, M/EEG), for static as well as movie stimuli, using a wide range of analysis techniques (decoding and encoding models, representational similarity analysis). Major questions addressed will be: What do DNNs tell us about visual processing in the brain? What is the theoretical impact of finding a correspondence between DNNs and representations in human brains? Do these insights extend to visual cognition such as imagery? What analysis techniques and methods are available to relate DNNs to human brain function? What novel insights can be gained from comparing DNNs to human brains? What effects reproduce across studies? A final 20-minute open discussion between the speakers and the audience will close the symposium, addressing what aims the DNN approach has already reached, where it fails, what future challenges lie ahead, and how to tackle them. As DNNs address visual processing across low-, mid- and high-level vision, we believe this symposium will be of interest to a broad audience, including students, postdocs and faculty. This symposium is a grass-roots, first-author-based effort, bringing together junior researchers from around the world (US, Germany, Netherlands, and Japan).

Presentations

What are deep neural networks and what are they good for?

Speaker: Kendrick Kay; Center for Magnetic Resonance Research, University of Minnesota, Twin Cities

In this talk, I will provide a brief introduction to deep neural networks (DNNs) and discuss their usefulness with respect to modeling and understanding visual processing in the brain. To assess the potential benefits of DNN models, it is important to step back and consider generally the purpose of computational modeling and how computational models and experimental data should be integrated. Is the only goal to match experimental data? Or should we derive understanding from computational models? What kinds of information can be derived from a computational model that cannot be derived through simpler analyses? Given that DNN models can be quite complex, it is also important to consider how to interpret these models. Is it possible to identify the key feature of a DNN model that is responsible for a specific experimental effect? Is it useful to perform ‘in silico’ experiments with a DNN model? Should we strive to perform meta-modeling, that is, developing a (simple) model of a (complex DNN) model in order to help understand the latter? I will discuss these and related issues in the context of DNN models and compare DNN modeling to an alternative modeling approach that I have pursued in past research.

Mixing deep neural network features to explain brain representations

Speaker: Seyed-Mahdi Khaligh-Razavi; CSAIL, MIT, MA, USA
Authors: Linda Henriksson, Department of Neuroscience and Biomedical Engineering, Aalto University, Aalto, Finland; Kendrick Kay, Center for Magnetic Resonance Research, University of Minnesota, Twin Cities; Nikolaus Kriegeskorte, MRC-CBU, University of Cambridge, UK

Higher visual areas present a difficult explanatory challenge and can be better studied by considering the transformation of representations across the stages of the visual hierarchy from lower- to higher-level visual areas. We investigated the progress of visual information through the hierarchy of visual cortex by comparing the representational geometry of several brain regions with a wide range of object-vision models, ranging from unsupervised to supervised, and from shallow to deep models. The shallow unsupervised models tended to have higher correlations with early visual areas; and the deep supervised models were more correlated with higher visual areas. We also presented a new framework for assessing the pattern-similarity of models with brain areas, mixed representational similarity analysis (RSA), which bridges the gap between RSA and voxel-receptive-field modelling, both of which have been used separately but not in combination in previous studies (Kriegeskorte et al., 2008a; Nili et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Kay et al., 2008, 2013). Using mixed RSA, we evaluated the performance of many models and several brain areas. We show that higher visual representations (i.e. lateral occipital region, inferior temporal cortex) were best explained by the higher layers of a deep convolutional network after appropriate mixing and weighting of its feature set. This shows that deep neural network features form the essential basis for explaining the representational geometry of higher visual areas.
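For readers less familiar with the method: the core of RSA is to characterize a model layer or brain region by its representational dissimilarity matrix (RDM) over stimuli and then compare RDMs. The short Python sketch below illustrates only this basic comparison; the variable names (model_features, voxel_responses) are hypothetical placeholders, and the mixed-RSA variant described above additionally fits weights over the model features before the model RDM is computed.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed representational dissimilarity matrix:
    1 - Pearson correlation between the response patterns of each stimulus pair."""
    return pdist(responses, metric="correlation")

def rsa_score(model_features, voxel_responses):
    """Spearman correlation between model and brain RDMs.
    model_features:  (n_stimuli, n_features) activations of one DNN layer (hypothetical input)
    voxel_responses: (n_stimuli, n_voxels) fMRI patterns for one region (hypothetical input)"""
    rho, _ = spearmanr(rdm(model_features), rdm(voxel_responses))
    return rho

# Toy example with random data standing in for real measurements.
rng = np.random.default_rng(0)
print(rsa_score(rng.standard_normal((92, 4096)), rng.standard_normal((92, 300))))
```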

Using DNNs To Compare Visual and Auditory Cortex

Speaker: Daniel Yamins; Department of Brain and Cognitive Sciences, MIT, MA, USA
Authors: Alex Kell, Department of Brain and Cognitive Sciences, MIT, MA, USA

A slew of recent studies have shown how deep neural networks (DNNs) optimized for visual tasks make effective models of neural response patterns in the ventral visual stream. Analogous results have also been discovered in auditory cortex, where optimizing DNNs for speech-recognition tasks has produced quantitatively accurate models of neural response patterns in auditory cortex. The existence of computational models within the same architectural class for two apparently very different sensory representations raises several intriguing questions: (1) To what extent do visual models predict auditory response patterns, and to what extent do auditory models predict visual response patterns? (2) In what ways are the visual and auditory models similar, and in what ways do they diverge? (3) What do the answers to these questions tell us about the relationships between the natural statistics of these two sensory modalities and the underlying generative processes behind them? I’ll describe several quantitative and qualitative modeling results, involving electrophysiology data from macaques and fMRI data from humans, that shed some initial light on these questions.

Deep Neural Networks explain spatio-temporal dynamics of visual scene and object processing

Speaker: Radoslaw Martin Cichy; Department of Psychology and Education, Free University Berlin, Berlin, Germany
Authors: Aditya Khosla, CSAIL, MIT, MA, USA; Dimitrios Pantazis, McGovern Institute of Brain and Cognitive Sciences, MIT, MA, USA; Antonio Torralba, CSAIL, MIT, MA, USA; Aude Oliva, CSAIL, MIT, MA, USA

Understanding visual cognition means knowing what is happening where and when in the brain when we see. To address these questions in a common framework, we combined deep neural networks (DNNs) with fMRI and MEG using representational similarity analysis. We will present results from two studies. The first study investigated the spatio-temporal neural dynamics during visual object recognition. Combining DNNs with fMRI, we showed that DNNs predicted a spatial hierarchy of visual representations in both the ventral and the dorsal visual stream. Combining DNNs with MEG, we showed that DNNs predicted the temporal hierarchy with which visual representations emerged. This indicates that DNNs 1) predict the hierarchy of visual brain dynamics in space and time, and 2) provide novel evidence for object representations in parietal cortex. The second study investigated how abstract visual properties, such as scene size, emerge in the human brain over time. First, we identified an electrophysiological marker of scene size processing using MEG. Then, to explain how scene size representations might emerge in the brain, we trained a DNN on scene categorization. Representations of scene size emerged naturally in the DNN without it ever being trained to do so, and the DNN accounted for scene size representations in the human brain. This 1) indicates that DNNs are a promising model for the emergence of abstract visual property representations in the human brain, and 2) suggests that the cortical architecture of human visual cortex reflects constraints imposed by visual tasks.

Generic decoding of seen and imagined objects using features of deep neural networks

Speaker: Tomoyasu Horikawa; Computational Neuroscience Laboratories, ATR, Kyoto, Japan
Authors: Yukiyasu Kamitani; Graduate School of Informatics, Kyoto University, Kyoto, Japan

Object recognition is a key function in both human and machine vision. Recent studies support that a deep neural network (DNN) can be a good proxy for the hierarchically structured feed-forward visual system for object recognition. While brain decoding enabled the prediction of mental contents represented in our brain, the prediction is limited to training examples. Here, we present a decoding approach for arbitrary objects seen or imagined by subjects by employing DNNs and a large image database. We assume that an object category is represented by a set of features rendered invariant through hierarchical processing, and show that visual features can be predicted from fMRI patterns and that greater accuracy is achieved for low/high-level features with lower/higher-level visual areas, respectively. Furthermore, visual feature vectors predicted by stimulus-trained decoders can be used to identify seen and imagined objects (extending beyond decoder training) from a set of computed features for numerous objects. Successful object identification for imagery-induced brain activity suggests that feature-level representations elicited in visual perception may also be used for top-down visual imagery. Our results demonstrate a tight link between the cortical hierarchy and the levels of DNNs and its utility for brain-based information retrieval. Because our approach enabled us to predict arbitrary object categories seen or imagined by subjects without pre-specifying target categories, we may be able to apply our method to decode the contents of dreaming. These results contribute to a better understanding of the neural representations of the hierarchical visual system during perception and mental imagery.
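The decode-then-identify logic described here can be sketched in a few lines: train a regression from voxel patterns to DNN feature vectors, then label a test trial with the candidate object category whose precomputed feature vector correlates best with the predicted vector. The Python below is a minimal illustration under assumed array names and an assumed ridge-regression decoder, not the authors’ exact pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

def identify(train_fmri, train_feat, test_fmri, candidate_feats):
    """Identify seen/imagined object categories from fMRI by feature decoding.
    train_fmri:      (n_train, n_voxels) responses to training images (hypothetical)
    train_feat:      (n_train, n_units) DNN features of those images (hypothetical)
    test_fmri:       (n_test, n_voxels) responses to test (seen or imagined) objects
    candidate_feats: (n_categories, n_units) average DNN features per candidate category"""
    decoder = Ridge(alpha=1.0).fit(train_fmri, train_feat)   # voxel patterns -> feature space
    pred = decoder.predict(test_fmri)                        # predicted feature vectors
    # Pick, per trial, the candidate whose feature vector correlates most with the prediction
    # (rows are z-scored, so the dot product is proportional to Pearson correlation).
    pz = (pred - pred.mean(1, keepdims=True)) / pred.std(1, keepdims=True)
    cz = (candidate_feats - candidate_feats.mean(1, keepdims=True)) / candidate_feats.std(1, keepdims=True)
    return (pz @ cz.T).argmax(axis=1)
```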

Mapping human visual representations by deep neural networks

Speaker: Kandan Ramakrishnan; Intelligent Sensory Information Systems, UvA, Netherlands
Authors: H. Steven Scholte, Department of Psychology, Brain and Cognition, UvA, Netherlands; Arnold Smeulders, Intelligent Sensory Information Systems, UvA, Netherlands; Sennay Ghebreab, Intelligent Sensory Information Systems, UvA, Netherlands

A number of recent studies have shown that deep neural networks (DNNs) map onto the human visual hierarchy. However, based on a large number of subjects and accounting for the correlations between DNN layers, we show that there is no one-to-one mapping of DNN layers to the human visual system. This suggests that the depth of a DNN, which is also critical to its impressive performance in object recognition, has to be investigated for its role in explaining brain responses. On the basis of EEG data collected in response to a large set of natural images, we analyzed different DNN architectures – a 7-layer, a 16-layer and a 22-layer network – using a Weibull distribution fit to the representations at each layer. We find that the DNN architectures reveal the temporal dynamics of object recognition, with early layers driving responses earlier in time and higher layers driving responses later in time. Surprisingly, the layers from the different architectures explain brain responses to a similar degree. However, by combining the representations of the DNN layers we explain more activity in higher brain areas. This suggests that the higher areas in the brain are composed of multiple non-linearities that are not captured by the individual DNN layers. Overall, while DNNs form a highly promising model of the human visual hierarchy, the representations in the human brain go beyond a simple one-to-one mapping of DNN layers to the human visual hierarchy.
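One plausible reading of the layer summary mentioned above is that the distribution of activation magnitudes in each DNN layer is reduced to the two parameters of a Weibull fit, which can then be related to the EEG responses over time. The snippet below sketches that reduction; the variable names and the fitting choice are illustrative assumptions, not the authors’ exact procedure.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_summary(layer_activations):
    """Summarize one layer's response to one image by a two-parameter Weibull fit.
    layer_activations: 1-D array of unit activations for a single image (hypothetical input).
    Returns (shape, scale) of the Weibull fitted to the positive activation magnitudes."""
    mags = np.abs(np.ravel(layer_activations))
    mags = mags[mags > 0]                      # Weibull support is x > 0
    shape, _, scale = weibull_min.fit(mags, floc=0)
    return shape, scale

# Toy example: gamma-distributed values stand in for real layer activations.
rng = np.random.default_rng(0)
print(weibull_summary(rng.gamma(shape=2.0, scale=1.0, size=4096)))
```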


What can we learn from #TheDress – in search of an explanation

Time/Room: Friday, May 13, 2016, 2:30 – 4:30 pm, Pavilion
Organizer(s): Annette Werner; Institute for Ophthalmic Research, Tübingen University, Germany
Presenters: Annette Werner, Anya Hurlbert, Christoph Witzel, Keiji Uchikawa, Bevil Conway, Lara Schlaffke


Symposium Description

Few topics in colour research have generated as much interest in the science community and the public alike as the recent #TheDress. The phenomenon refers to the observation that observers cannot agree on colour names for a dress seen in a particular photograph, i.e. colour judgements fall into at least two categories, namely blue&black and white&gold. Although individual differences in colour perception are well known, this phenomenon is unprecedented, since it uncovers a surprising ambiguity in colour vision – surprising because our visual brain was thought to reconstruct surface colour so successfully that it is experienced by the naive observer as an inherent property of objects. Understanding the origin of the perceptual dichotomy of #TheDress is therefore important not only in the context of the phenomenon itself but also for our comprehension of the neural computations of colour in general. Since its discovery, a number of hypotheses have been put forward to explain the phenomenon. These include individual differences in peripheral or sensory properties, such as variations in the entoptic filters of the eye or in the spectral sensitivities of the chromatic pathways; ‘high-end’ explanations concern differences at cognitive stages, e.g. regarding the interpretation of the light field in a scene, or the use of priors for estimating the illuminant or the surface colour. The ambiguity in the case of #TheDress may arise because of the peculiar distribution of surface colours in the photo and the lack of further information in the background. The symposium will gather the current experimental evidence and provide a sound basis for the discussion and evaluation of existing and novel hypotheses. The topic will be introduced by the organizer and concluded by a general discussion of the experimental findings of all presentations. Because of the widespread interest in the topic of #TheDress and its general importance for colour vision, we expect a large VSS audience, including students, postdocs, and senior scientists from all fields of vision research.

Presentations

The #Dress phenomenon – an empirical investigation into the role of the background

Speaker: Annette Werner; Institute for Ophthalmic Research, Tübingen University, Germany
Authors: Alisa Schmidt, Institute for Ophthalmic Research, Tübingen University, Germany

The #TheDress phenomenon refers to a dichotomy in colour perception which is specific to the photo of a blue&black dress, namely that most observers judge its colours as either blue/black or white/gold. Hypotheses explaining the phenomenon include individual variations in information processing at sensory as well as cognitive stages. In particular, it has been proposed that the lack of, or ambiguity in, background information leads observers to different conclusions about the illuminant and the light field. We will present results of matching experiments involving presentations of the real blue/black dress, mounted on differently coloured backgrounds and under the illuminations of two slide projectors, thereby mimicking the ambiguity of the photo. The results identify the use of information from the background as a source of the observed individual differences. The results are discussed in the context of the acquisition, the content and the use of ‘scene knowledge’.

Is that really #thedress? Individual variations in colour constancy for real illuminations and objects

Speaker: Anya Hurlbert; Institute of Neuroscience, University of Newcastle upon Tyne, UK
Authors: Stacey Aston and Bradley Pearce, Institute of Neuroscience, University of Newcastle upon Tyne, UK

One popular explanation for the individual variation in reported colours of #thedress is individual variation in the underlying colour constancy mechanisms, which causes differences in the illumination estimated and subsequently discounted. Those who see the dress as ‘white/gold’ are discounting a ‘blueish’ illumination, while those who see it as ‘blue/black’ are discounting a ‘yellowish’ illumination. These underlying differences are brought into relief by the ambiguity of the original photograph. If this explanation holds, then similarly striking individual differences in colour constancy might also be visible in colour matching and naming tasks using real objects under real illuminations, and the conditions under which they are elicited may help to explain the particular power of #thedress. I will discuss results of colour constancy measurements using the real dress, which is almost universally reported to be ‘blue/black’ when illuminated by neutral, broad-band light, yet elicits variability in colour naming similar to the original photograph across observers within certain illumination conditions, most markedly for ambiguous and/or atypical illuminations. Colour constancy by both naming and matching is in fact relatively poor for the real dress and other unfamiliar items of clothing, but better for ‘blueish’ illuminations than for other chromatic illuminations or ambiguous multiple-source illuminations. Overall, individual variations in colour constancy are significant, and depend on age and other individual factors.

Variation of subjective white-points along the daylight axis and the colour of the dress

Speaker: Christoph Witzel; Laboratoire Psychologie de la Perception, University Paris Descartes, France
Authors: Sophie Wuerger, University of Liverpool, UK; Anya Hurlbert, Institute of Neuroscience, University of Newcastle upon Tyne, UK

We review the evidence, from different data sets collected under different viewing conditions, illumination sources, and measurement protocols, for intra- and interobserver variability in “generic subjective white-point” settings along the daylight locus. By “generic subjective white-point” we mean the subjective white-point independent of the specific context. We specifically examine the evidence across all datasets for a “blue” bias in subjective white-points (i.e. increased variability or reduced sensitivity in the bluish direction). We compare the extent of daylight-locus variability generally, and variability in the “bluish” direction specifically, of subjective white points across these data sets (for different luminance levels and light source types). The variability in subjective white-point may correspond to subjective “priors” on illumination chromaticity. In turn, individual differences in assumptions about the specific illumination chromaticity on “the dress” (in the recent internet phenomenon) are widely thought to explain the individual differences in reported dress colours. We therefore compare the variability in generic white-point settings collated across these datasets with the variability in white-point settings made in the specific context of the dress (Witzel and O’Regan, ECVP 2015). Our analysis suggests that (1) there is an overall “blue” bias in generic subjective white-point settings and (2) the variability in generic subjective white-point settings is insufficient to explain the variability in reported dress colours. Instead, the perceived colours of the dress depend on assumptions about the illumination that are specific to that particular photo of the dress.

Prediction for individual differences in appearance of the “dress” by the optimal color hypothesis

Speaker: Keiji Uchikawa; Department of Information Processing, Tokyo Institute of Technology, Japan
Authors: Takuma Morimoto, Tomohisa Matsumoto; Department of Information Processing, Tokyo Institute of Technology, Japan

When the luminances of pixels in the blue-black/white-gold “dress” image were plotted on the MacLeod-Boynton chromaticity diagram, they appeared to form two clusters, corresponding to the white/blue and the gold/black parts. The approach we took to solve the dress problem was to apply our optimal color hypothesis to estimate the illuminant in the dress image. In the optimal color hypothesis, the visual system picks the optimal color distribution that best fits the scene luminance distribution; the peak of the best-fit optimal color distribution corresponds to the illuminant chromaticity. We tried to find the best-fit optimal color distribution for the dress color distribution. When the illuminant level was assumed to be low, the best-fit color temperature was high (20000K); under this dark-blue illuminant the dress colors should look white-gold. When the illuminant level was assumed to be high, a lower-temperature optimal color distribution (5000K) fitted best; under this bright-white illuminant the dress colors should appear blue-black. Thus, for the dress image the best-fit optimal color distribution changed depending on illuminant intensity. These two stable illuminant estimates may cause the individual differences in the appearance of the dress: if you choose a bright (or dark) illuminant, the dress appears blue-black (or white-gold). When the chromaticities of the dress were rotated by 180 degrees in the chromaticity diagram, it appeared blue-gold without individual differences; in this case the optimal color hypothesis predicted the illuminant with almost no ambiguity. We tested individual differences using simple patterns in experiments, and the results supported our prediction.

Mechanisms of color perception and cognition covered by #thedress

Speaker: Bevil Conway; Department of Brain and Cognitive Sciences, MIT, Cambridge MA, USA
Authors: Rosa Lafer-Sousa, Katherine Hermann

Color is notoriously ambiguous—many color illusions exist—but until now it has been thought that all people with normal color vision experience color illusions the same way. How does the visual system resolve color ambiguity? Here, we present work that addresses this question by quantifying people’s perception of a particularly ambiguous image, ‘the dress’ photograph. The colors of the individual pixels in the photograph when viewed in isolation are light-blue or brown, but popular accounts suggest the dress appears either white/gold or blue/black. We tested more than 1400 people, both on-line and under controlled laboratory conditions. Subjects first completed the sentence: “this is a ___ and ___ dress”. Then they performed a color-matching experiment that did not depend on language. Surprisingly, the results uncovered three groups of subjects: white/gold observers, blue/black observers and blue/brown observers. Our findings show that the brain resolves ambiguity in ‘the dress’ into one of three stable states; a minority of people (~11%) switched which colors they saw. It is clear that what we see depends on both retinal stimulation and internal knowledge about the world. Cases of multi-stability such as ‘the dress’ provide a rare opportunity to investigate this interplay. In particular, we go on to demonstrate that ‘the dress’ photograph can be used as a tool to discover that skin reflectance is a particularly important implicit cue used by the brain to estimate the color of the light source and thereby resolve color ambiguity, shedding light on the role of high-level cues in color perception.

The Brain’s Dress Code: How The Dress allows us to decode the neuronal pathway of an optical illusion

Speaker: Lara Schlaffke; Department of Neurology, BG University Hospital Bergmannsheil, Bochum, Germany
Authors: Anne Golisch, Lauren M. Haag, Melanie Lenz, Stefanie Heba, Silke Lissek, Tobias Schmidt-Wilcke, Ulf T. Eysel, Martin Tegenthoff

Optical illusions have broadened our understanding of the brain’s role in visual perception1–3. A modern-day optical illusion emerged from a posted photo of a striped dress, which some perceived as white and gold and others as blue and black. Theories of the differences have been proposed, including, e.g., colour constancy, contextual integration, and the principle of ambiguous forms4; however, no consensus has yet been reached. The fact that one group sees a white/gold dress, instead of the actual blue/black dress, provides a control and therefore a unique opportunity in vision research, where two groups perceive the same object differently. Using functional magnetic resonance imaging (fMRI) we can identify human brain regions that are involved in this optical illusion of colour perception and investigate the neural correlates that underlie the observed differences. Furthermore, open questions in visual neuroscience concerning the computation of complex visual scenes can be addressed. Here we show, using fMRI, that those who perceive The Dress as white/gold (n=14) have higher activation in response to The Dress in brain regions critically involved in visual processing and conflict management (V2, V4, as well as frontal and parietal brain areas), compared to those who perceive The Dress as blue/black (n=14). These results are consistent with the theory of top-down modulation5 and extend the Retinex theory6 to include differing strategies the brain uses to form a coherent representation of the world around us. This provides a fundamental building block for studying interindividual differences in visual processing.


The parietal cortex in vision, cognition, and action

Time/Room: Friday, May 13, 2016, 5:00 – 7:00 pm, Pavilion
Organizer(s): Yaoda Xu and David Freedman; Harvard University and University of Chicago
Presenters: Sabine Kastner, Yaoda Xu, Jacqueline Gottlieb, David Freedman, Peter Janssen, Melvyn Goodale


Symposium Description

The primate parietal cortex has been associated with a diverse set of operations. Early evidence highlighted the role of this brain region in spatial, attentional, and action-related processing. More recent evidence, however, suggests a role for parietal cortex in non-spatial and cognitive functions such as object representation, categorization, short-term memory, number processing and decision making. How should we understand its function, given the wide array of sensory, cognitive and motor signals found to be encoded in parietal areas? Are there functionally dissociable regions within the primate parietal cortex, each participating in distinct functions? Or are the same parietal regions involved in multiple functions? Is it possible to form a unified account of parietal cortex’s role in perception, action and cognition? In this symposium, by bringing together researchers from monkey neurophysiology and human brain imaging, we will first ask the speakers to present our current understanding regarding the role of parietal cortex in visual spatial, non-spatial and cognitive functions. We will then ask the speakers whether the framework they have developed to understand parietal involvement in a particular task setting can help us understand its role in other task contexts, and whether there are fundamental features of parietal cortex that enable it to participate in such a diverse set of tasks and functions. There will be a total of six speakers. Sabine Kastner will address spatial mapping, attention priority signals and object representations in human parietal cortex. Yaoda Xu will describe human parietal cortex’s involvement in visual short-term memory and object representation and their correspondence with behavior. Jacqueline Gottlieb will describe attention and decision related signals in monkey parietal cortex. David Freedman will examine monkey parietal cortex’s involvement in visual categorization, category learning, and working memory and its interaction with other cortical areas. Peter Janssen will detail the functional organization of the monkey intraparietal sulcus in relation to grasping and 3D object representation. Melvyn Goodale will discuss the role of the parietal cortex in the control of action.

Presentations

Comparative studies of posterior parietal cortex in human and non-human primates

Speaker: Sabine Kastner; Department of Psychology and The Princeton Neuroscience Institute, Princeton University

The primate parietal cortex serves many functions, ranging from integrating sensory signals and deriving motor plans to playing a critical role in cognitive functions related to object categorization, attentional selection, working memory or decision making. This brain region undergoes significant changes during evolution and can therefore serve as a model for a better understanding of the evolution of cognition. I will present comparative studies obtained in human and non-human primates using basically identical methods and tasks related to topographic and functional organization, neural representation of object information and attention-related signals. Topographic and functional mapping studies identified not only the parietal regions that primate species have in common, but also revealed several human-specific areas along the intraparietal sulcus. FMRI studies on parietal object representations show that in humans, they resemble those typically found in ventral visual cortex and appear to be more complex than those observed in non-human primates suggesting advanced functionality possibly related to the evolving human-specific tool network. Finally, electrophysiological signatures of parietal attention signals in space-based attention tasks are similar in many respects across primate species providing evidence for preserved functionality in this particular cognitive domain. Together, our comparative studies contribute to a more profound understanding of the evolution of cognitive domains related to object perception and attention in primates.

Decoding Visual Representations in the Human Parietal Cortex

Speaker: Yaoda Xu; Psychology Department, Harvard University

Although visual processing has been mainly associated with the primate occipital/temporal cortices, the processing of sophisticated visual information in the primate parietal cortex has also been reported by a number of studies. In this talk, I will examine the range of visual stimuli that can be represented in the human parietal cortex and the nature of these representations in terms of their distractor resistance, task dependency and behavioral relevance. I will then directly compare object representation similarity between occipital/temporal and parietal cortices. Together, these results argue against a “content-poor” view of parietal cortex’s role in attention. Instead, they suggest that parietal cortex is “content-rich” and capable of directly participating in goal-driven visual information representation in the brain. This view has the potential to help us understand the role of parietal cortex in other tasks such as decision-making and action, both of which demand the online processing of visual information. Perhaps one way to understand the function of parietal cortex is to view it as a global workspace where sensory information is retained, integrated, and evaluated to guide the execution of appropriate actions.

Multi-dimensional parietal signals for coordinating attention and decision making

Speaker: Jacqueline Gottlieb; Department of Neuroscience, Kavli Institute for Brain Science, Columbia University

In humans and non-human primates, the parietal lobe plays a key role in spatial attention – the ability to extract information from regions of space. This role is thought to be mediated by “priority” maps that highlight attention-worthy locations, and provide top-down feedback for motor orienting and attention allocation. Traditionally, priority signals have been characterized as being purely spatial – i.e., encoding the desired locus of gaze or attention regardless of the context in which the brain generates that selection. Here I argue, however, based on non-spatial modulations found in the monkey lateral intraparietal area, that non-spatial responses are critical for allowing the brain to coordinate attention with action – i.e., to estimate the significance and relative utility of competing sensory cues in the immediate task context. The results prompt an integrative view whereby attention is not a disembodied entity that acts on sensory or motor representations, but an organically emerging process that depends on dynamic interactions within sensorimotor loops.

Categorical Decision Making and Category Learning in Parietal and Prefrontal Cortices

Speaker: David Freedman; Department of Neurobiology and Grossman Institute for Neuroscience, Quantitative Biology, and Human Behavior, The University of Chicago

We have a remarkable ability to recognize the behavioral significance, or category membership of incoming sensory stimuli. In the visual system, much is known about how simple visual features (such as color, orientation and direction of motion) are processed in early stages of the visual system. However, much less is known about how the brain learns and recognizes categorical information that gives meaning to incoming stimuli. This talk will discuss neurophysiological and behavioral experiments aimed at understanding the mechanisms underlying visual categorization and decision making, with a focus on the impact of category learning on underlying neuronal representations in the posterior parietal cortex (PPC) and prefrontal cortex (PFC). We recorded from PPC both before and after training on a visual categorization task. This revealed that categorization training influenced both visual and cognitive encoding in PPC, with a marked enhancement of memory-related delay-period encoding during the categorization task which was not observed during a motion discrimination task prior to categorization training. In contrast, the PFC exhibited strong delay-period encoding during both discrimination and categorization tasks. This reveals a dissociation between PFC’s and PPC’s roles in decision making and short term memory, with generalized engagement of PFC across a wider range of tasks, in contrast with more task-specific and training dependent mnemonic encoding in PPC.

The functional organization of the intraparietal sulcus in the macaque monkey

Speaker: Peter Janssen; Laboratory for Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven

The lateral bank of the anterior intraparietal sulcus (IPS) is critical for object grasping. Functional magnetic resonance imaging (fMRI) (Durand et al., 2007) and single-cell recording studies (Srivastava, Orban, De Maziere, & Janssen, 2009) in macaque monkeys have demonstrated that neurons in the anterior intraparietal area (AIP) are selective for disparity-defined three-dimensional (3D) object shape. Importantly, the use of the same stimuli and tasks in macaque monkeys and humans has enabled us to infer possible homologies between the two species. I will review more recent studies combining fMRI, single-cell recordings, electrical microstimulation and reversible inactivation that have shed light on the functional organization of the IPS. Using an integrated approach (Premereur, Van Dromme, Romero, Vanduffel, & Janssen, 2015), we could identify differences in the effective connectivity between nearby patches of neurons with very similar response properties, resolving a long-standing controversy between anatomical and physiological studies with respect to the spatial extent of neighboring areas AIP and LIP. In addition, the effective connectivity of the different IPS sectors has clarified the functional organization of the anterior IPS. Finally, reversible inactivation during fMRI can demonstrate how visual information flows within the widespread functional network involved in 3D object processing. These results are not only critical to understand the role of the macaque parietal cortex, but will also contribute to a better understanding of the parietal cortex in humans.

The role of the posterior parietal cortex in the control of action

Speaker: Melvyn Goodale; The Brain and Mind Institute, The University of Western Ontario

A long history of neuropsychological research has shown that the visual control of grasping and other skilled movements depends on the integrity of visual projections to the dorsal visual stream in the posterior parietal cortex. Patients with lesions to the dorsal stream are unable to direct their hands towards or grasp visual targets in the contralesional visual field, despite being able to describe the size, orientation, and location of those targets. Other patients with lesions to the ventral stream are able to grasp objects accurately and efficiently despite being unable to report the very object features controlling their actions. More recent imaging studies of both neurological patients and healthy controls have confirmed the role of the dorsal stream in transforming visual information into the coordinates required for action. In this presentation, I will discuss research from our lab showing that visual information about the metrical properties of goal objects may reach the dorsal stream via pathways that bypass the geniculostriate pathway. I will go on to show that manual interactions with some classes of objects, such as tools, require that visual information about those objects be processed by circuits in both the ventral and the dorsal stream. Finally, I will speculate that some of the other higher-order functions of the parietal lobe, such as its evident role in numerical processing and working memory, may have evolved from the need to plan actions to multiple goals.


Boundaries in Spatial Navigation and Visual Scene Perception

Time/Room: Friday, May 13, 2016, 12:00 – 2:00 pm, Pavilion
Organizer(s): Soojin Park, Johns Hopkins University and Sang Ah Lee, University of Trento
Presenters: Sang Ah Lee, Joshua B Julian, Nathaniel J. Killian, Tom Hartley, Soojin Park, Katrina Ferrara


Symposium Description

The ability to navigate the world using vision is intrinsically tied to the ability to analyze spatial relationships within a scene. For the past few decades, navigation researchers have shown that humans and nonhuman animals alike compute locations by using a spontaneously encoded geometry of the 3D environmental boundary layouts. This finding has been supported by neural evidence showing boundary-specific inputs to hippocampal place-mapping. More recently, researchers in visual scene perception have shown that boundaries play an important role not only in defining geometry for spatial navigation, but also in visual scene perception. How are boundary representations in scene perception related to those in navigation? What are the defining features of boundaries, and what are their neural correlates? The aim of this symposium is to bridge research from various subfields of cognitive science to discuss the specific role of boundaries in the processing of spatial information and to converge on a coherent theoretical framework for studying visual representations of boundaries. To achieve this, we have brought together an interdisciplinary group of speakers to present studies of boundary representations in a broad range of subject populations, from rodents, to primates, to individuals with genetic disorders, using various experimental methods (developmental, behavioral, fMRI, TMS, single-cell and population coding). The theoretical flow of the symposium will start with behavioral studies showing the specificity and primacy of boundaries in spatial navigation and memory in both humans and a wide range of nonhuman vertebrates. Then, we will ask whether neural representations of boundary geometry can be derived from visual input, as opposed to active navigation, using primate saccadic eye movements and human scene perception. Lastly, we will present evidence of spatial impairment marked by a dysfunction of boundary-processing mechanisms in Williams Syndrome. We believe that this symposium will be of great interest to VSS attendees for the following reasons: First, these convergent findings from independent research approaches to spatial representations and their neural correlates will make a powerful impact on theories of spatial information processing, from visual perception to hippocampal spatial mapping. Second, a better understanding of boundary geometry can broadly inform any research that involves visuo-spatial representations, such as studies on spatial perspective and saccadic eye movements. Finally, the methodological breadth of this symposium, and its aim to integrate these approaches into a coherent picture, will provide a new perspective on the power of multidisciplinary research in visual and cognitive sciences.

Presentations

Boundaries in space: A comparative approach

Speaker: Sang Ah Lee; Center for Mind/Brain Sciences, University of Trento

Spatial navigation provides a unique window into the evolutionary and developmental origins of complex behaviors and memory, due to its richness in representation and computation, its striking similarities between distantly related species, its neural specificity, and its transformation across human development. Environmental boundaries have been shown to play a crucial role in both neural and behavioral studies of spatial representation. In this talk, I will discuss evidence on boundary coding on three different levels: First, I will share my findings showing the primacy and specificity of visual representations of 3D environmental “boundaries” in early spatial navigation in children. Second, I will argue that the cognitive mechanisms underlying boundary representations are shared and widespread across the phylogenetic tree. Finally, I will bring together insights gathered from behavioral findings to investigate the neural underpinnings of boundary coding. From the firing of neurons in a navigating rat’s brain, to a child’s developing understanding of abstract space, I will argue that boundary representation is a fundamental, evolutionarily ancient ability that serves as a basis for spatial cognition and behavior.

Mechanisms for encoding navigational boundaries in the mammalian brain

Speaker: Joshua B Julian; Department of Psychology, University of Pennsylvania
Authors: Alex T Keinath, Department of Psychology, University of Pennsylvania; Jack Ryan, Department of Psychology, University of Pennsylvania; Roy H Hamilton, Department of Neurology, University of Pennsylvania; Isabel A Muzzio, Department of Biology, University of Texas: San Antonio; Russell A Epstein, Department of Psychology, University of Pennsylvania

Thirty years of research suggests that environmental boundaries exert powerful control over navigational behavior, often to the exclusion of other navigationally-relevant cues, such as objects or visual surface textures. Here we present findings from experiments in mice and humans demonstrating the existence of specialized mechanisms for processing boundaries during navigation. In the first study, we examined the navigational behavior of disoriented mice trained to locate rewards in two chambers with geometrically identical boundaries, distinguishable based on the visual textures along one wall. We observed that although visual textures were used to identify the chambers, those very same cues were not used to disambiguate facing directions within a chamber. Rather, recovery of facing directions relied exclusively on boundary geometry. These results provide evidence for dissociable processes for representing boundaries and other visual cues. In a second line of work, we tested whether the human visual system contains neural regions specialized for processing of boundaries. Specifically, we tested the prediction that the Occipital Place Area (OPA) might play a critical role in boundary-based navigation, by extracting boundary information from visual scenes. To do so, we used transcranial magnetic stimulation (TMS) to interrupt processing in the OPA during a navigation task that required participants to learn object locations relative to boundaries and non-boundary cues. We found that TMS of the OPA impaired learning of locations relative to boundaries, but not relative to landmark objects or large-scale visual textures. Taken together, these results provide evidence for dedicated neural circuitry for representing boundary information.

Neuronal representation of visual borders in the primate entorhinal cortex

Speaker: Nathaniel J. Killian; Department of Neurosurgery, Massachusetts General Hospital-Harvard Medical School
Authors: Elizabeth A Buffalo, Department of Physiology and Biophysics, University of Washington

The entorhinal cortex (EC) is critical to the formation of memories for complex visual relationships. Thus we might expect that EC neurons encode visual scenes within a consistent spatial framework to facilitate associations between items and the places where they are encountered. In particular, encoding of visual borders could provide a means to anchor visual scene information in allocentric coordinates. Studies of the rodent EC have revealed neurons that represent location, heading, and borders when an animal is exploring an environment. Because of interspecies differences in vision and exploratory behavior, we reasoned that the primate EC may represent visual space in a manner analogous to the rodent EC, but without requiring physical visits to particular places or items. We recorded activity of EC neurons in non-human primates (Macaca mulatta) that were head-fixed and freely viewing novel photographs presented in a fixed external reference frame. We identified visual border cells, neurons that had increased firing rate when gaze was close to one or more image borders. Border cells were co-localized with neurons that represented visual space in a grid-like manner and with neurons that encoded the angular direction of saccadic eye movements. As a population, primate EC neurons appear to represent gaze location, gaze movement direction, and scene boundaries. These spatial representations were detected in the presence of changing visual content, suggesting that the EC provides a consistent spatial framework for encoding visual experiences.
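
To make the notion of a “visual border cell” concrete, here is a minimal sketch that scores a hypothetical neuron by comparing its firing rate when gaze lies near an image border with its rate elsewhere. The distance threshold, the synthetic gaze and spike data, and the scoring formula are illustrative assumptions, not the criteria used in the study.

```python
import numpy as np

def border_score(spike_counts, gaze_xy, image_size, near_frac=0.1):
    """Crude border-cell index: compare firing when gaze is near vs. far from
    the nearest image border. The near/far threshold is an assumption, not the
    criterion used in the study."""
    w, h = image_size
    x, y = gaze_xy[:, 0], gaze_xy[:, 1]
    dist = np.minimum.reduce([x, w - x, y, h - y])   # distance to nearest border
    near = dist < near_frac * min(w, h)
    rate_near = spike_counts[near].mean()
    rate_far = spike_counts[~near].mean()
    return (rate_near - rate_far) / (rate_near + rate_far)

# Synthetic example: a cell that fires more whenever gaze is within 60 px of a border.
rng = np.random.default_rng(0)
gaze = rng.uniform([0, 0], [800, 600], size=(5000, 2))
d = np.minimum.reduce([gaze[:, 0], 800 - gaze[:, 0], gaze[:, 1], 600 - gaze[:, 1]])
spikes = rng.poisson(1.0 + 4.0 * (d < 60))
print(border_score(spikes, gaze, (800, 600)))        # clearly positive index
```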

Investigating cortical encoding of visual parameters relevant to spatial cognition and environmental geometry in humans.

Speaker: Tom Hartley; Department of Psychology, University of York, UK
Authors: David Watson, Department of Psychology, University of York, UK; Tim Andrews, Department of Psychology, University of York, UK

Studies of firing properties of cells in the rodent hippocampal formation indicate an important role for “boundary cells” in anchoring the allocentric firing fields of place and grid cells. To understand how spatial variables such as the distance to local boundaries might be derived from visual input in humans, we are investigating links between the statistical properties of natural scenes and patterns of neural response in scene selective visual cortex. In our latest work we used a data-driven analysis to select clusters of natural scenes from a large database, solely on the basis of their image properties. Although these visually-defined clusters did not correspond to typical experimenter-defined categories used in earlier work, we found they elicited distinct and reliable patterns of neural response in parahippocampal cortex, and that the relative similarity of the response patterns was better explained in terms of low-level visual properties of the images than by local semantic information. Our results suggest that human parahippocampal cortex encodes visual parameters (including properties relevant to environmental geometry). Our approach opens the way to isolating these parameters and investigating their relationship to spatial variables.
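
As a rough illustration of this kind of data-driven pipeline, the sketch below clusters scenes on crude low-level image features and then compares feature-based and neural pattern similarity. The features (tiled mean luminance), the cluster count, and the representational-similarity comparison are stand-in assumptions; the actual study used richer image statistics and its own analysis choices, and the “neural” data here are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import spearmanr

def lowlevel_features(images, grid=4):
    """Very coarse low-level descriptor: mean luminance in a grid x grid tiling.
    A stand-in for the richer image statistics used in the actual study."""
    feats = []
    for img in images:                                   # img: 2-D grayscale array
        h, w = img.shape
        img = img[:h - h % grid, :w - w % grid]
        tiles = img.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
        feats.append(tiles.ravel())
    return np.array(feats)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - correlation between row patterns."""
    return 1.0 - np.corrcoef(patterns)

# Toy run on random "scenes" (placeholders for a large natural-scene database).
rng = np.random.default_rng(0)
images = rng.random((60, 96, 128))
feats = lowlevel_features(images)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(feats)

# Cluster-average feature patterns; measured neural patterns (e.g., parahippocampal
# responses per cluster) would be plugged in here in the same format.
feature_means = np.array([feats[labels == k].mean(axis=0) for k in range(8)])
neural_means = rng.random((8, 200))                      # placeholder neural data
iu = np.triu_indices(8, 1)
r, _ = spearmanr(rdm(feature_means)[iu], rdm(neural_means)[iu])
```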

Complementary neural representation of scene boundaries

Speaker: Soojin Park; Department of Cognitive Science, Johns Hopkins University
Authors: Katrina Ferrara, Center for Brain Plasticity and Recovery, Georgetown University

Environmental boundaries play a critical role in defining spatial geometry and restrict our movement within an environment. Developmental research with 4-year-olds shows that children are able to reorient themselves by the geometry of a curb that is only 2 cm high, but fail to do so when the curb boundary is replaced by a flat mat on the floor (Lee & Spelke, 2011). In this talk, we will present evidence that such fine-grained sensitivity to a 3D boundary cue is represented in visual scene processing regions of the brain, parahippocampal place area (PPA) and retrosplenial cortex (RSC). First, we will present univariate and multivoxel pattern data from both regions to suggest that they play complementary roles in the representation of boundary cues. The PPA shows disproportionately strong sensitivity to the presence of a slight vertical boundary, demonstrating a neural signature that corresponds to children’s behavioral sensitivity to slight 3D vertical cues (i.e., the curb boundary). RSC did not display this sensitivity. We will argue that this sensitivity does not simply reflect low-level image differences across conditions. Second, we investigate the nature of boundary representation in RSC by parametrically varying the height of boundaries in the vertical dimension. We find that RSC’s response matches a behavioral categorical decision point for the boundary’s functional affordance (e.g., whether the boundary limits the viewer’s potential navigation or not). Collectively, this research serves to highlight boundary structure as a key component of space that is represented in qualitatively different ways across two scene-selective brain regions.
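
One way to picture the parametric analysis described above is to fit a sigmoid to responses as a function of boundary height and read off its transition point. The sketch below does this on made-up numbers; the heights, response values, and the logistic form are purely illustrative assumptions, not the study's data or model.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(h, h0, k, lo, hi):
    """Sigmoid over boundary height h; h0 is the categorical transition point."""
    return lo + (hi - lo) / (1.0 + np.exp(-k * (h - h0)))

# Hypothetical data: mean response (e.g., an RSC signal or "limits navigation"
# judgements) at each parametric boundary height; values are made up.
heights = np.array([0, 5, 10, 20, 40, 80])               # cm, illustrative
response = np.array([0.10, 0.15, 0.30, 0.70, 0.85, 0.90])

params, _ = curve_fit(logistic, heights, response, p0=[20, 0.1, 0.1, 0.9])
print("estimated categorical decision point:", params[0])
```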

Neural and behavioral sensitivity to boundary cues in Williams syndrome

Speaker: Katrina Ferrara; Center for Brain Plasticity and Recovery, Georgetown University
Authors: Barbara Landau, Department of Cognitive Science, Johns Hopkins University; Soojin Park, Department of Cognitive Science, Johns Hopkins University

Boundaries are fundamental features that define a scene and contribute to its geometric shape. Our previous research using fMRI demonstrates a distinct sensitivity to the presence of vertical boundaries in scene representation by the parahippocampal place area (PPA) in healthy adults (Ferrara & Park, 2014). In the present research, we show that this sensitivity to boundaries is impaired by genetic deficit. Studying populations with spatial disorders can provide insight into potential brain/behavior links that may be difficult to detect in healthy adults. We couple behavioral and neuroimaging methods to study individuals with Williams syndrome (WS), a disorder characterized by the deletion of 25 genes and severe impairment in a range of spatial functions. When both humans and animals are disoriented in a rectangular space, they are able to reorient themselves by metric information conveyed by the enclosure’s boundaries (e.g., long wall vs. short wall). Using this reorientation process as a measure, we find that individuals with WS are unable to reorient by a small boundary cue, in stark contrast to the behavior of typically developing (TD) children (Lee & Spelke, 2011). Using fMRI, we find a linked neural pattern in that the WS PPA does not detect the presence of a small boundary within a scene. Taken together, these results demonstrate that atypical patterns of reorientation correspond with less fine-grained representation of boundaries at the neural level in WS. This suggests that sensitivity to the geometry of boundaries is one of the core impairments that underlies the WS reorientation deficit.

< Back to 2016 Symposia

Artifice versus realism as an experimental methodology

Time/Room: Friday, May 13, 2016, 12:00 – 2:00 pm, Talk Room 1-2
Organizer(s): Peter Scarfe, Dept. Psychology, University of Reading, UK
Presenters: Tony Movshon, David Brainard, Roland Fleming, Johannes Burge, Jenny Read, Wendy Adams

< Back to 2016 Symposia

Symposium Description

The symposium will focus on the fine balance that all experimenters have to strike between adopting artifice and realism as an experimental methodology. As scientists, should we use stimuli and tasks that are extremely well characterized, but often bear little resemblance to anything someone would experience outside of an experiment? Or should we use realistic stimuli and tasks, but by doing so sacrifice some level of experimental control? How do we make valid inferences about brain and behavior based upon each approach, and is there a deal to be struck, where we gain the best of both worlds? The symposium will bring together leading researchers who have taken differing approaches to satisfying the needs of realism and artifice. These will include those who have used artificial, or indeed physically impossible, stimuli to probe both 2D and 3D perception; those who have pioneered the use of photo-realistically rendered stimuli in experiments, and developed the tools for other experimenters to do so; and others who combine measurements of natural image statistics from the real world with well-characterized artificial stimuli during experiments. The research presented will cover perception and action in humans, non-human primates, and insects. Techniques will span behavioral experiments as well as neurophysiology. All speakers will discuss the pros and cons of their approach and how they feel the best balance can be struck between ecological validity and experimental control. The symposium will be relevant to anyone attending VSS, whether student, postdoc, or faculty. In terms of benefits gained, we want both to inspire those at the start of their careers and to provoke those with established research programs to consider alternative approaches. The aim is to give the audience an insight into how best to design experiments to make valid inferences about brain and behavior. The scientific merit of this is clear; at whatever stage of our research career, we as scientists should constantly be questioning our beliefs about the validity of our research with respect to the real world. The topic of the symposium is highly original and has never been more timely. With existing technology, it is possible to simulate parametrically-controlled photo-realistic stimuli that cannot be distinguished from real photographs. We can also map the statistics of the world around us in exquisite detail. Combined with the prospect of affordable virtual reality in the near future, this means that running highly realistic experiments has never been easier. Despite this, the vast majority of experiments still use very artificial stimuli and tasks. It is only by defining and debating what we mean by “realism” and “artifice” that we will understand if this is a problem, and whether a fundamental shift is needed for us to truly understand the brain.

Presentations

Using artifice to understand nature

Speaker: Tony Movshon, NYU

Vision evolved to function in the natural world, but that does not mean that we need to use images of that world to study vision. Synthetic stimuli designed to test hypotheses about visual encoding and representation (e.g. lines, edges, gratings, random dot kinematograms and stereograms, textures with controlled statistics) have given us a clear picture of many specific visual mechanisms, and allow principled tests of theories of visual function. What more could a reasonable person want?
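
As a concrete example of such a synthetic stimulus, the sketch below generates frames of a drifting Gabor patch (an oriented sinusoidal grating under a Gaussian window); all parameter values are illustrative.

```python
import numpy as np

def gabor_frame(size=256, sf=4.0, theta=np.pi / 4, phase=0.0, sigma=0.15):
    """One frame of a Gabor patch: an oriented sine grating (sf cycles per image)
    windowed by a Gaussian envelope. All parameter values are illustrative."""
    lin = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(lin, lin)
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
    carrier = np.cos(2 * np.pi * sf * xr + phase)     # sinusoidal carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return carrier * envelope                         # contrast values in [-1, 1]

# Drifting grating: advance the carrier phase frame by frame.
frames = [gabor_frame(phase=2 * np.pi * f / 60) for f in range(60)]
```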

The use of graphics simulations in the study of object color appearance

Speaker: David Brainard; University of Pennsylvania
Additional Authors: Ana Radonjić, Department of Psychology, University of Pennsylvania

A central goal in the study of color appearance is to develop and validate models that predict object color appearance from a physical scene description. Ultimately, we seek models that apply for any stimulus, and particularly for stimuli typical of natural viewing. One approach is to study color appearance using real illuminated objects in quasi-natural arrangements. This approach has the advantage that the measurements are likely to capture what happens for natural viewing. It has the disadvantage that it is challenging to manipulate the stimuli parametrically in theoretically interesting ways. At the other extreme, one can choose simplified stimulus sets (e.g., spots of light on uniform backgrounds, or ‘Mondrian’ configurations). This approach has the advantage that complete characterization of performance within the set may be possible, and one can hope that any principles developed will have general applicability. On the other hand, there is no a priori guarantee that what is learned will indeed be helpful for predicting what happens for real illuminated objects. Here we consider an intermediate choice, the use of physically-accurate graphics simulations. These offer the opportunity for precise stimulus specification and control; particularly interesting is the ability to manipulate explicitly distal (object and illuminant) rather than proximal (image) stimulus properties. They also allow for systematic introduction of complexities typical of natural stimuli, thus making it possible to ask what features of natural viewing affect performance and providing the potential to bridge between the study of simplified stimuli and the study of real illuminated objects.
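
The key advantage described here, manipulating distal rather than proximal variables, can be illustrated with the standard distal-to-proximal mapping: the light reaching the eye from a surface is the product of its reflectance and the illuminant spectrum, and cone excitations are that color signal integrated against the cone fundamentals. The sketch below uses made-up placeholder spectra; real simulations use measured reflectances, illuminants, and standard cone fundamentals.

```python
import numpy as np

wls = np.arange(400, 701, 10)                          # wavelength samples (nm)

def cone_excitations(reflectance, illuminant, fundamentals):
    """Distal -> proximal mapping: color signal = reflectance * illuminant;
    cone excitations = color signal integrated against L, M, S fundamentals.
    Inputs are spectra sampled at the same wavelengths (placeholder data here)."""
    color_signal = reflectance * illuminant            # light reflected toward the eye
    return fundamentals @ color_signal                 # shape (3,): L, M, S

# Placeholder spectra (illustrative only).
reflectance = np.full(wls.shape, 0.4)                  # flat 40% reflector
illuminant = 1.0 + 0.002 * (wls - 400)                 # ramped illuminant spectrum
fundamentals = np.vstack([
    np.exp(-((wls - 565) / 50.0) ** 2),                # crude "L" sensitivity
    np.exp(-((wls - 535) / 50.0) ** 2),                # crude "M" sensitivity
    np.exp(-((wls - 445) / 40.0) ** 2),                # crude "S" sensitivity
])
print(cone_excitations(reflectance, illuminant, fundamentals))
```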

Confessions of a reluctant photorealist

Speaker: Roland Fleming, Dept. of Experimental Psychology, University of Giessen

For some scientific questions, highly reduced stimuli are king. Sine waves. Gabors. Points of light. When paired with rigorous theory, such stimuli provide scalpel-like tools of unparalleled precision for dissecting sensory mechanisms. However, even the most disciplined mind is wont at times to turn to questions of subjective visual appearance. Questions like ‘what makes silk look soft?’, ‘why does honey look runny?‘ or ‘how can I tell wax is translucent?’. In order to study such complex phenomena (fluid flow, subsurface scattering, etc.), there simply is no alternative to using ‘real’ or ‘photorealistic’ stimuli, as these remain the only extant stimuli that elicit the relevant percepts. I will briefly describe a couple of my own experiments using computer simulations of complex physical processes to study the visual appearance of materials and the underlying visual computations. I will discuss both boons and perils of using computer simulations to study perception. On the one hand, the phenomena are horrendously complex and we still lack experimental methods for bridging the gap between discrimination and subjective appearance. On the other hand, simulations provide an unprecedented level of parametric control over complex processes, as well as access to the ground truth state of the scene (shape, motion, ray paths, etc). Finally, I will argue that using and analysing simulations is a necessary step in the development of more focussed, reduced stimuli that will also evoke the requisite phenomenology: one day we may have the equivalent of Gabors for studying complex visual appearance.

Predicting human performance in fundamental visual tasks with natural stimuli

Speaker: Johannes Burge, Department of Psychology, Neuroscience Graduate Group, University of Pennsylvania

Understanding how vision works under natural conditions is a fundamental goal of vision science. Vision research has made enormous progress toward this goal by probing visual function with artificial stimuli. However, evidence is mounting that artificial stimuli may not be fully up to the task. The field is full of computational models—from retina to behavior—that beautifully account for performance with artificial stimuli, but that generalize poorly to arbitrary natural stimuli. On the other hand, research with natural stimuli is often criticized on the grounds that natural signals are too complex and insufficiently controlled for results to be interpretable. I will describe recent efforts to develop methods for using natural stimuli without sacrificing computational and experimental rigor. Specifically, I will discuss how we use natural stimuli, techniques for dimensionality reduction, and ideal observer analysis to tightly predict human estimation and discrimination performance in three tasks related to depth perception: binocular disparity estimation, speed estimation, and motion through depth estimation. Interestingly, the optimal processing rules for processing natural stimuli also predict human performance with classic artificial stimuli. We conclude that properly controlled studies with natural stimuli can complement studies with artificial stimuli, perhaps contributing insights that more traditional approaches cannot.
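
The general logic of combining dimensionality reduction with an ideal-observer decoder can be sketched as follows; note that this is a generic stand-in (PCA features plus Gaussian class-conditional likelihoods), not the task-specific feature-learning method used in this line of work, and all names are hypothetical.

```python
import numpy as np

def fit_feature_decoder(stimuli, labels, n_features=4):
    """Generic sketch of 'dimensionality reduction + ideal observer': project
    stimuli onto a few principal components, then model each latent-variable
    level (e.g., a disparity) with a Gaussian in feature space."""
    mean = stimuli.mean(axis=0)
    X = stimuli - mean
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    W = vt[:n_features]                                # linear feature vectors
    F = X @ W.T
    classes = {}
    for lv in np.unique(labels):
        f = F[labels == lv]
        classes[lv] = (f.mean(axis=0), np.cov(f, rowvar=False))
    return W, classes, mean

def decode(stimulus, W, classes, mean):
    """Return the latent-variable level with the highest Gaussian log-likelihood."""
    f = (stimulus - mean) @ W.T
    def loglik(mu, cov):
        d = f - mu
        return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))
    return max(classes, key=lambda lv: loglik(*classes[lv]))

# Hypothetical usage: W, classes, mu = fit_feature_decoder(patch_matrix, disparities)
#                     estimate = decode(new_patch, W, classes, mu)
```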

Natural behaviour with artificial stimuli: probing praying mantis vision

Speaker: Jenny Read; Newcastle University, Institute of Neuroscience
Additional Authors: Dr Vivek Nityananda, Dr Ghaith Tarawneh, Dr Ronny Rosner, Ms Lisa Jones, Newcastle University, Institute of Neuroscience

My lab is working to uncover the neural circuitry supporting stereoscopic vision in the praying mantis, the only invertebrate known to possess this ability. Mantises catch their prey by striking out with their spiked forelimbs. This strike is released only when prey is perceived to be at the appropriate distance, so provides an extremely convenient way of probing the insects’ depth perception. Other behaviours, such as tracking, saccades and optomotor response, also inform us about mantis vision. Because we are using natural rather than trained behaviours, our stimuli have to be naturalistic enough to elicit these responses. Yet as we begin the study of mantis stereopsis, clear answers to our scientific questions are often best obtained by artificial or indeed impossible stimuli. For example, using artificial “cyclopean” stimuli, where objects are defined purely by disparity, would enable us to be sure that the mantis’ responses are mediated totally by disparity and not by other cues. Using anti-correlated stereograms, which never occur in nature, could help us understand whether mantis stereopsis uses cross-correlation between the two eyes’ images. Accordingly, my lab is navigating a compromise between these extremes. We are seeking stimuli which are naturalistic enough to drive natural behaviour, while artificial enough to provide cleanly interpretable answers to our research questions – although we do sometimes end up with stimuli which are naturalistic enough to present confounds, and artificial enough to lack ecological validity. I will discuss the pros and cons, and aim to convince you we are making progress despite the pitfalls.
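
The diagnostic value of anti-correlated stereograms for a cross-correlation mechanism can be seen in a toy computation: if disparity is read out from the interocular correlation peak, inverting one eye’s contrast leaves the peak at the same location but flips its sign. The 1-D signals and shift range below are illustrative assumptions, not actual stimuli.

```python
import numpy as np

def interocular_correlation(left, right, max_shift=10):
    """Normalized correlation between left- and right-eye signals as a function
    of the shift applied to the right eye (a 1-D stand-in for 2-D image patches)."""
    return {s: np.corrcoef(left, np.roll(right, -s))[0, 1]
            for s in range(-max_shift, max_shift + 1)}

rng = np.random.default_rng(1)
left = rng.standard_normal(200)
right = np.roll(left, 3)                          # right eye's image shifted by 3 samples

corr = interocular_correlation(left, right)       # correlated stereogram
anti = interocular_correlation(left, -right)      # anti-correlated stereogram
print(max(corr, key=corr.get), corr[3])           # peak at shift +3, value near +1
print(anti[3])                                    # same shift, value near -1
```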

Natural scene statistics and estimation of shape and reflectance.

Speaker: Wendy Adams; University of Southampton
Additional Authors: Erich W. Graf, University of Southampton, Southampton, UK; James H. Elder, York University, Canada

A major function of the visual system is to estimate the shape and reflectance of objects and surfaces from the image. Evidence from both human and computer vision suggests that solutions to this problem involve exploiting prior probability distributions over shape, reflectance and illumination. In an optimal system, these priors would reflect the statistics of our world. To allow a better understanding of the statistics of our environment, and how these statistics shape human perception, we have developed the Southampton-York Natural Scenes (SYNS) public dataset. The dataset includes scene samples from a wide variety of indoor and outdoor scene categories. Each scene sample consists of (i) 3D laser range (LiDAR) data over a nearly spherical field of view, co-registered with (ii) spherical high dynamic range imagery, and (iii) a panorama of stereo image pairs. These data are publicly available at https://syns.soton.ac.uk/. I will discuss a number of challenges that we have addressed in the course of this project, including: 1) geographic sampling strategy, 2) scale selection for surface analysis, 3) relating scene measurements to human perception. I will also discuss future work and potential applications.

< Back to 2016 Symposia

2016 Symposia

Artifice versus realism as an experimental methodology

Organizer(s): Peter Scarfe, Department of Psychology, University of Reading, UK
Time/Room: Friday, May 13, 2016, 12:00 – 2:00 pm, Talk Room 1-2

How do we make valid inferences about brain and behavior based on experiments using stimuli and tasks that are extremely well characterized, but bear little resemblance to the real world? Is this even a problem? This symposium will bring together leading researchers who have taken differing approaches to striking a balance between the experimental control of “artifice” and the ecological validity of “realism”. The aim is to provoke debate about how best to study perception and action, and ask whether a fundamental shift is needed for us to truly understand the brain. More…

Boundaries in Spatial Navigation and Visual Scene Perception

Organizer(s): Soojin Park, Johns Hopkins University and Sang Ah Lee, University of Trento
Time/Room: Friday, May 13, 2016, 12:00 – 2:00 pm, Pavilion

Humans and nonhuman animals compute locations in navigation and scene perception by using a spontaneously encoded geometry of the 3D environmental boundary layouts. The aim of this symposium is to bridge research from various subfields to discuss the specific role of boundaries in the processing of spatial information and to converge on a coherent theoretical framework for studying visual representations of boundaries. To achieve this, our interdisciplinary group of speakers will discuss research on a broad range of subject populations, from rodents, to primates, to individuals with genetic disorders, using various experimental methods (developmental, behavioral, fMRI, TMS, single-cell and population coding). More…

What do deep neural networks tell us about biological vision?

Organizer(s): Radoslaw Martin Cichy, Department of Psychology and Education, Free University Berlin, Berlin, Germany
Time/Room: Friday, May 13, 2016, 2:30 – 4:30 pm, Talk Room 1-2

To understand visual cognition we ultimately need an explicit and predictive model of neural processing. In recent years deep neural networks—brain-inspired computer vision models—have emerged as a promising model for visual capacities in the neurosciences. This symposium delivers the first results regarding how DNNs help us to understand visual processing in the human brain and provides a forum for critical discussion of DNNs: what have we gained, what are we missing, and what are the next steps? More…

What can we learn from #TheDress – in search for an explanation

Organizer(s): Annette Werner, Institute for Ophthalmic Research, Tübingen University
Time/Room: Friday, May 13, 2016, 2:30 – 4:30 pm, Pavilion

Few topics in colour research have generated as much interest in the scientific community and the public alike as the recent phenomenon #TheDress. The symposium will gather the current experimental evidence and provide a solid basis for discussion and evaluation of the hypotheses regarding the origin of the phenomenon. Furthermore, #TheDress is a chance for further insight into the nature of human colour perception, in particular with respect to individual differences and cognitive influences, including memory, colour preferences and the interaction between perception and language. More…

ARVO@VSS: Information processing in a simple network: What the humble retina tells the brain.

Organizer(s): Scott Nawy, PhD, University of Nebraska Medical Center and Anthony Norcia, Stanford University
Time/Room: Friday, May 13, 2016, 5:00 – 7:00 pm, Talk Room 1-2

This year’s biennial ARVO at VSS symposium features a selection of recent work on circuit-level analyses of retinal, thalamic and collicular systems that are relevant to understanding of cortical mechanisms of vision. The speakers deploy a range of state-of-the art methods that bring an unprecedented level of precision to dissecting these important visual circuits. More…

The parietal cortex in vision, cognition, and action

Organizer(s): Yaoda Xu, Harvard University and David Freedman, University of Chicago
Time/Room: Friday, May 13, 2016, 5:00 – 7:00 pm, Pavilion

The parietal cortex has been associated with a diverse set of functions, such as visual spatial processing, attention, motor planning, object representation, short-term memory, categorization and decision making. By bringing together researchers from monkey neurophysiology and human brain imaging, this symposium will integrate recent findings to update our current understanding of the role of parietal cortex in vision, cognition and action. By bridging different experimental approaches and diverse perceptual, cognitive, and motor functions, this symposium will also attempt to address whether it is possible to form a unified view of parietal functions. More…
