Does appearance matter?

Time/Room: Friday, May 10, 3:30 – 5:30 pm, Royal 6-8
Organizer: Sarah R. Allred, Rutgers–The State University of New Jersey
Presenters: Benjamin T. Backus, Frank H. Durgin, Michael Rudd, Alan Gilchrist, Qasim Zaidi, Anya Hurlbert


Symposium Description

Vision science originated with questions about how and why things look the way they do. With the advent of physiological tools and the development of rigorous psychophysical methods, however, the language of appearance has been largely abandoned. As scientists, we rarely invoke or report on the qualities of visual appearance and instead report more objective measures such as discrimination thresholds or points of subjective equality. This is not surprising; after all, appearance is experienced subjectively, and the goal of science is objectivity. Thus, phenomenology is sometimes given short shrift in the field as a whole. Here we offer several views, sometimes disparate, grounded in both experimental data and theory, on how vision science is advanced by incorporating phenomenology and appearance. We discuss the nature of scientifically objective methods that capture what we mean by appearance, and the role of subjective descriptions of appearance in vision science. Together, we argue that by relying on phenomenology and the language of appearance, we can provide a parsimonious framework for interpreting many empirical phenomena, including instructional effects in lightness perception, contextual effects on color constancy, systematic biases in egocentric distance perception, and the prediction of 3D shape from orientation flows. We also discuss contemporary interactions between appearance, physiology, and neural models. Broadly, we examine the criteria for identifying behaviors that are best thought of as mediated by reasoning about appearances. This symposium is timely. Although the basic question of appearance has been central to vision science since its inception, new physiological and psychophysical methods are rapidly developing. This symposium is thus practical in the sense that these new methods can be more fully exploited by linking them to phenomenology. The symposium is also of broad interest to those interested in the big-picture questions of vision science.
We expect to draw a wide audience: the speakers represent a range of techniques (physiology, modeling, psychophysics), a diversity of institutional affiliations and tenure, and similarly broad areas of focus (e.g., cue integration, distance perception, lightness perception, basic spatial and color vision, and higher-level color vision).


Legitimate frameworks for studying how things look

Speaker: Benjamin T. Backus, Graduate Center for Vision Research, SUNY College of Optometry

What scientific framework can capture what we might mean by “visual appearance” or “the way things look”? The study of appearance can be operationalized in specific situations, but a general definition is difficult. Some visually guided behaviors, such as changing one’s pupil size, maintaining one’s upright posture, ducking a projectile, or catching an object when it rolls off the kitchen counter, are not mediated by consciously apprehended appearances. These behaviors use vision in a fast, stereotyped, and automatic way. Compare them to assessing which side of a mountain to hike up, or whether a currently stationary object is at risk of rolling off the counter. These behaviors probably are mediated by appearance, in the sense of a general-purpose representation that makes various estimated scene parameters manifest to consciousness. One can reason using appearances, and talk about them with other people. Over the years various strategies have been employed to study or exploit appearance: recording unprompted verbal responses from naïve observers; using novel stimuli that cannot be related to previous experience; or using stimuli that force a dichotomous perceptual decision. We will review these ideas and try to identify additional criteria that might be used. An important realization for this effort is that conscious awareness need not be all-or-none; just as visual sense data are best known at the fovea, appearance is best known at the site of attentional focus.

Why do things seem closer than they are?

Speaker: Frank H. Durgin, Swarthmore College
Authors: Zhi Li, Swarthmore College

Systematic and stable biases in the visual appearance of locomotor space may reflect functional coding strategies for the sake of more precisely guiding motor actions. Perceptual matching tasks and verbal estimates suggest that there is a systematic underestimation of egocentric distance along the ground plane in extended environments. Whereas underestimation has previously been understood as a mere failure of proper verbal calibration, such an interpretation cannot account for perceptual matching results. Moreover, we have observed that the subjective geometry of distance perception on the ground plane is quantitatively consistent with the explicit overestimation of angular gaze declination, which we have measured independently of perceived distance. We suggest that there is a locally consistent expansion of specific angular variables in visual experience that is useful for action, and that this stable expansion may aid action by retaining more precise angular information, despite the information being mis-scaled approximately linearly. Actions are effective in this distorted perceived space by being calibrated to their perceived consequences (but notice that this means that measures of spatial action parameters, such as walked distance, are not directly informative about perceived distance). We distinguish our view from reports of small judgmental biases moderated by semantic, social, and emotional factors on the one hand (which might or might not involve changes in visual appearance), and also from the prevailing implicit assumption that the perceptual variables guiding action must be accurate. The perceptual variables guiding action must be stable in order to support action calibration, and precise in order to support precise action.
We suggest that the systematic biases evident in the visual (and haptic) phenomenology of locomotor space may reflect a functional coding strategy that can render actions coded in the same perceived space more effective than if space were perceived veridically.

How expectations affect color appearance and how that might happen in the brain

Speaker: Michael Rudd, Howard Hughes Medical Institute; University of Washington

The highest luminance anchoring principle (HLAP) asserts that the highest-luminance surface within an illumination field appears white and that the lightnesses of other surfaces are computed relative to the highest luminance. HLAP is a key tenet of the anchoring theories of Gilchrist and Bressan, and of Land’s Retinex color constancy model. The principle is supported by classical psychophysical findings that the appearance of incremental targets is not much affected by changes in surround luminance, while the appearance of decremental targets depends on the target-surround luminance ratio (Wallach, 1948; Heinemann, 1955). However, Arend and Spehar (1993) showed that this interpretation is too simplistic. Lightness matches made with such stimuli are strongly affected by instructions regarding either the perceptual dimension to be matched (lightness versus brightness) or the nature of the illumination when lightness judgments are made. Rudd (2010) demonstrated that instructional effects can even transform contrast effects into assimilation effects. To model these results, I proposed a Retinex-like neural model incorporating mechanisms of edge integration, contrast gain control, and top-down control of edge weights. Here I show how known mechanisms in visual cortex could instantiate the model. Feedback from prefrontal cortex to layer 6 of V1 modulates edge responses in V1 to reorganize the edge integration properties of the V1-V4 circuit. Filling-in processes in V4 compute different lightnesses depending on the V1 gain settings, which are controlled by the observer’s conscious intention to view the stimulus in one way or another. The theory accounts for the instruction-dependent shifts between contrast and assimilation.

How things look

Speaker: Alan Gilchrist, Rutgers – Newark

Recognizing the historical role of materialism in the advancement of modern science, psychology has long sought to get the ghosts out of its theories. Phenomenology has thus been given short shrift, in part because of its distorted form under the early sway of introspectionism. However, phenomenology can no more be avoided in visual perception than the nature of matter can be avoided in physics. Visual experience is exactly what a theory of perception is tasked to explain. If we want to answer Koffka’s question of why things look as they do, a crucial step is the description of exactly how things do look. Of course there are pitfalls. Because we cannot measure subjective experience directly, we rely heavily on matching techniques. But the instructions to subjects must be carefully constructed so as to avoid matches based on the proximal stimulus on one hand, and matches that represent cognitive judgments (instead of the percept) on the other. Asking the subject “What do you think is the size (or shade of gray) of the object?” can exclude a proximal stimulus match but it risks a cognitive judgment. Asking “What does the size (or shade of gray) look like?” can exclude a cognitive judgment but risks a proximal match. Training subjects on the correct nature of the task may represent the best way to exclude proximal stimulus matches while the use of indirect tasks may represent the best way to exclude cognitive judgments. Though there may be no perfect solution to this problem, it cannot be avoided.

Phenomenology and neurons

Speaker: Qasim Zaidi, Graduate Center for Vision Research, SUNY College of Optometry

A frequent pitfall of relying solely on visual appearances is building theories that confuse the products of perception with the processes of perception. Being blatantly reductionist and seeking cell-level explanations helps us conceive of underlying mechanisms and avoid this pitfall. Sometimes the best way to uncover a neural substrate is to find physically distinct stimuli that appear identical, while ignoring absolute appearance. The prime example was Maxwell’s use of color metamers to critically test for trichromacy and estimate the spectral sensitivities of three classes of receptors. Sometimes it is better to link neural substrates to particular variations in appearance. The prime example was Mach’s inference of the spatial gradation of lateral inhibition between neurons from what are now called Mach bands. In both cases, a theory based on neural properties was tested by its perceptual predictions, and both strategies continue to be useful. I will first demonstrate a new method of uncovering the neural locus of color afterimages. The method relies on linking metamers created by opposite adaptations to shifts in the zero-crossings of retinal ganglion cell responses. I will then use variations in appearance to show how 3D shape is inferred from orientation flows, relative distance from spatial-frequency gradients, and material qualities from relative energy in spatial-frequency bands. These results elucidate the advantages of the parallel extraction of orientations and spatial frequencies by striate cortex neurons, and suggest models of extra-striate neural processes. Phenomenology is thus made useful by playing with identities and variations, and by considering theories that go below the surface.

The perceptual quality of colour

Speaker: Anya Hurlbert, Institute of Neuroscience, Newcastle University

Colour has been central to the philosophy of perception, and has been invoked to support the mutually opposing views of subjectivism and realism. Here I demonstrate that by understanding colour as an appearance, we can articulate a sensible middle ground: although colour is constructed by the brain, it corresponds to a real property of objects. I will argue here that (1) colour is a perceptual quality, a reading of the outside world taken under biological and environmental constraints, and a meaningful property in the perceiver’s internal world; (2) the core property of colour constancy makes sense only if colour is subjective; and (3) measuring colour constancy illustrates both the need for and the difficulty of subjective descriptions of appearance in vision science. For example, colour names give parsimonious descriptions of subjective appearance, and the technique of colour naming under changing illumination provides a reliable method for measuring colour constancy which is both objective and subjective at the same time. In measurements of simultaneous chromatic contrast, responses of “more red” or “more green” are also appearance descriptors which can be quantified. Achromatic adjustment methods (“adjust the patch until it appears white”) also map a physical stimulus to the subjective experience of neutrality. I will compare the results of such techniques with our recent measurements of colour constancy using techniques that do not rely on appearance descriptors, in particular, the measurement of discrimination thresholds for global illumination change in real scenes.
