What Deafness Tells Us about the Nature of Vision

Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): Rain Bosworth, Ph.D., Department of Psychology, University of California, San Diego
Presenters: Matthew Dye, Ph.D., Olivier Pascalis, Ph.D., Rain Bosworth, Ph.D., Fang Jiang, Ph.D., Geo Kartheiser

Symposium Description

In the United States, around 3 in 1,000 newborns are born with severe to profound hearing loss. It is widely believed that auditory deprivation leads to compensatory enhancement of vision and touch in these children. Over the last 30 years, a wide range of visual abilities has been studied in deaf populations. Both behavioral and neural adaptations have been reported, but some visual systems appear to exhibit more experience-dependent plasticity than others. A common ecological explanation for this variation is that some visual functions are more essential for successful interaction with the environment when auditory input is absent or attenuated, and these systems therefore undergo more pronounced changes. For example, because deaf people are less able to use auditory cues to orient visual attention, peripheral vision, more than central vision, may play a crucial role in environmental monitoring. Another explanation is that some visual systems, such as the peripheral retina and the magnocellular processing pathway, are biologically less mature during early development, which may leave more room for experience-dependent plasticity. The picture is further complicated by the extensive use of speechreading and sign language within deaf populations, experiences that may themselves induce neuroplasticity. While behavioral and neural differences in deaf vision are now well established, the neural mechanisms that give rise to the behavioral changes remain elusive.

Despite the importance of understanding plasticity and variability within the visual system, there has never been a symposium on this topic at VSS. The aim of this symposium is therefore to bring together a diverse group of scientists who have made important contributions to the topic. They will integrate past and recent findings to illuminate our current understanding of the neuroplasticity of visual systems, and identify research directions that are likely to advance our understanding of the mechanisms underpinning variability and adaptability in visual processing. Matthew Dye will introduce key theoretical perspectives on visual function in deaf and hearing populations, drawing attention to the multisensory nature of perception. Next, Olivier Pascalis will elaborate on an important point, namely that not all visual systems are equally plastic, drawing on his recent findings on face processing and on peripheral versus central vision. One theme that Dye and Pascalis will both address is that while deaf adults often show enhancements, deaf children do not, suggesting that successful behavioral adaptation may require the integration of multiple neural systems in goal-directed ways. Rain Bosworth will then present findings on altered face and motion perception in deaf adults and consider important methodological issues in the study of deaf vision. The last two presenters, Fang Jiang and Geo Kartheiser, will cover the neural underpinnings of deaf vision as revealed by fMRI, EEG, and fNIRS neuroimaging. Together, the presenters will show how sensory, linguistic, and social experiences during an early sensitive period of development have lasting effects on visual perception and visuospatial cognition. We anticipate that this symposium will appeal to a wide range of attendees across disciplines, including developmental psychology, vision science, and neuroscience.

Presentations

Spatial and Temporal Vision in the Absence of Audition

Speaker: Matthew Dye, Ph.D., Rochester Institute of Technology/National Technical Institute for the Deaf (RIT/NTID)

Changes in the visual system due to deafness provide information about how multisensory processes feed back to scaffold the development of unisensory systems. One common perspective in the literature is that visual inputs are highly spatial, whereas auditory inputs, in contrast, are highly temporal. A simple multisensory account of sensory reorganization therefore predicts spatial enhancements and temporal deficits within the visual system of deaf individuals. Here I will summarize our past and ongoing research, which suggests that evidence for this multisensory scaffolding hypothesis is confounded by language deprivation in many samples: most deaf people are born to nonsigning parents, and deaf children do not have full access to the spoken language around them. By studying visual processing in deaf individuals who are exposed early to a perceivable visual language, such as American Sign Language, we (i) gain a better understanding of the interplay between auditory and visual systems during development, and (ii) accumulate evidence for the importance of early social interaction in the development of higher-order visual abilities. Our data suggest that changes in vision over space are ecologically driven and subject to cognitive control, and that early linguistic interaction is important for the development of sustained attention over time.

What is the Impact of Deafness on Face Perception and Peripheral Visual Field Sensitivity?

Speaker: Olivier Pascalis, Ph.D., Laboratoire de Psychologie et NeuroCognition, CNRS, Grenoble, France

It is well established that early profound deafness leads to enhancements in some visual processes, but different findings are reported for peripheral versus central vision. Improvements have mainly been reported for the peripheral visual field, presumably because deaf people must compensate for inaccessible auditory cues when monitoring the periphery; for central visual processing, mixed results have been found in deaf people, including no change, poorer performance, and superior performance. We consider two intriguing (and often overlooked) issues that pertain to deaf vision. First, deaf people, and many hearing people as well, use sign language, which requires steady fixation on the face; signers pay rigorous attention to the face because it provides critical intonational and linguistic information during communication. Second, this fixation pattern means that most of the manual language information falls in the perceiver's lower visual field, as the signer's hands almost always move in front of the torso. I will present a series of studies in which we tried to separate the impacts of deafness and of sign language experience on face processing and on peripheral field sensitivity. To address the role of sign language in the absence of deafness, we report results from hearing signers. Our results suggest that sign language experience, independent of deafness, may also modulate visual cognition.

Psychophysical Assessment of Contrast, Motion, Form, Face, and Shape Perception in Deaf and Hearing People

Speaker: Rain Bosworth, Ph.D., Department of Psychology, University of California, San Diego

Visual processing might be altered in deaf people for two reasons. First, they lack auditory input, compelling them to rely more on their intact visual modality. Second, many deaf people have extensive experience using a visual signed language, such as American Sign Language (ASL), which may alter aspects of visual perception that are important for processing ASL. While some deaf people are exposed to ASL from birth, by virtue of having deaf parents, many others are born to hearing parents who do not sign and thus experience delayed exposure to visual language. In this study, we asked whether deafness and/or sign language experience affects visual perception, testing 40 Deaf signers and 40 Hearing nonsigners on psychophysical measures of motion, form, shape, and face discrimination while controlling for contrast detection, age, visuospatial IQ, and gender makeup. The Deaf signers were separated into two groups: Deaf native signers, exposed to ASL between birth and 2 years of age, and Deaf late-exposed signers, exposed to ASL after the age of 3 years. Enhanced face processing was found in Deaf native signers, who had early visual language exposure, but not in Deaf late-exposed signers. Moreover, Deaf late-exposed signers showed impoverished motion processing relative to both Deaf native signers and Hearing nonsigners. Together, these results provide evidence that sign language exposure, or language deprivation, in the first two years of life has a lasting impact on visual perception in adulthood.
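To make concrete what such psychophysical measures typically involve, here is a minimal, hypothetical sketch of threshold estimation for a motion-coherence discrimination task. The function form, coherence levels, and accuracies below are illustrative assumptions, not the study's actual stimuli or data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example: estimate a motion-coherence discrimination
# threshold by fitting a Weibull psychometric function to simulated
# proportion-correct data from a two-alternative forced-choice task.

def weibull_2afc(c, alpha, beta):
    # alpha: coherence yielding ~82% correct (the threshold)
    # beta: slope of the psychometric function
    return 0.5 + 0.5 * (1.0 - np.exp(-((c / alpha) ** beta)))

coherence = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64])  # assumed levels
p_correct = np.array([0.52, 0.55, 0.66, 0.80, 0.93, 0.99])  # simulated data

(alpha, beta), _ = curve_fit(weibull_2afc, coherence, p_correct, p0=[0.1, 2.0])
print(f"threshold (alpha) = {alpha:.3f}, slope (beta) = {beta:.2f}")
```

Group differences (e.g., Deaf native signers versus Hearing nonsigners) would then be assessed by comparing the fitted thresholds across participants.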

Measuring Visual Motion Processing in Early Deaf Individuals with Frequency Tagging

Speaker: Fang Jiang, Ph.D., Department of Psychology, University of Nevada, Reno, USA

Early deaf individuals show enhanced performance on some visual tasks, including the processing of visual motion. Deaf individuals' auditory and association cortices have been shown to respond to visual motion; however, it is unclear how these responses relate to enhanced motion processing ability. Here I will present data from two recent studies in which we examined deaf and hearing participants' fMRI and EEG responses, frequency-tagged to the presentation of directional motion. Our results suggest the intriguing possibility that deaf participants' increased direction-selective motion responses in the right superior temporal sulcus (STS) could support the behavioral advantage reported in previous studies.
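For readers unfamiliar with the technique, the sketch below illustrates the general logic of frequency tagging, not the authors' actual pipeline: a stimulus modulated at a known rate evokes a neural response at that same frequency, which can be read out from a single bin of the EEG amplitude spectrum. All parameters (tagging frequency, sampling rate, trial length) are hypothetical.

```python
import numpy as np

# Minimal frequency-tagging (SSVEP-style) illustration with simulated
# data. All values below are assumptions for demonstration only.
fs = 500.0        # EEG sampling rate in Hz (assumed)
tag_freq = 2.0    # stimulation/tagging frequency in Hz (assumed)
duration = 20.0   # trial length in seconds (assumed)

t = np.arange(0, duration, 1.0 / fs)

# Simulate one EEG channel: a small response locked to the tag
# frequency, buried in broadband noise.
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.normal(0.0, 2.0, t.size)

# Amplitude spectrum; with an integer number of stimulation cycles,
# the tagged response falls into a single frequency bin.
amplitude = np.abs(np.fft.rfft(eeg)) * 2.0 / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
tag_bin = int(np.argmin(np.abs(freqs - tag_freq)))

# Signal-to-noise ratio: amplitude at the tag frequency relative to
# the mean amplitude of surrounding bins (a common SSVEP measure).
neighbors = np.r_[tag_bin - 12:tag_bin - 2, tag_bin + 3:tag_bin + 13]
snr = amplitude[tag_bin] / amplitude[neighbors].mean()
print(f"{freqs[tag_bin]:.2f} Hz amplitude = {amplitude[tag_bin]:.3f}, SNR = {snr:.1f}")
```

A direction-selective response would be quantified the same way, by tagging direction changes (rather than, say, contrast changes) at a known rate and comparing the tagged amplitude or SNR across groups.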
