VSS@ARVO 2019

Vision After Sight Restoration

Monday, April 29, 1:15 – 2:45 pm at ARVO 2019, Vancouver, Canada
Organizers: Lynne Kiorpes, Ulrike Grunert and David Brainard
Speakers: Holly Bridge, Krystel Huxlin, Sharon Gilad-Gutnick and Geoff Boynton

Visual deprivation during development can have a profound effect on adult visual function, with congenital or early-acquired blindness representing one extreme of deprivation and adult-onset sight loss representing the other. As better treatments for blindness become available, a critical question concerns the nature of vision after the restoration of sight and the level of remaining visual system plasticity. This symposium will highlight recent progress in this area, as well as how vision therapy can best be deployed to optimize the quality of post-restoration vision. This is the biennial VSS@ARVO symposium, featuring speakers from the Vision Sciences Society.

Rhythms of the brain, rhythms of perception

Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 2
Organizer(s): Laura Dugué, Paris Descartes University & Suliann Ben Hamed, Université Claude Bernard Lyon I
Presenters: Suliann Ben Hamed, Niko Busch, Laura Dugué, Ian Fiebelkorn

< Back to 2019 Symposia

Symposium Description

The phenomenological, continuous, unitary stream of our perceptual experience appears to be an illusion. Accumulating evidence suggests that what we perceive of the world, and how we perceive it, rises and falls rhythmically at precise temporal frequencies. Brain oscillations (rhythmic neural signals) naturally appear as key neural substrates for these perceptual rhythms. How these brain oscillations condition local neuronal processes, long-range network interactions, and perceptual performance is a central question in visual neuroscience. In this symposium, we will present an overarching review of this question, combining evidence from monkey neural and human EEG recordings, TMS interference studies, and behavioral analyses. Suliann Ben Hamed will first present monkey electrophysiology evidence for a rhythmic exploration of space by the prefrontal attentional spotlight in the alpha (8-12 Hz) frequency range and will discuss the functional coupling between this rhythmic exploration and long-range theta frequency modulations. Niko Busch will then present electroencephalography (EEG) and psychophysics studies in humans, and argue that alpha oscillations reflect fluctuations of neuronal excitability that periodically modulate subjective perceptual experience. Laura Dugué will present EEG, transcranial magnetic stimulation (TMS), and psychophysical evidence in humans in favor of a functional dissociation between the alpha and theta (3–8 Hz) rhythms, underlying periodic fluctuations in perceptual and attentional performance respectively. Finally, Ian Fiebelkorn will present psychophysics studies in humans and electrophysiology evidence in macaque monkeys, and argue that the fronto-parietal theta rhythm allows for functional flexibility in large-scale networks. The multimodal approach, including human and monkey models and a large range of behavioral and neuroimaging techniques, as well as the timeliness of questions about the temporal dynamics of perceptual experience, should be of interest to cognitive neuroscientists, neurophysiologists and psychologists interested in visual perception and cognition, as well as to the broad audience of VSS.

Presentations

The prefrontal attentional spotlight in time and space

Speaker: Suliann Ben Hamed, Université Claude Bernard Lyon I

Recent accumulating evidence challenges the traditional view of attention as a continuously active spotlight over which we have direct voluntary control, suggesting instead a rhythmic mode of operation. I will present monkey electrophysiological data reconciling these two views. I will apply machine learning methods to reconstruct, at high spatial and temporal resolution, the spatial attentional spotlight from monkey prefrontal neuronal activity. I will first describe behavioral and neuronal evidence for distinct spatial filtering mechanisms, the attentional spotlight serving to filter in task-relevant information while at the same time filtering out task-irrelevant information. I will then provide evidence for rhythmic exploration of space by this prefrontal attentional spotlight in the alpha (7-12 Hz) frequency range. I will discuss this rhythmic exploration of space both from the perspective of sensory encoding and of behavioral trial outcome, when processing either task-relevant or task-irrelevant information. While these oscillations are task-independent, I will describe how their spatial unfolding flexibly adjusts to ongoing behavioral demands. I will conclude by bridging the gap between this alpha-rhythmic exploration by the attentional spotlight and previous reports of a contribution of long-range theta oscillations to attentional exploration, and I will propose a novel integrated account of a dynamic attentional spotlight.
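As a rough illustration of the population-decoding idea described above (not the authors' actual data or pipeline), the sketch below reconstructs a two-dimensional attended location from simulated prefrontal firing rates with a cross-validated linear decoder; the decoder choice, parameter values, and variable names are all illustrative assumptions.

```python
# Illustrative sketch only: a linear read-out of a 2-D attentional spotlight
# position from simulated population activity (not the authors' data or code).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_trials, n_neurons = 400, 60

# Hypothetical attended locations (degrees of visual angle) on each trial.
xy_true = rng.uniform(-10, 10, size=(n_trials, 2))

# Simulated firing rates: each neuron gets a random linear spatial gain plus noise.
gains = rng.normal(0, 1, size=(2, n_neurons))
rates = xy_true @ gains + rng.normal(0, 2, size=(n_trials, n_neurons))

# Cross-validated reconstruction of the spotlight position from the population.
xy_hat = cross_val_predict(Ridge(alpha=1.0), rates, xy_true, cv=5)

err = np.linalg.norm(xy_hat - xy_true, axis=1)
print(f"median reconstruction error: {np.median(err):.2f} deg")
```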

Neural oscillations, excitability and perceptual decisions

Speaker: Niko Busch, WWU Münster

Numerous studies have demonstrated that the power of ongoing alpha oscillations in the EEG is inversely related to neural excitability, as reflected in spike-firing rate, multi-unit activity, or the hemodynamic fMRI signal. Furthermore, alpha oscillations also affect behavioral performance in perceptual tasks. However, it is surprisingly unclear which latent perceptual or cognitive mechanisms mediate this effect. For example, an open question is whether neuronal excitability fluctuations induced by alpha oscillations affect an observer’s acuity or perceptual bias. I will present a series of experiments that aim to clarify the link between oscillatory power and perceptual performance. In short, these experiments indicate that performance during moments of weak pre-stimulus power, indicating greater excitability, is best described by a more liberal detection criterion rather than a change in detection sensitivity or discrimination accuracy. I will argue that this effect is due to an amplification of both signal and noise, and that this amplification occurs already during the first stages of visual processing.
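To make the criterion-versus-sensitivity distinction concrete, here is a minimal signal-detection sketch using the standard formulas d' = z(H) - z(FA) and c = -(z(H) + z(FA))/2; the hit and false-alarm rates below are invented for illustration and are not data from these experiments.

```python
# Minimal signal-detection sketch; the hit/false-alarm rates are hypothetical.
from scipy.stats import norm

def sdt(hit_rate, fa_rate):
    """Return sensitivity d' and criterion c from hit and false-alarm rates."""
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Hypothetical detection performance split by pre-stimulus alpha power.
weak_alpha   = sdt(hit_rate=0.75, fa_rate=0.25)   # high excitability
strong_alpha = sdt(hit_rate=0.65, fa_rate=0.15)   # low excitability

print("weak alpha:   d' = %.2f, c = %.2f" % weak_alpha)
print("strong alpha: d' = %.2f, c = %.2f" % strong_alpha)
# A more liberal criterion appears as a lower c with little change in d',
# consistent with amplification of both signal and noise.
```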

The rhythms of visual attention

Speaker: Laura Dugué, Paris Descartes University

Despite the impression that our visual perception is seamless and continuous across time, evidence suggests that our visual experience relies on a series of discrete moments, similar to the snapshots of a video clip. My research focuses on these perceptual and attentional rhythms. Information appears to be processed in discrete samples, with our ability to discriminate and attend to visual stimuli fluctuating between favorable and less favorable moments. I will present a series of experiments, using multimodal functional neuroimaging combined with psychophysical measurements in healthy humans, that assess the mechanisms underlying psychophysical performance during and between two perceptual samples, and how these rhythmic mental representations are implemented at the neural level. I will argue that two sampling rhythms coexist: the alpha rhythm (8–12 Hz), which supports sensory, perceptual sampling, and the theta rhythm (3–8 Hz), which supports rhythmic attentional exploration of the visual environment.

Rhythmic sampling of the visual environment provides critical flexibility

Speaker: Ian Fiebelkorn, Princeton University

Environmental sampling of spatial locations is a fundamentally rhythmic process. That is, both attention-related boosts in sensory processing and the likelihood of exploratory movements (e.g., saccades in primates and whisking in rodents) are linked to theta rhythms (3–8 Hz). I will present electrophysiological data, from humans and monkeys, demonstrating that intrinsic theta rhythms in the fronto-parietal network organize neural activity into two alternating attentional states. The first state is associated with both (i) the suppression of covert and overt attentional shifts and (ii) enhanced visual processing at a behaviorally relevant location. The second state is associated with attenuated visual processing at the same location (i.e., the location that received a boost in sensory processing during the first attentional state). In this way, theta-rhythmic sampling provides critical flexibility, preventing us from becoming overly focused on any single location. Approximately every 250 ms, there is a window of opportunity when it is easier to disengage from the presently attended location and shift to another location. Based on these recent findings, we propose a rhythmic theory of environmental sampling. The fronto-parietal network is positioned at the nexus of sensory and motor functions, directing both attentional and motor aspects of environmental sampling. Theta rhythms might help to resolve potential functional conflicts in this network, by temporally isolating sensory (i.e., sampling) and motor (i.e., shifting) functions. This proposed role for theta rhythms in the fronto-parietal network could be a more general mechanism for providing functional flexibility in large-scale networks.

< Back to 2019 Symposia

Visual Search: From youth to old age, from the lab to the world

Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 2
Organizer(s): Beatriz Gil-Gómez de Liaño, Brigham & Women’s Hospital-Harvard Medical School and Cambridge University
Presenters: Beatriz Gil-Gómez de Liaño, Iris Wiegand, Martin Eimer, Melissa L-H Võ, Lara García-Delgado, Todd Horowitz

< Back to 2019 Symposia

Symposium Description

In all stages of life, visual search is a fundamental aspect of everyday tasks: from a child looking for the right Lego blocks, to her parent searching for lost keys in the living room, to an expert hunting for signs of cancer in a lung CT, to a grandmother finding the right tablets in the medicine cabinet. Many (perhaps most) cognitive processes interact with selective attention. Those processes change from childhood to older adulthood, and vary between the processing of simple elements and the processing of more complex objects in richer environments. This symposium uses visual search tasks as a way to probe changes and consistencies in cognition over the lifespan and in different types of real-world environments. Basic research in visual search has revealed essential knowledge about human behavior in vision and cognitive science, but usually in repetitive, unrealistic environments that often lack ecological validity. This symposium aims to go one step further, toward more realistic situations, and give insights into how humans from early childhood to old age perform visual search in the real world. Importantly, we will show how essential this knowledge is for developing systems that address global human challenges in today’s society. The multidisciplinary and applied character of this proposal, spanning vision science, neuroscience, medicine, engineering, video game applications and education, makes it of interest to students, postdocs, faculty, and even a general audience. This symposium is a perfect example of how cognitive and vision science can be transferred to society in real products that improve human lives, involving adults as well as children and older adults. The first two talks will address age differences in visual search from childhood to younger and older adulthood in more realistic environments. The third talk will give us clues to understanding how the brain processes that support visual search change over the lifespan. The lifespan approach can give us insights to better understand visual search as a whole. In the fourth and fifth talks we will turn to visual search in the real world, its applications and new challenges. We will review what we know about visual search in real and virtual scenes (fourth talk), including applications of visual search in real-world tasks. In the last talk we will show how the engineering and video game fields have made it possible to develop a reliable diagnostic tool based on crowdsourced visual search, in which people of all ages, from youth to old age, help diagnose diseases such as malaria, tuberculosis or breast cancer in the real world.

Presentations

Visual Search in children: What we know so far, and new challenges in the real world.

Speaker: Beatriz Gil-Gómez de Liaño, Brigham & Women’s Hospital-Harvard Medical School and Cambridge University

While we have a very substantial body of research on visual search in adults, there is a much smaller literature on children, despite the importance of search in cognitive development. Visual search is a vital task in the everyday life of children: looking for friends in the park, choosing the appropriate word from a word list in a quiz at school, looking for the numbers given in a math problem… For feature search (e.g. “pop-out” of red among green), it is well established that infants and children generally perform similarly to adults, showing that exogenous attention is stable across the lifespan. However, for conjunction search tasks there is evidence of age-related performance differences through all stages of life, showing the typical inverted-U-shaped function from childhood to older age. In this talk I will review some recent work and present new data showing that different mechanisms of selective attention operate at different ages within childhood, not only at a quantitative level but also qualitatively. Target salience, reward history, child-friendly stimuli and video-game-like tasks may also be important factors modulating attention in visual search in childhood, showing that children’s attentional processes can be more effective than has been believed to date. We will also show new results from a visual search foraging task, highlighting it as a potentially useful task for a more complete study of cognitive and attentional development in the real world. This work leads to a better understanding of typical cognitive development and gives us insights into developmental attentional deficits.

Visual Search in the older age: Understanding cognitive decline.

Speaker: Iris Wiegand, Max Planck UCL Center for Computational Psychiatry and Ageing Research

Did I miss that signpost? – Where did I leave my glasses? Older adults increasingly report experiencing such cognitive failures. Consistent with this experience, age-related decline has been demonstrated in standard visual search experiments. These standard laboratory tasks typically use simple stimulus material and brief trial structures that are well designed to isolate a specific cognitive process component. Real-world tasks, however, while built of these smaller components, are complex and extend over longer periods of time. In this talk, I will compare findings on age differences in simple visual search experiments to our recent findings from extended hybrid (visual and memory) search and foraging tasks. The extended search tasks resemble complex real-world tasks more closely and enable us to look at age differences in attention, memory, and strategic process components within one single task. Surprisingly, after generalized age-related slowing of reaction times (RT) was controlled for, the extended search tasks did not reveal any age-specific deficits in attention and memory functions. However, we did find age-related decline in search efficiency, which was explained by differences between age groups in foraging strategies. I will discuss how these new results challenge current theories of cognitive aging and what impact they could have on the neuropsychological assessment of age-related cognitive changes.

Component processes of Visual Search: Insights from neuroscience.

Speaker: Martin Eimer, Birkbeck, University of London

I will discuss cognitive and neural mechanisms that contribute to visual search (VS), how these mechanisms are organized in real time, and how they change across the lifespan. These component processes include the ability to activate representations of search targets (attentional templates), the guidance of attention towards target objects, as well as the subsequent attentional selection of these objects and their encoding into working memory. The efficiency of VS performance changes considerably across the lifespan. I will discuss findings from search experiments with children, adults, and the elderly, in order to understand which component processes of VS show the most pronounced changes with age. I will focus on the time course of target template activation processes, differences between space-based and feature-based attentional guidance, and the speed with which attention is allocated to search targets.

Visual Search goes real: The challenges of going from the lab to (virtual) reality.

Speaker: Melissa L-H Võ, Goethe University Frankfurt

Searching for your keys can be easy if you know where you put them. But when your daughter loves playing with keys and has the bad habit of randomly placing them in the fridge or in a pot, your routine search might become a nuisance. What makes search in the real world usually so easy and sometimes so utterly hard? There has been a trend to study visual perception in increasingly naturalistic settings, due to the legitimate concern that evidence gathered from simple, artificial laboratory experiments does not translate to the real world. For instance, how can one even attempt to measure set size effects in real-world search? Does memory play a larger role when you have to move your body towards the search target? Do target features even matter when we have scene context to guide our way? In this talk, I will review some of my lab’s latest efforts to study visual search in increasingly realistic environments, the great possibilities of virtual environments, and the new challenges that arise when moving away from highly controlled laboratory settings for the sake of getting real.

Crowdsourcing Visual Search in the real world: Applications to Collaborative Medical Image Diagnosis.

Speaker: Lara García-Delgado, Biomedical Image Technologies, Department of Electronic Engineering at Universidad Politécnica de Madrid, and member of Spotlab, Spain.
Additional Authors: Miguel Luengo-Oroz, Daniel Cuadrado, & María Postigo. Universidad Politécnica de Madrid & founders of Spotlab.

We will present the MalariaSpot.org project, which develops collective tele-diagnosis systems through visual search video games to empower citizens of all ages to collaborate in solving global health challenges. It is based on a crowd-computing platform, which analyses medical images taken by a 3D-printed microscope embedded in an internet-connected smartphone, using image processing and human crowdsourcing through online visual search and foraging video games. It runs on the collective power of society in an engaging way, using vision science and big data to contribute to global health. So far, more than 150,000 citizens around the world have learned about and contributed to the diagnosis of malaria and tuberculosis. The multidisciplinary nature of this project, at the crossroads of medicine, vision science, video games, artificial intelligence and education, involves a diverse range of stakeholders and requires tailoring the message to each discipline and cultural context. From education activities to mainstream media and policy engagement, the digital collaboration concept behind the project has already impacted several dimensions of society.

Discussant

Speaker: Todd Horowitz, Program Director at the National Cancer Institute, USA.

The discussant will summarize the five talks and open a general discussion with the audience about applications of visual search in real life and across the lifespan.

< Back to 2019 Symposia

Reading as a visual act: Recognition of visual letter symbols in the mind and brain

Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 1
Organizer(s): Teresa Schubert, Harvard University
Presenters: Teresa Schubert, Alex Holcombe, Kalanit Grill-Spector, Karin James

< Back to 2019 Symposia

Symposium Description

A large proportion of our time as literate adults is spent reading: Deriving meaning from visual symbols. Letter symbols have only been in use for a few millennia; our visual system, which may have evolved to recognize lions and the faces of our kin, is now required to recognize the written word “LION” and the handwriting of a nephew. How does the visual system accomplish this unique feat of recognition? A wealth of studies consider early visual abilities that are involved in letter recognition, but the study of these symbols as visual objects is relatively rare. In this symposium, we will highlight work by a growing number of researchers attempting to bridge the gap in research between vision and language by investigating letter and word recognition processes. In addition to interest in reading on its own merits, we propose that a minimal understanding of letter recognition is relevant to vision scientists in related domains. Many popular paradigms, from visual search to the attentional blink, use letters as stimuli. Letters are also a unique class within visual objects, and an understanding of these stimuli can constrain broader theories. Furthermore, letters can be used as a comparison class to other stimuli with which humans have high levels of expertise, such as faces and tools. In this symposium, we will discuss the state of the science of letter recognition from both a cognitive and a neural perspective. We will provide attendees with information specific to letter/word recognition and situate these findings relative to broader visual cognition. Our speakers span the range from junior to established scientists and use both behavioral and neural approaches. In the first talk, Schubert will present an overview of letter recognition, describing the hierarchical stages of abstraction and relating them to similar stages proposed in object recognition. In the second talk, Holcombe will address the relationship between domain-general abilities and letter recognition, by manipulating orthographic properties such as reading direction to interrogate capacity limits and laterality effects in visual working memory. In the third talk, Grill-Spector will discuss how foveal visual experience with words contributes to the organization of ventral temporal cortex over development. In the fourth talk, James will discuss the relationship between letter recognition and letter production. In addition to their visual properties, letters have associated motor plans for production, and she will present evidence suggesting that this production information may be strongly linked to letter recognition. Finally, we will integrate these levels into a discussion of broad open questions in letter recognition that have relevance across visual perception, such as: What are the limits of the flexibility of visual recognition systems? At what level do capacity limits in memory encoding operate? What pressures give rise to the functional organization of ventral temporal cortex? What is the extent of interactions between systems for visual perception and for motor action? On the whole, we anticipate that this symposium will provide a new perspective on the study of letter recognition and its relevance to work across the range of visual cognition.

Presentations

How do we recognize letters as visual objects?

Speaker: Teresa Schubert, Harvard University
Additional Authors: David Rothlein, VA Boston Healthcare System; Brenda Rapp, Johns Hopkins University

How do we recognize b and B as instances of the same letter? The cognitive mechanisms of letter recognition permit abstraction across highly different visual exemplars of the same letter (b and B), while also differentiating between highly similar exemplars of different letters (c and e). In this talk, I will present a hierarchical framework for letter recognition which involves progressively smaller reliance on sensory stimulus details to achieve abstract letter representation. In addition to abstraction across visual features, letter recognition in this framework also involves different levels of abstraction in spatial reference frames. This theory was developed based on data from individuals with acquired letter identification deficits (subsequent to brain lesion) and further supported by behavioral and neural research with unimpaired adult readers. I will relate this letter recognition theory to the seminal Marr & Nishihara (1978) framework for object recognition, arguing that letter recognition and visual object recognition require a number of comparable computations, leading to broadly similar recognition systems. Finally, I will compare and contrast neural evidence of cross-modal (visual and auditory letter name) representations for letters and objects. Overall, this talk will provide a theoretical and empirical framework within which to consider letter recognition as a form of object recognition.

Implicit reading direction and limited-capacity letter identification

Speaker: Alex Holcombe, University of Sydney
Additional Authors: Kim Ransley, University of Sydney

Reading this sentence was quite an accomplishment. You overcame a poor ability, possibly even a complete inability, to simultaneously identify multiple objects – according to the influential “E-Z Reader” model of reading, humans can identify only one word at a time. In the field of visual attention, it is known that if one must identify multiple simultaneously presented stimuli, spatial biases may be present but are often small. Reading a sentence, by contrast, involves a highly stereotyped attentional routine with rapid but serial, or nearly serial, identification of stimuli from left to right. Unexpectedly, my lab has found evidence that this reading routine is elicited when just two widely spaced letters are briefly presented and observers are asked to identify both letters. We find a large left-side performance advantage that is absent or reversed when the two letters are rotated to face to the left instead of to the right. Additional findings from RSVP (rapid serial visual presentation) lead us to suggest that both letters are selected by attention simultaneously, with the bottleneck at which one letter is prioritized sitting at a late stage of processing – identification or working memory consolidation. Thus, a rather minimal cue of letter orientation elicits a strong reading-direction-based prioritization routine, which allows a better understanding of both the bottleneck in visual identification and of how reading overcomes it.

How learning to read affects the function and structure of ventral temporal cortex

Speaker: Kalanit Grill-Spector, Stanford University
Additional Authors: Marisa Nordt, Stanford University; Vaidehi Natu, Stanford University; Jesse Gomez, Stanford University and UC Berkeley; Brianna Jeska, Stanford University; Michael Barnett, Stanford University

Becoming a proficient reader requires substantial learning over many years. However, it is unknown how learning to read affects the development of distributed visual representations across human ventral temporal cortex (VTC). Using fMRI and a data-driven approach, we examined if and how distributed VTC responses to characters (pseudowords and numbers) develop after age 5. Results reveal development that is specific to anatomical location and hemisphere. With development, distributed responses to words and characters became more distinctive and informative in lateral but not medial VTC, and in the left but not the right hemisphere. While development of voxels with both positive and negative preference to characters affected distributed information, only activity across voxels with positive preference to characters correlated with reading ability. We also tested what developmental changes occur in the gray and white matter by obtaining quantitative MRI (qMRI) and diffusion MRI (dMRI) data in the same participants. T1 relaxation time from qMRI and mean diffusivity (MD) from dMRI provide independent measurements of microstructural properties. In character-selective regions in lateral VTC, but not in place-selective regions in medial VTC, we found that T1 and MD decreased from age 5 to adulthood, as they did in the adjacent white matter. These T1 and MD decreases are consistent with tissue growth and were correlated with the apparent thinning of lateral VTC. These findings suggest the intriguing possibility that regions that show protracted functional development also show protracted structural development. Our data have important ramifications for understanding how learning to read affects brain development, and for elucidating neural mechanisms of reading disabilities.

Visual experiences during letter production contribute to the development of the neural systems supporting letter perception

Speaker: Karin James, Indiana University
Additional Authors: Sophia Vinci-Booher, Indiana University

Letter production is a perceptual-motor activity that creates visual experiences with the practiced letters. Past research has focused on the importance of the motor production component of writing by hand, with less emphasis placed on the potential importance of the visual percepts that are created. We sought to better understand how the different visual percepts that result from letter production are processed at different levels of literacy experience. During fMRI, three groups of participants, younger children, older children, and adults, ranging in age from 4.5 to 22 years old, were presented with dynamic and static re-presentations of their own handwritten letters, static presentations of an age-matched control’s handwritten letters, and typeface letters. In younger children, we found that only the ventral-temporal cortex was recruited, and only for handwritten forms. The response in the older children also included only the ventral-temporal cortex but was associated with both handwritten and typed letter forms. The response in the adults was more distributed than in the children and was associated with all types of letter forms. Thus, the youngest children processed exemplars, but not letter categories, in the VTC, while older children and adults generalized their processing to many letter forms. Our results demonstrate differences in the neural systems that support letter perception at different levels of experience and suggest that the perception of handwritten forms is an important component of how letter production contributes to developmental changes in brain processing.

< Back to 2019 Symposia

2019 Symposia

Reading as a visual act: Recognition of visual letter symbols in the mind and brain

Organizer(s): Teresa Schubert, Harvard University
Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 1

A great deal of our time as adults is spent reading: Deriving meaning from visual symbols. Our brains, which may have evolved to recognize a lion, now recognize the written word “LION”. Without recognizing the letters that comprise a word, we cannot access its meaning or its pronunciation: Letter recognition forms the basis of our ability to read. In this symposium, we will highlight work by a growing number of researchers attempting to bridge the gap in research between vision and language by investigating letter recognition processes, from both a behavioral and brain perspective. More…

Rhythms of the brain, rhythms of perception

Organizer(s): Laura Dugué, Paris Descartes University & Suliann Ben Hamed, Université Claude Bernard Lyon I
Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 2

The phenomenological, continuous, unitary stream of our perceptual experience appears to be an illusion. Accumulating evidence suggests that what we perceive of the world, and how we perceive it, rises and falls rhythmically at precise temporal frequencies. Brain oscillations (rhythmic neural signals) naturally appear as key neural substrates for these perceptual rhythms. How these brain oscillations condition local neuronal processes, long-range network interactions, and perceptual performance is a central question in visual neuroscience. In this symposium, we will present an overarching review of this question, combining evidence from monkey neural and human EEG recordings, TMS interference studies, and behavioral analyses. More…

What can be inferred about neural population codes from psychophysical and neuroimaging data?

Organizer(s): Fabian Soto, Department of Psychology, Florida International University
Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 1

Vision scientists have long assumed that it is possible to make inferences about neural codes from indirect measures, such as those provided by psychophysics (e.g., thresholds, adaptation effects) and neuroimaging. While this approach has been very useful for understanding the nature of visual representation in a variety of areas, it is not always clear under what circumstances and assumptions such inferences are valid. This symposium has the goal of highlighting recent developments in computational modeling that allow us to give clearer answers to such questions. More…

Visual Search: From youth to old age, from the lab to the world

Organizer(s): Beatriz Gil-Gómez de Liaño, Brigham & Women’s Hospital-Harvard Medical School and Cambridge University
Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 2

This symposium aims to show how visual search works in children, adults and older adults, in realistic settings and environments. We will review what we know about visual search in real and virtual scenes, and its applications to solving global human challenges. Insights into the brain processes underlying visual search across the lifespan will also be presented. The final objective is to better understand visual search as a whole, across the lifespan and in the real world, and to demonstrate how science can be transferred to society to improve human lives, involving children as well as younger and older adults. More…

What Deafness Tells Us about the Nature of Vision

Organizer(s): Rain Bosworth, Ph.D., Department of Psychology, University of California, San Diego
Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 1

It is widely believed that loss of one sense leads to enhancement of the remaining senses – for example, that deaf people see better and blind people hear better. The reality, uncovered by 30 years of research, is more complex, and this complexity provides a fuller picture of the brain’s adaptability in the face of atypical sensory experiences. In this symposium, neuroscientists and vision scientists will discuss how sensory, linguistic, and social experiences during early development have lasting effects on perceptual abilities and visuospatial cognition. Presenters offer new findings that provide surprising insights into the neural and behavioral organization of the human visual system. More…

Prefrontal cortex in visual perception and recognition

Organizer(s): Biyu Jade He, NYU Langone Medical Center
Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 2

The role of prefrontal cortex (PFC) in vision remains mysterious. While it is well established that PFC neuronal activity reflects visual features, it is commonly thought that such feature encoding in PFC is only in the service of behaviorally relevant functions. However, recent emerging evidence challenges this notion, and instead suggests that the PFC may be integral to visual perception and recognition. This symposium will address these issues from complementary angles, deriving insights from neuronal tuning in nonhuman primates, neuroimaging and lesion studies in humans, and recent developments in artificial intelligence, and drawing implications for psychiatric disorders. More…

Prefrontal cortex in visual perception and recognition

Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 2
Organizer(s): Biyu Jade He, NYU Langone Medical Center
Presenters: Diego Mendoza-Halliday, Vincent B. McGinty, Theofanis I Panagiotaropoulos, Hakwan Lau, Moshe Bar

< Back to 2019 Symposia

Symposium Description

To date, the role of prefrontal cortex (PFC) in visual perception and recognition remains mysterious. While it is well established that PFC neuronal activity reflects visual stimulus features along a wide range of dimensions (e.g., position, color, motion direction, faces, …), it is commonly thought that such feature encoding in PFC is only in the service of behaviorally relevant functions, such as working memory, attention, task rules, and report. However, recent emerging evidence is starting to challenge this notion, and instead suggests that contributions by the PFC may be integral to perceptual functions themselves. Currently, in the field of consciousness, an intense debate revolves around whether the PFC contributes to conscious visual perception. We believe that integrating insight from studies aiming to understand the neural basis of conscious visual perception with that from studies elucidating visual stimulus feature encoding will be valuable for both fields, and necessary for understanding the role of PFC in vision. This symposium brings together a group of leading scientists at different stages of their careers who have all made important contributions to this topic. The talks will address the role of the PFC in visual perception and recognition from a range of complementary angles, including neuronal tuning in nonhuman primates, neuroimaging and lesion studies in humans, recent developments in artificial neural networks, and implications for psychiatric disorders. The first two talks, by Mendoza-Halliday and McGinty, will address neuronal coding of perceived visual stimulus features, such as motion direction and color, in the primate lateral PFC and orbitofrontal cortex, respectively. These two talks will also cover how neural codes for perceived visual stimulus features overlap with or segregate from neural codes for stimulus features maintained in working memory and neural codes for object values, respectively. Next, the talk by Panagiotaropoulos will describe neuronal firing and oscillatory activity in the primate PFC that reflect the content of visual consciousness, including both complex objects such as faces and low-level stimulus properties such as motion direction. The talk by Lau will extend these findings and provide an updated synthesis of the literature on PFC’s role in conscious visual perception, including lesion studies and recent developments in artificial neural networks. Lastly, Bar will present a line of research that establishes the role that top-down input from PFC to the ventral visual stream plays in object recognition, touching upon topics of prediction and contextual facilitation. In sum, this symposium will present an updated view of what we know about the role of PFC in visual perception and recognition, synthesizing insight gained from studies of conscious visual perception and classic vision research, and across primate neurophysiology, human neuroimaging, patient studies and computational models. The symposium targets the general VSS audience, and will be accessible and of interest to both students and faculty.

Presentations

Partially-segregated population activity patterns represent perceived and memorized visual features in the lateral prefrontal cortex

Speaker: Diego Mendoza-Halliday, McGovern Institute for Brain Research at MIT, Cambridge MA
Additional Authors: Julio Martinez-Trujillo, Robarts Research Institute, Western University, London, ON, Canada.

Numerous studies have shown that the lateral prefrontal cortex (LPFC) plays a major role in both visual perception and working memory. While neurons in LPFC have been shown to encode perceived and memorized visual stimulus attributes, it remains unclear whether these two functions are carried out by the same or different neurons and population activity patterns. To systematically address this, we recorded the activity of LPFC neurons in macaque monkeys performing two similar motion direction match-to-sample tasks: a perceptual task, in which the sample moving stimulus remained perceptually available during the entire trial, and a memory task, in which the sample disappeared and was memorized during a delay. We found neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons preferentially or exclusively encoded perceived or memorized directions, whereas others encoded directions in a manner invariant to their representational nature. Using population decoding analysis, we show that this form of mixed selectivity allows the population codes representing perceived and memorized directions to be both sufficiently distinct to determine whether a given direction was perceived or memorized, and sufficiently overlapping to generalize across tasks. We further show that such population codes represent visual feature space in a parametric manner, show more temporal dynamics for memorized than perceived features, and are more closely linked to behavioral performance in the memory task than in the perceptual task. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features.
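As a schematic illustration of the cross-task generalization logic described above (simulated data, not the recorded LPFC activity; the classifier choice and all parameters are assumptions), one can train a direction decoder on perceptual-task trials and test it on memory-task trials:

```python
# Schematic cross-task generalization analysis on simulated population data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_per, n_neurons = 50, 80
directions = np.arange(6)                      # six motion directions

def simulate(task_shift):
    """Noisy population responses to each direction, with a task-specific offset."""
    X, y = [], []
    for d in directions:
        tuning = np.sin(np.linspace(0, 2 * np.pi, n_neurons) + d) + task_shift
        X.append(tuning + rng.normal(0, 0.8, size=(n_per, n_neurons)))
        y.append(np.full(n_per, d))
    return np.vstack(X), np.concatenate(y)

X_ptrain, y_ptrain = simulate(task_shift=0.0)  # perceptual task (training set)
X_ptest,  y_ptest  = simulate(task_shift=0.0)  # perceptual task (held-out set)
X_mem,    y_mem    = simulate(task_shift=0.5)  # memory task

clf = LogisticRegression(max_iter=2000).fit(X_ptrain, y_ptrain)
print("within-task accuracy:", clf.score(X_ptest, y_ptest))
print("cross-task accuracy :", clf.score(X_mem, y_mem))
# High cross-task accuracy indicates overlapping codes; a drop relative to the
# within-task accuracy indicates partial segregation between the two codes.
```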

Mixed selectivity for visual features and economic value in the primate orbitofrontal cortex

Speaker: Vincent B. McGinty, Rutgers University – Newark, Center for Molecular and Behavioral Neuroscience

Primates use their acute sense of vision not only to identify objects, but also to assess their value, that is, their potential for benefit or harm. How the brain transforms visual information into value information is still poorly understood, but recent findings suggest a key role for the orbitofrontal cortex (OFC). The OFC includes several cytoarchitectonic areas within the ventral frontal lobe, and has a long-recognized role in representing object value and organizing value-driven behavior. One of the OFC’s most striking anatomical features is the massive, direct input it receives from the inferotemporal cortex, a ventral temporal region implicated in object identification. A natural hypothesis, therefore, is that in addition to well-documented value coding properties, OFC neurons may also represent visual features in a manner similar to neurons in the ventral visual stream. To test this hypothesis, we recorded OFC neurons in macaque monkeys performing behavioral tasks in which the value of visible objects was manipulated independently from their visual features. Preliminary findings include a subset of OFC cells that were modulated by object value, but only in response to objects that shared a particular visual feature (e.g. the color red). This form of ‘mixed’ selectivity suggests that the OFC may be an intermediate computational stage between visual identification and value retrieval. Moreover, recent work showing similar mixed value-feature selectivity in inferotemporal cortex neurons suggests that neural mechanisms of object valuation may be distributed over a continuum of cortical regions, rather than compartmentalized in a strict hierarchy.

Mapping visual consciousness in the macaque prefrontal cortex

Speaker: Theofanis I Panagiotaropoulos, Neurospin, Paris, France

In multistable visual perception, the content of consciousness alternates spontaneously between mutually exclusive or mixed interpretations of competing representations. Identifying neural signals predictive of such intrinsically driven perceptual transitions is fundamental for resolving the mechanism, and identifying the brain areas, giving rise to visual consciousness. In a previous study, using a no-report paradigm of externally induced perceptual suppression, we have shown that functionally segregated neural populations in the macaque prefrontal cortex explicitly reflect the content of consciousness and encode task phase. Here I will present results from a no-report paradigm of binocular motion rivalry, based on an optokinetic nystagmus (OKN) reflex read-out of spontaneous perceptual transitions, coupled with multielectrode recordings of local field potentials and single-neuron discharges in the macaque prefrontal cortex. An increase in the rate of oscillatory bursts in the delta-theta band (1-9 Hz) and a decrease in the beta band (20-40 Hz) were predictive of spontaneous transitions in the content of visual consciousness, which was also reliably reflected in single-neuron discharges. Mapping these perceptually modulated neurons revealed stripes of competing populations, also observed in the absence of OKN. These results suggest that the balance of stochastic prefrontal fluctuations is critical in refreshing conscious perception, and that prefrontal neural populations reflect the content of consciousness. Crucially, consciousness-related activity in the prefrontal cortex could be observed not only for faces and complex objects but also for low-level stimulus properties like direction of motion, suggesting a reconsideration of the view that prefrontal cortex is not critical for consciousness.
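For readers unfamiliar with burst-rate measures, the following is a generic sketch of one way to quantify the rate of band-limited oscillatory bursts in a local field potential trace; it is not the study's actual analysis pipeline, and the band limits, threshold, and stand-in signal are placeholders.

```python
# Generic burst-rate sketch on a stand-in signal; not the study's analysis code.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                           # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
lfp = np.random.default_rng(3).normal(0, 1, t.size)   # placeholder LFP trace

def burst_rate(signal, low, high, thresh_sd=2.0):
    """Bursts per second: onsets where the band-limited envelope crosses a threshold."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, signal)))
    above = env > env.mean() + thresh_sd * env.std()
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    return onsets.size / (signal.size / fs)

print("delta-theta (1-9 Hz) burst rate :", burst_rate(lfp, 1, 9))
print("beta (20-40 Hz) burst rate      :", burst_rate(lfp, 20, 40))
```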

Persistent confusion on the role of the prefrontal cortex in conscious visual perception

Speaker: Hakwan Lau, UCLA, USA

Is the prefrontal cortex (PFC) critical for conscious perception? Here we address three common misconceptions: (1) that PFC lesions do not affect subjective perception; (2) that PFC activity does not reflect specific perceptual content; and (3) that PFC involvement in studies of perceptual awareness is solely driven by the need to make reports required by the experimental tasks rather than by subjective experience per se. These claims are often made in high-profile statements in the literature, but they are in fact grossly incompatible with empirical findings. The available evidence highlights PFC’s essential role in enabling the subjective experience of perception, as opposed to the objective capacity to perform visual tasks; conflating the two can also be a source of confusion. Finally, we will also discuss the role of PFC in perception in light of current machine learning models. If the PFC is treated as somewhat akin to a randomly connected recurrent neural network, rather than the early layers of a convolutional network, the lack of prominent lesion effects may be easily understood.

What’s real? Prefrontal facilitations and distortions

Speaker: Moshe Bar, Bar-Ilan University, Israel
Additional Authors: Shira Baror, Bar-Ilan University, Israel

By now, we know that visual perception involves much more than bottom-up processing. Specifically, we have shown that object recognition is facilitated, sometimes even afforded, by top-down projections from the lateral and inferior prefrontal cortex. Next, we have found that the medial prefrontal cortex, in synchrony with the parahippocampal cortex and the retrosplenial cortex, forms the ‘contextual associations network’, a network that is sensitive to associative information in the environment and which utilizes contextual information to generate predictions about objects. Using various behavioral and imaging methods, we found that contextual processing facilitates object recognition very early in perception. Here, we go further to discuss the overlap of the contextual associations network with the default mode network and its implications for enhancing conscious experience, within and beyond the visual realm. We corroborate this framework with findings implying that top-down predictions are not limited to visual information but are extracted from social or affective contexts as well. We present recent studies suggesting that although associative processes take place by default, they are nonetheless context dependent and may be inhibited according to goals. We will further discuss clinical implications, with recent findings that demonstrate how activity in the contextual associations network is altered in visual tasks performed by patients experiencing major depressive disorder. To conclude, contextual processing, sustained by the co-activation of frontal and memory-related brain regions, is suggested to constitute a critical mechanism in perception, memory and thought in the healthy brain.

< Back to 2019 Symposia

What Deafness Tells Us about the Nature of Vision

Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): Rain Bosworth, Ph.D., Department of Psychology, University of California, San Diego
Presenters: Matthew Dye, Ph.D., Olivier Pascalis, Ph.D., Rain Bosworth, Ph.D., Fang Jiang, Ph.D.

< Back to 2019 Symposia

Symposium Description

In the United States, around 3 in 1,000 newborns are born with severe to profound hearing loss. It is widely believed that auditory deprivation leads to compensatory enhancement of vision and touch in these children. Over the last 30 years, various visual abilities have been studied extensively in deaf populations. Both behavioral and neural adaptations have been reported, but some visual systems seem to exhibit more experience-dependent plasticity than others. A common ecological explanation for this variation in plasticity across systems is that some visual functions are more essential for successful interaction with the environment in the absence or attenuation of auditory input. As a result, more drastic changes are instantiated in these visual systems. For example, because deaf people are less able to utilize auditory cues to orient visual attention, peripheral vision, more than central vision, may play a crucial role in environmental monitoring. Another explanation is that some visual systems are biologically more immature during early development, such as the peripheral retina and magnocellular visual processing pathway. This may facilitate greater experience-dependent plasticity. The situation is complicated by extensive use of speechreading and sign language within deaf populations – experiences that may also induce neuroplasticity. While both behavioral and neural differences in deaf vision are now well established, the underlying neural mechanisms that give rise to behavioral changes remain elusive. Despite the importance of understanding plasticity and variability within the visual system, there has never been a symposium on this topic at VSS. The aim of this symposium is therefore to bring together a diverse group of scientists who have made important contributions to this topic. They will integrate past and recent findings to illuminate our current understanding of the neuroplasticity of visual systems, and identify research directions that are likely to increase our understanding of the mechanisms underpinning variability and adaptability in visual processing. Matthew Dye will introduce key theoretical perspectives on visual functions in deaf and hearing populations, drawing attention to the multisensory nature of perception. Next, Olivier Pascalis will elucidate an important point – not all visual systems are equally plastic – discussing his recent findings with face processing and peripheral versus central visual processing. One theme that Dye and Pascalis will both address is that while deaf adults often show enhancements, deaf children do not, suggesting that successful behavioral adaptation may require the integration of multiple neural systems in goal-directed ways. Rain Bosworth will then present findings on altered face and motion perception in deaf adults and consider important methodological issues in the study of deaf vision. The last two presenters, Fang Jiang and Geo Kartheiser, will cover the neural underpinnings of deaf vision as revealed by neuroimaging using fMRI, EEG, and fNIRS. Together, the presenters will show how sensory, linguistic, and social experiences during an early sensitive period in development have lasting effects on visual perception and visuospatial cognition. As such, we anticipate this symposium will appeal to a wide range of attendees across various disciplines including developmental psychology, vision science and neuroscience.

Presentations

Spatial and Temporal Vision in the Absence of Audition

Speaker: Matthew Dye, Ph.D., Rochester Institute of Technology/National Technical Institute for the Deaf (RIT/NTID)

Changes in the visual system due to deafness provide information about how multisensory processes feed back to scaffold the development of unisensory systems. One common perspective in the literature is that visual inputs are highly spatial, whereas auditory inputs, in contrast, are highly temporal. A simple multisensory account of sensory reorganization therefore predicts spatial enhancements and temporal deficits within the visual system of deaf individuals. Here I will summarize our past and ongoing research, which suggests that evidence for this multisensory scaffolding hypothesis is confounded by language deprivation in many samples. This is because most deaf people are born to nonsigning parents, and deaf children do not have full access to the spoken language around them. By studying visual processing in deaf individuals who are exposed early to a perceivable visual language, such as American Sign Language, we (i) gain a better understanding of the interplay between auditory and visual systems during development, and (ii) accumulate evidence for the importance of early social interaction for the development of higher-order visual abilities. Our data suggest that changes in vision over space are ecologically driven and subject to cognitive control, and that early linguistic interaction is important for the development of sustained attention over time.

What is the Impact of Deafness on Face Perception and Peripheral Visual Field Sensitivity?

Speaker: Olivier Pascalis, Ph.D., Laboratoire de Psychologie et NeuroCognition, CNRS, Grenoble, France

It is well established that early profound deafness leads to enhancements in visual processes. Different findings are reported for peripheral versus central vision. Visual improvements have mainly been reported for the peripheral visual field, which is believed to be a result of deaf people’s need to compensate for inaccessible auditory cues in the periphery; for central visual processing, mixed results (including no changes, poorer, and superior performance) have been found for deaf people. We consider two intriguing (and often overlooked) issues that pertain to deaf vision. One, deaf people, and many hearing people too, use sign language, which requires steady fixation on the face. Signers pay rigorous attention to the face because faces provide critical intonational and linguistic information during communication. Two, this also means that most of the manual language information falls in the perceiver’s lower visual field, as the signer’s hands almost always fall in front of the torso region. I will present a series of studies in which we tried to separate the impacts of deafness and sign language experience on face processing and on peripheral field sensitivity. In order to address the role of sign language in the absence of deafness, we report results from hearing signers. Our results suggest that sign language experience, not associated with deafness, may also be a modulating factor in visual cognition.

Psychophysical Assessment of Contrast, Motion, Form, Face, and Shape Perception in Deaf and Hearing People

Speaker: Rain Bosworth, Ph.D., Department of Psychology, University of California, San Diego

Visual processing might be altered in deaf people for two reasons. One, they lack auditory input, compelling them to rely more on their intact visual modality. Two, many deaf people have extensive experience using a visual signed language (American Sign Language, ASL), which may alter certain aspects of visual perception that are important for processing ASL. While some deaf people have had ASL exposure since birth, by virtue of having deaf parents, many others are born to hearing parents with no signing knowledge and have delayed visual language exposure. In this study, we asked whether deafness and/or sign language experience impact visual perception in 40 Deaf signers and 40 Hearing nonsigners, using psychophysical tests of motion, form, shape, and face discrimination, while controlling for contrast detection, age, visuospatial IQ, and gender makeup. The Deaf signers were separated into two groups: Deaf native signers, who were exposed to ASL between ages 0 and 2 years, and Deaf late-exposed signers, who were exposed to ASL after the age of 3 years. Results indicated that enhanced face processing was found in Deaf native signers, who had early visual language exposure, but not in Deaf late-exposed signers. Moreover, Deaf late-exposed signers actually had impoverished motion processing compared to Deaf native signers and Hearing nonsigners. Together, these findings provide evidence that sign language exposure, or language deprivation, in the first 2 years of life has a lasting impact on visual perception in adults.

Measuring Visual Motion Processing in Early Deaf Individuals with Frequency Tagging

Speaker: Fang Jiang, Ph.D., Department of Psychology, University of Nevada, Reno, USA

Early deaf individuals show enhanced performance at some visual tasks, including the processing of visual motion. Deaf individuals’ auditory and association cortices have been shown to respond to visual motion; however, it is unclear how these responses relate to their enhanced motion-processing ability. Here I will present data from two recent studies in which we examined deaf and hearing participants’ fMRI and EEG responses to frequency-tagged presentations of directional motion. Our results suggest the intriguing possibility that deaf participants’ increased direction-selective motion responses in the right STS region could potentially support the behavioral advantage reported in previous studies.
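Frequency tagging drives the stimulus at a fixed rate and reads out the response at exactly that rate in the frequency domain. The snippet below is a minimal, hypothetical sketch of that readout for a single simulated EEG channel; the tag frequency, sampling rate, and signal-to-noise definition are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

fs, tag_hz, dur = 500.0, 7.5, 20.0          # sampling rate (Hz), tag frequency (Hz), duration (s); assumed values
t = np.arange(0.0, dur, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * tag_hz * t) + np.random.randn(t.size)   # toy tagged signal plus noise

spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

tag_bin = np.argmin(np.abs(freqs - tag_hz))
neighbors = np.r_[tag_bin - 10:tag_bin - 1, tag_bin + 2:tag_bin + 11]  # surrounding bins as noise estimate
snr = spectrum[tag_bin] / spectrum[neighbors].mean()  # tagged response relative to background
```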

< Back to 2019 Symposia

What can be inferred about neural population codes from psychophysical and neuroimaging data?

Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Fabian Soto, Department of Psychology, Florida International University
Presenters: Justin L. Gardner, Rosie Cowell, Kara Emery, Jason Hays, Fabian A. Soto

< Back to 2019 Symposia

Symposium Description

Vision scientists have long assumed that it is possible to make inferences about neural codes from indirect measures, such as those provided by psychophysics (e.g., thresholds, adaptation effects) and neuroimaging. While this approach has been very useful for understanding the nature of visual representation in a variety of areas, it is not always clear under what circumstances and assumptions such inferences are valid. Recent modeling work has shown that some patterns of results previously thought to be the hallmark of particular encoding strategies can be mimicked by other mechanisms. Examples abound: face adaptation effects once thought to be diagnostic of norm-based encoding are now known to be reproduced by other encoding schemes, properties of population tuning functions reconstructed from fMRI data can be explained by multiple neural encoding mechanisms, and tests of invariance applied to fMRI data may be unrelated to invariance at the level of neurons and neural populations. This highlights how important it is to study encoding models through simulation and mathematical theory, to get a better understanding of exactly what can and cannot be inferred about neural encoding from psychophysics and neuroimaging, and what assumptions and experimental designs are necessary to facilitate valid inferences. This symposium has the goal of highlighting recent advances in this area, which pave the way for modelers to answer similar questions in the future, and for experimentalists to perform studies with a clearer understanding of what designs, assumptions, and analyses are optimal to answer their research questions. Following a brief introduction to the symposium’s theme and some background (~5 minutes), each of the five scheduled talks will be presented (20 minutes each), followed by a Q&A from the audience (15 minutes). The format of the Q&A will be the following: questions from the audience will be directed to specific speakers, and after an answer other speakers will be invited to comment if they wish. Questions from one speaker to another will be allowed after all the audience questions are addressed. This symposium is targeted at a general audience of researchers interested in performing inferences about neural population codes from psychophysical and neuroimaging data. This includes any researcher interested in how visual dimensions (e.g., orientation, color, face identity and expression, etc.) are encoded in visual cortex, and in how this code is modified by high-level cognitive processes (e.g., spatial and feature attention, working memory, categorization, etc.) and learning (e.g., perceptual learning, value learning). It also includes researchers with a general interest in modeling and measurement. The target audience is composed of researchers at all career stages (i.e., students, postdoctoral researchers and faculty). Those attending this symposium will benefit from a clearer understanding of what inferences they can make about encoding of visual information from psychophysics and neuroimaging, and what assumptions are necessary to make such inferences. The audience will learn about recently discovered pitfalls in this type of research and newly developed methods to deal with such pitfalls.

Presentations

Inverted encoding models reconstruct the model response, not the stimulus

Speaker: Justin L. Gardner, Department of Psychology, Stanford University
Additional Authors: Taosheng Liu, Michigan State University

Life used to be simpler for sensory neuroscientists. Some measurement of neural activity, be it single-unit activity or an increase in BOLD response, was measured against systematic variation of a stimulus, and the resulting tuning functions were presented and interpreted. But as the field discovered signal in the pattern of responses across voxels in a BOLD measurement, or dynamic structure hidden within the activity of a population of neurons, computational techniques for extracting features not easily discernible from the raw measurement increasingly began to intervene between measurement and the presentation and interpretation of data. I will discuss one particular technique, the inverted encoding model, how it extracts model responses rather than stimulus representations, and what challenges this poses for the interpretation of results.
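As a concrete illustration of the pipeline under discussion, here is a minimal sketch of an inverted encoding model in the spirit of Brouwer and Heeger (2009): a linear forward model maps hypothesized channel responses to voxel responses, and the fitted weights are then inverted to reconstruct channel (model) responses, not the stimulus, from held-out data. Channel shapes, channel counts, and data dimensions are illustrative assumptions.

```python
import numpy as np

def channel_basis(oris_deg, centers_deg, kappa=5.0):
    # von Mises-shaped channels on the doubled angle (orientation is 180-deg periodic).
    d = np.deg2rad(2.0 * (np.asarray(oris_deg)[:, None] - np.asarray(centers_deg)[None, :]))
    return np.exp(kappa * (np.cos(d) - 1.0))              # trials x channels

def fit_iem(bold_train, oris_train, centers):
    C = channel_basis(oris_train, centers)                 # trials x channels
    W, *_ = np.linalg.lstsq(C, bold_train, rcond=None)     # channels x voxels, solves C @ W ~= B
    return W

def invert_iem(bold_test, W):
    # Reconstructed *channel* responses for each held-out trial (trials x channels).
    return bold_test @ np.linalg.pinv(W)
```

Note that the output of invert_iem is constrained to lie in the space spanned by the assumed channels, which is the sense in which the method reconstructs the model response rather than the stimulus.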

Bayesian modeling of fMRI data to infer modulation of neural tuning functions in visual cortex

Speaker: Rosie Cowell, University of Massachusetts Amherst
Additional Authors: Patrick S. Sadil, University of Massachusetts Amherst; David E. Huber, University of Massachusetts Amherst.

Many visual neurons exhibit tuning functions for stimulus features such as orientation. Methods for analyzing fMRI data reveal analogous feature-tuning in the BOLD signal (e.g., Inverted Encoding Models; Brouwer and Heeger, 2009). Because these voxel-level tuning functions (VTFs) are superficially analogous to the neural tuning functions (NTFs) observed with electrophysiology, it is tempting to interpret VTFs as mirroring the underlying NTFs. However, each voxel contains many subpopulations of neurons with different preferred orientations, and the distribution of neurons across the subpopulations is unknown. Because of this, there are multiple alternative accounts by which changes in the subpopulation-NTFs could produce a given change in the VTF. We developed a hierarchical Bayesian model to determine, for a given change in the VTF, which account of the change in underlying NTFs best explains the data. The model fits many voxels simultaneously, inferring both the shape of the NTF in different conditions and the distribution of neurons across subpopulations in each voxel. We tested this model in visual cortex by applying it to changes induced by increasing visual contrast — a manipulation known from electrophysiology to produce multiplicative gain in NTFs. Although increasing contrast caused an additive shift in the VTFs, the Bayesian model correctly identified multiplicative gain as the change in the underlying NTFs. This technique is potentially applicable to any fMRI study of modulations in cortical responses that are tuned to a well-established dimension of variation (e.g., orientation, speed of motion, isoluminant hue).
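To make the voxel-versus-neuron distinction concrete, the sketch below writes down the kind of forward (generative) model implied above: each voxel tuning function is a weighted mixture of subpopulation neural tuning functions, so the same voxel-level change can arise from different neural-level changes. This is only an illustration of the generative logic, not the authors' hierarchical Bayesian implementation; all parameter values are assumed.

```python
import numpy as np

def ntf(theta, pref, kappa=4.0, gain=1.0, baseline=0.1):
    # Neural tuning function: von Mises bump on the doubled orientation angle.
    d = np.deg2rad(2.0 * (theta - pref))
    return baseline + gain * np.exp(kappa * (np.cos(d) - 1.0))

def vtf(theta, prefs, weights, **ntf_kwargs):
    # Voxel tuning function: weighted sum of subpopulation NTFs. 'weights' is the
    # unknown per-voxel distribution of neurons over preferred orientations that
    # the hierarchical model has to infer from the data.
    return sum(w * ntf(theta, p, **ntf_kwargs) for w, p in zip(weights, prefs))

theta   = np.arange(0.0, 180.0)
prefs   = np.arange(0.0, 180.0, 22.5)
weights = np.random.dirichlet(np.ones(prefs.size))     # one hypothetical voxel

vtf_low  = vtf(theta, prefs, weights, gain=1.0)        # low-contrast condition
vtf_high = vtf(theta, prefs, weights, gain=2.0)        # multiplicative gain at the neural level
```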

Inferring neural coding strategies from adaptation aftereffects

Speaker: Kara Emery, University of Nevada Reno

Adaptation aftereffects have been widely used to infer mechanisms of visual coding. In the context of face processing, aftereffects have been interpreted in terms of two alternative models: 1) norm-based codes, in which the facial dimension is represented by the relative activity in a pair of broadly-tuned mechanisms with opposing sensitivities; or 2) exemplar codes, in which the dimension is sampled by multiple channels narrowly tuned to different levels of the stimulus. Evidence for or against these alternatives has been based on the different patterns of aftereffects they predict (e.g., whether there is adaptation to the norm, and how adaptation increases with stimulus strength). However, these predictions are often based on implicit assumptions about both the encoding and decoding stages of the models. We evaluated these latent assumptions to better understand how the alternative models depend on factors such as the number, selectivity, and decoding strategy of the channels, and to clarify the consequential differences between these coding schemes and the adaptation effects that are most diagnostic for discriminating between them. We show that the distinction between norm and exemplar codes depends more on how the information is decoded than on how it is encoded, and that some aftereffect patterns commonly proposed to distinguish the models fail to do so in principle. We also compare how these models depend on assumptions about the stimulus (e.g., broadband vs. punctate) and the impact of noise. These analyses point to the fundamental distinctions between different coding strategies and the patterns of visual aftereffects that are best for revealing them.
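The contrast between the two coding schemes can be made explicit with a toy simulation like the one below: a norm code with two broadly tuned opponent pools versus an exemplar code with many narrow channels, each adapted by reducing channel gain in proportion to its response to the adaptor and then read out. The channel shapes, the gain-based adaptation rule, and the decoders are illustrative assumptions, chosen only to show where the two schemes' predictions come from.

```python
import numpy as np

centers = np.linspace(-1.0, 1.0, 13)                  # exemplar channel preferences along the face dimension

def exemplar_channels(x, sigma=0.15):
    # Many narrowly tuned channels along the dimension.
    return np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

def norm_channels(x):
    # Two broadly tuned pools with opposing monotonic sensitivities around the norm (0).
    return np.array([max(1.0 + x, 0.0), max(1.0 - x, 0.0)])

def adapted_gain(resp_to_adaptor, strength=0.5):
    # Assumed adaptation rule: gain loss proportional to each channel's response to the adaptor.
    return 1.0 - strength * resp_to_adaptor / resp_to_adaptor.max()

def decode_exemplar(resp):
    return np.sum(resp * centers) / np.sum(resp)      # population-vector readout

def decode_norm(resp):
    return (resp[0] - resp[1]) / (resp[0] + resp[1])  # relative opponent activity

adaptor, test = 0.6, 0.0
aftereffect_exemplar = decode_exemplar(adapted_gain(exemplar_channels(adaptor)) * exemplar_channels(test)) - test
aftereffect_norm     = decode_norm(adapted_gain(norm_channels(adaptor)) * norm_channels(test)) - test
```

Varying the tuning widths, the adaptation rule, or the decoder in a toy model of this kind is essentially the exercise described above: some aftereffect patterns separate the two schemes and others do not.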

What can be inferred about changes in neural population codes from psychophysical threshold studies?

Speaker: Jason Hays, Florida International University
Additional Authors: Fabian A. Soto, Florida International University

The standard population encoding/decoding model is now routinely used to study visual representation through psychophysics and neuroimaging. Such studies are indispensable for understanding human visual neuroscience, where more invasive techniques are usually not available, but researchers should be careful not to interpret curves obtained from such indirect measures as directly comparable to analogous data from neurophysiology. Here we explore through simulation exactly what kind of inference can be made about changes in neural population codes from observed changes in psychophysical thresholds. We focus on the encoding of orientation by a dense array of narrow-band neural channels, and assume statistically optimal decoding. We explore several mechanisms of encoding change, which could be produced by factors such as attention and learning, and which have been highlighted in the previous literature: (non)specific gain, (non)specific bandwidth-narrowing, inward/outward tuning shifts, and specific suppression with(out) nonspecific gain. We compared the pattern of psychophysical thresholds produced by the model with and without the influence of such mechanisms, in several experimental designs. Each type of model produced a distinctive behavioral pattern, but only if the changes in encoding were strong enough and two or more experiments with different designs were performed (i.e., no single experiment can discriminate among all mechanisms). Our results suggest that identifying encoding changes from psychophysics is possible under the right conditions and assumptions, and that psychophysical threshold studies are a powerful alternative to neuroimaging in the study of visual neural representation in humans.
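A minimal version of this kind of simulation, under the stated assumptions (a dense bank of von Mises orientation channels, independent Poisson noise, and a statistically optimal decoder whose threshold scales with the inverse square root of Fisher information), might look like the sketch below; the parameter values and the single "nonspecific gain" manipulation are illustrative, not the authors' full model.

```python
import numpy as np

def rates(theta, prefs, gain=30.0, kappa=2.0):
    # Mean firing rates of a dense bank of von Mises orientation channels.
    d = np.deg2rad(2.0 * (theta - prefs))
    return gain * np.exp(kappa * (np.cos(d) - 1.0))

def fisher_information(theta, prefs, d_theta=0.01, **kw):
    # Independent Poisson noise: I(theta) = sum_i f_i'(theta)^2 / f_i(theta).
    f  = rates(theta, prefs, **kw)
    df = (rates(theta + d_theta, prefs, **kw) - f) / d_theta
    return np.sum(df ** 2 / f)

def threshold(theta, prefs, **kw):
    # An optimal decoder's discrimination threshold scales as 1 / sqrt(I).
    return 1.0 / np.sqrt(fisher_information(theta, prefs, **kw))

prefs = np.arange(0.0, 180.0, 5.0)
baseline_thr = threshold(45.0, prefs)                 # baseline encoding
gain_thr     = threshold(45.0, prefs, gain=60.0)      # nonspecific gain: thresholds fall by sqrt(2)
```

Each encoding-change mechanism listed above (bandwidth narrowing, tuning shifts, specific suppression, and so on) would be expressed as a different modification of the rates function, and its behavioral signature read off from the resulting threshold patterns.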

What can be inferred about invariance of visual representations from fMRI decoding studies?

Speaker: Fabian A. Soto, Florida International University
Additional Authors: Sanjay Narasiwodeyar, Florida International University

Many research questions in vision science involve determining whether stimulus properties are represented and processed independently in the brain. Unfortunately, most previous research has only vaguely defined what is meant by “independence,” which hinders its precise quantification and testing. Here we develop a new framework that links general recognition theory from psychophysics and encoding models from computational neuroscience. We focus on separability, a special form of independence that is equivalent to the concept of “invariance” often used by vision scientists, but we show that other types of independence can be formally defined within the theory. We show how this new framework allows us to precisely define separability of neural representations and to theoretically link this definition to psychophysical and neuroimaging tests of independence and invariance. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In addition, two commonly used operational tests of independence are re-interpreted within this new theoretical framework, providing insight into their correct use and interpretation. Finally, we discuss the results of an fMRI study used to validate and compare several tests of representational invariance, which confirm that the relations among them proposed by the theory are correct.
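One operational test commonly used in this literature is cross-decoding: train a classifier to discriminate levels of dimension A at one level of dimension B and test it at another level of B, with successful transfer taken as evidence of invariance. The sketch below implements that logic with a simple nearest-centroid classifier on hypothetical voxel patterns; it is a generic illustration of the test, not the specific procedure or fMRI analysis reported in this talk.

```python
import numpy as np

def nearest_centroid_fit(patterns, labels):
    # One centroid per class label (e.g., per level of dimension A).
    classes = np.unique(labels)
    centroids = np.array([patterns[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(patterns, classes, centroids):
    d = np.linalg.norm(patterns[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

def cross_decoding_accuracy(X_train, y_train, X_test, y_test):
    # Train at one level of the irrelevant dimension, test at another;
    # above-chance transfer is the operational signature of invariance.
    classes, centroids = nearest_centroid_fit(X_train, np.asarray(y_train))
    return np.mean(nearest_centroid_predict(X_test, classes, centroids) == np.asarray(y_test))
```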

< Back to 2019 Symposia

ARVO@VSS 2018

Clinical insights into basic visual processes

Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 1
Organizer(s): Paul Gamlin, University of Alabama at Birmingham; Ann E. Elsner, Indiana University; Ronald Gregg, University of Louisville
Presenters: Geunyoung Yoon, Artur Cideciyan, Ione Fine, MiYoung Kwon

< Back to 2018 Symposia

Symposium Description

This year’s biennial ARVO at VSS symposium features insights into human visual processing at the retinal and cortical level arising from clinical and translational research. The speakers will present recent work based on a wide range of state-of-the-art techniques, including adaptive optics, brain and retinal imaging, psychophysics, and gene therapy.

Presentations

Neural mechanisms of long-term adaptation to the eye’s habitual aberration

Speaker: Geunyoung Yoon, Flaum Eye Institute, Center for Visual Science, The Institute of Optics, University of Rochester

Understanding the limits of human vision requires fundamental insights into both optical and neural factors. Although the eye’s optics are far from perfect, the contribution of optical factors to neural processing is largely underappreciated. Specifically, how the neural processing of images formed on the retina is altered by long-term visual experience with habitual optical blur has remained unexplored. With technological advances in adaptive optics vision simulators, it is now possible to manipulate the eye’s optics precisely. I will highlight our recent investigations into the mechanisms underlying long-term neural adaptation to the optics of the eye and its impact on spatial vision in the normally developed adult visual system.

Human Melanopic Circuit in Isolation from Photoreceptor Input: Light Sensitivity and Temporal Profile

Speaker: Artur Cideciyan, Scheie Eye Institute, Perelman School of Medicine, University of Pennsylvania

Leber congenital amaurosis refers to a group of severe early-onset inherited retinopathies. There are more than 20 causative genes with varied pathophysiological mechanisms resulting in vision loss at the level of the photoreceptors. Some eyes retain near normal photoreceptor and inner retinal structure despite the severe retina-wide loss of photoreceptor function. High luminance stimuli allow recording of pupillary responses driven directly by melanopsin-expressing intrinsically photosensitive retinal ganglion cells. Analyses of these pupillary responses help clarify the fidelity of transmission of light signals from the retina to the brain for patients with no light perception undergoing early phase clinical treatment trials. In addition, these responses serve to define the sensitivity and temporal profile of the human melanopic circuit in isolation from photoreceptor input.

Vision in the blind

Speaker: Ione Fine, Department of Psychology, University of Washington

Individuals who are blind early in life show cross-modal plasticity – responses to auditory and tactile stimuli within regions of occipital cortex that are purely visual in the normally sighted. If vision is restored later in life, as occurs in a small number of sight recovery individuals, this cross-modal plasticity persists, even while some visual responsiveness is regained. Here I describe the relationship between cross-modal responses and persisting residual vision. Our results suggest the intriguing possibility that the dramatic changes in function that are observed as a result of early blindness are implemented in the absence of major changes in neuroanatomy at either the micro or macro scale: analogous to reformatting a Windows computer to Linux.

Impact of retinal ganglion cell loss on human pattern recognition

Speaker: MiYoung Kwon, Department of Ophthalmology, University of Alabama at Birmingham

Human pattern detection and recognition require integrating visual information across space. In the human visual system, the retinal ganglion cells (RGCs) are the output neurons of the retina, and human pattern recognition is built from the neural representation of the RGCs. Here I will present our recent work demonstrating how a loss of RGCs, due to either normal aging or pathological conditions such as glaucoma, undermines pattern recognition and alters spatial integration properties. I will further highlight the role of the RGCs in determining the spatial extent over which visual inputs are combined. Our findings suggest that understanding the structural and functional integrity of RGCs would help not only to better characterize the visual deficits associated with eye disorders, but also to understand the front-end sensory requirements for human pattern recognition.

< Back to 2018 Symposia

Visual remapping: From behavior to neurons through computation

Time/Room: Friday, May 18, 2018, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT & Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany
Presenters: Julie Golomb, Patrick Cavanagh, James Bisley, James Mazer, Fred Hamker

< Back to 2018 Symposia

Symposium Description

Active vision in both humans and non-human primates depends on saccadic eye movements to accurately direct the foveal portion of the retina towards salient visual scene features. Saccades, in concert with visual attention, can facilitate efficient allocation of limited neural and computational resources in the brain during visually guided behaviors. Saccades, however, are not without consequences: they can dramatically alter the spatial distribution of activity in the retina several times per second. This can lead to large changes in the cortical scene representation even when the scene is static. Behaviors that depend on accurate visuomotor coordination and stable sensory (and attentional) representations in the brain, like reaching and grasping, must somehow compensate for the apparent scene changes caused by eye movements. Recent psychophysical, neurophysiological and modeling results have shed new light on the neural substrates of this compensatory process. Visual “remapping” has been identified as a putative mechanism for stabilizing visual and attentional representations across saccades. At the neuronal level, remapping occurs when neuronal receptive fields shift in anticipation of a saccade, as originally described in the lateral intraparietal area of the monkey (Duhamel et al., 1992). It has been suggested that remapping facilitates perceptual stability by bridging pre- and post-saccadic visual and attentional representations in the brain. In this symposium we will address the functional role of remapping and the specific relationship between neurophysiological remapping (a single-neuron phenomenon) and psychophysically characterized perisaccadic changes in visual perception and attentional facilitation. We propose to consider computational modeling as a potential bridge to connect these complementary lines of research. The goal of this symposium is to clarify our current understanding of physiological remapping as it occurs in different interconnected brain regions in the monkey (V4, LIP and FEF) and to address how remapping at the neuronal level can account for observed perisaccadic changes in visual perception and attentional state. Symposium participants have been drawn from three different, yet complementary, disciplines: psychophysics, neurophysiology and computational modeling. Their approaches have provided novel insights into remapping at phenomenological, functional and mechanistic levels. Remapping is currently a major area of research in all three disciplines and, while several common themes are developing, there remains substantial debate about the degree to which remapping can account for various psychophysical phenomena. We propose that bringing together key researchers using different approaches to discuss the implications of currently available data and models will both advance our understanding of remapping and be of broad interest to VSS members (both students and faculty) across disciplines.

Presentations

Remapping of object features: Implications of the two-stage theory of spatial remapping

Speaker: Julie Golomb, The Ohio State University, Columbus, OH

When we need to maintain spatial information across an eye movement, it is an object’s location in the world, not its location on our retinas, which is generally relevant for behavior. A number of studies have demonstrated that neurons can rapidly remap visual information, sometimes even in anticipation of an eye movement, to preserve spatial stability. However, it has also been demonstrated that for a period of time after each eye movement, a “retinotopic attentional trace” still lingers at the previous retinotopic location, suggesting that remapping actually manifests in two overlapping stages, and may not be as fast or efficient as previously thought. If spatial attention is remapped imperfectly, what does this mean for feature and object perception? We have recently demonstrated that around the time of an eye movement, feature perception is distorted in striking ways, such that features from two different locations may be simultaneously bound to the same object, resulting in feature-mixing errors. We have also revealed that another behavioral signature of object-location binding, the “spatial congruency bias”, is tied to retinotopic coordinates after a saccade. These results suggest that object-location binding may need to be re-established following each eye movement rather than being automatically remapped. Recent efforts from the lab are focused on linking these perceptual signatures of remapping with model-based neuroimaging, using fMRI multivoxel pattern analyses, inverted encoding models, and EEG steady-state visual evoked potentials to dynamically track both spatial and feature remapping across saccades.

Predicting the present: saccade based vs motion-based remapping

Speaker: Patrick Cavanagh, Glendon College, Toronto, ON and Dartmouth College, Hanover, NH

Predictive remapping alerts a neuron when a target will fall into its receptive field after an upcoming saccade. This has consequences for attention, which starts selecting information from the target’s remapped location before the eye movement begins, even though that location is not relevant to pre-saccadic processing. Thresholds are lower, and information from the target’s remapped and current locations may be integrated. These predictive effects for eye movements are mirrored by predictive effects for object motion, in the absence of saccades: motion-based remapping. An object’s motion is used to predict its current location, and as a result we sometimes see a target far from its actual location: we see it where it should be now. However, these predictions operate differently for eye movements and for perception, establishing two distinct representations of spatial coordinates. We have begun identifying the cortical areas that carry these predictive position representations and how they may interface with memory and navigation.

How predictive remapping in LIP (but not FEF) might explain the illusion of perceptual stability

Speaker: James Bisley, Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, California

The neurophysiology of remapping has tended to examine the latency of responses to stimuli presented around a single saccade. Using a visual foraging task, in which animals make multiple eye movements within a trial, we have examined predictive remapping in the lateral intraparietal area (LIP) and the frontal eye field (FEF), with a focus on when activity differentiates between stimuli that are brought onto the response field. We have found that activity in LIP, but not FEF, rapidly shifts from a pre-saccadic representation to a post-saccadic representation during the period of saccadic suppression. We hypothesize that this sudden switch keeps the attentional priority of high-priority locations stable across saccades and thus could create the illusion of perceptual stability.

Predictive attentional remapping in area V4 neurons

Speaker: James Mazer, Cell Biology & Neuroscience, Montana State University, Bozeman, MT

Although saccades change the distribution of neural activity throughout the visual system, visual perception and spatial attention are relatively unaffected by saccades. Studies of human observers have suggested that attentional topography in the brain is stabilized across saccades by an active process that redirects attentional facilitation to the right neurons in retinotopic visual cortex. To characterize the specific neuronal mechanisms underlying this retargeting process, we trained two monkeys to perform a novel behavioral task that required them to sustain attention while making guided saccades. Behavioral performance data indicate that monkeys, like humans, can sustain spatiotopic attention across saccades. Data recorded from neurons in extrastriate area V4 during task performance were used to assess perisaccadic attentional dynamics. Specifically, we asked when attentional facilitation turns on or off relative to saccades and how attentional modulation changes depending on whether a saccade brings a neuron’s receptive field (RF) into or out of the attended region. Our results indicate that, for a substantial fraction of V4 neurons, attentional state changes begin ~100 ms before saccade onset, consistent with the timing of predictive attentional shifts measured psychophysically in human observers. In addition, although we found little evidence of classical, LIP-style spatial remapping in V4, there was a small anticipatory shift or skew of the RF in the 100 ms immediately preceding saccades, detectable at the population level, although it is unclear whether this effect corresponds to a shift towards the saccade endpoint or a shift parallel to the saccade vector.

Neuro-computational models of spatial updating

Speaker: Fred Hamker, Chemnitz University of Technology, Chemnitz, Germany

I review neuro-computational models of peri-saccadic spatial perception that provide insight into the neural mechanisms of spatial updating around eye movements. Most of the experimental observations can be explained by only two different models: one involves spatial attention directed towards the saccade target, and the other relies on predictive remapping and gain fields for coordinate transformation. The latter model uses two eye-related signals: a predictive corollary discharge and an eye-position signal that updates only after the saccade. While spatial attention is mainly responsible for peri-saccadic compression, predictive remapping (in LIP) and gain fields for coordinate transformation can account for the shift of briefly flashed bars in total darkness and for the increased threshold in peri-saccadic displacement detection. With respect to the updating of sustained spatial attention, two different types of updating have recently been reported. One study shows that attention lingers after the saccade at the (now irrelevant) retinotopic position; another shows that, shortly before saccade onset, spatial attention is remapped to a position opposite to the saccade direction. I will show new results demonstrating that these observations are not contradictory but emerge from the model dynamics. The lingering of attention is explained by the (late-updating) eye-position signal, which establishes an attention pointer in an eye-centered reference frame; this reference frame shifts with the saccade and updates attention to the initial position only after the saccade. The remapping of attention opposite to the saccade direction is explained by the corollary discharge signal, which establishes a transient eye-centered reference frame, anticipates the saccade, and thus updates attention prior to saccade onset.
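The logic of the two updating signals can be written as simple coordinate bookkeeping, as in the hypothetical 1-D sketch below: an eye-position term implements the gain-field-style transform from retinal to world coordinates, while the corollary discharge pre-updates the retinotopic attention pointer before the eye moves. This is only an illustration of the accounting, not the neural-field model itself; all values are made up.

```python
def spatiotopic(retinotopic, eye_position):
    # Gain-field-style bookkeeping: world position = retinal position + eye position.
    return retinotopic + eye_position

def predictive_remap(attn_retinotopic, corollary_discharge):
    # The corollary discharge (a copy of the planned saccade vector) shifts the
    # retinotopic attention pointer opposite to the saccade, before the eye moves.
    return attn_retinotopic - corollary_discharge

# Hypothetical 1-D example: attention held at +4 deg in the world, saccade of +6 deg.
eye, saccade, attn_ret = 0.0, 6.0, 4.0

attn_ret_updated = predictive_remap(attn_ret, saccade)        # -2 deg, anticipating the saccade
eye_after = eye + saccade                                      # eye-position signal updates only later
attn_world_after = spatiotopic(attn_ret_updated, eye_after)    # +4 deg: the world position is preserved
```

In this bookkeeping, which of the two signals dominates the pointer at a given moment (the fast corollary discharge or the slow eye-position term) corresponds to the pre-saccadic remapping and the post-saccadic lingering described above.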

< Back to 2018 Symposia
