2022 Symposia

Beyond objects and features: High-level relations in visual perception

Organizers: Chaz Firestone1, Alon Hafri1; 1Johns Hopkins University

The world contains not only objects and features (red apples, glass bowls, large dogs, and small cats), but also relations holding between them (apples contained in bowls, dogs chasing cats). What role does visual processing play in extracting such relations, and how do relational representations structure visual experience? This symposium brings together a variety of approaches to explore new perspectives on the visual processing of relations. A unifying theme is that relations deserve an equal place at the vision scientist's table, and indeed that many traditional areas of vision science (including scene perception, attention, and memory) are fundamentally intertwined with relational representation.

Beyond representation and attention: Cognitive modulations of activity in visual cortex

Organizers: Alex White1, Kendrick Kay2; 1Barnard College, Columbia University, 2University of Minnesota

This symposium addresses modulations of activity in visual cortex that go beyond classical notions of stimulus representation and attentional selection. For instance, activity patterns can reflect the contents of visual imagery, working memory, and expectations. In other cases, unstimulated regions of cortex are affected by the level of arousal or task difficulty. Furthermore, what might appear as general attentional amplifications are sometimes quite specific to stimulus type, brain region, and task. Although these effects are diverse, this symposium will seek unifying principles that are required to build general models of how sensory and cognitive signals are blended in visual cortex.

How we make saccades: selection, control, integration

Organizers: Emma Stewart1, Bianca R. Baltaretu1; 1Justus-Liebig University Giessen, Germany

Making a saccade is a non-trivial process: the saccade target must be selected, the visuomotor system must execute a motor command, and the visual system must integrate pre- and postsaccadic information. Recent research has uncovered tantalizing new roles for established neural regions, offering an increasingly sophisticated perspective on the processes underlying saccadic selection and control. Additionally, computational models have advanced our understanding of how saccades shape perception. This symposium will unify established knowledge about the disparate phases of saccade production, giving insight into the full life cycle of a saccade, from selection, to control, to the ensuing transsaccadic perception.

Perceptual Organization - Lessons from Neurophysiology, Human Behavior, and Computational Modeling

Organizers: Dirk B. Walther1, James Elder2; 1University of Toronto, 2York University

A principal challenge for both biological and machine vision systems is to integrate and organize the diversity of cues received from the environment into the coherent global representations we experience and require to make good decisions and take effective actions. Early psychological investigations date back more than 100 years to the seminal work of the Gestalt school. But in the last 50 years, neuroscientific and computational approaches to understanding perceptual organization have become equally important, and a full understanding requires integration of all three approaches. This symposium will highlight the latest results and identify promising directions in perceptual organization research.

The probabilistic nature of vision: How should we evaluate the empirical evidence?

Organizers: Ömer Dağlar Tanrıkulu1, Arni Kristjansson2; 1Williams College, 2University of Iceland

The view that our visual system represents sensory information probabilistically is prevalent in contemporary vision science. However, providing empirical evidence for such a claim has proved to be difficult since both probabilistic and non-probabilistic perceptual representations can, in principle, account for the experimental results in the literature. In this symposium, we discuss how vision research can provide empirical evidence relevant to the question of probabilistic perception. How can we operationalize probabilistic visual representations, and, if possible, how can we provide empirical evidence that settles the issue? Our goal is to encourage researchers to make their assumptions about probabilistic perception explicit.

What does the world look like? How do we know?

Organizers: Mark Lescroart1, Benjamin Balas2, Kamran Binaee1; 1University of Nevada, Reno, 2North Dakota State University

Statistical regularities in visual experience have been broadly shown to shape neural and perceptual visual processing. However, our ability to make inferences about visual processing based on natural image statistics is limited by the representativeness of natural image datasets. Here, we consider the consequences of using non-representative datasets, and we explore challenges in assembling datasets that are more representative in terms of the sampled environments, activities, and individuals. We explicitly address the following questions: what are we not sampling, why are we not sampling it, and how does this limit the inferences we can draw about visual processing?

Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University
Presenters: Susana Marcos, Brian Vohnsen, Ann Elsner, Juliette E. McGregor

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions.

Presentations

Foveal aberrations and the impact on vision

Susana Marcos1; 1Institute of Optics, CSIC

Optical aberrations degrade the quality of images projected on the retina. The magnitude and orientation of the optical aberrations vary dramatically across individuals. Changes also occur with processes such as accommodation and aging, as well as with corneal and lens disease and surgery. Certain corrections, such as multifocal lenses for presbyopia, modify the aberration pattern to create simultaneous vision or extended depth-of-focus. Ocular aberrometers have made their way into clinical practice. In addition, quantitative 3-D anterior-segment imaging has made it possible to quantify the morphology and alignment of the cornea and lens, to link ocular geometry and aberrations through custom eye models, and to shed light on the factors contributing to optical degradation. However, perceived vision is affected by the eye's aberrations in more ways than pure optics would predict, as the eye appears to be adapted to the magnitude and orientation of its own optical blur. Studies using adaptive optics not only reveal the impact of manipulating the optical aberrations on vision, but also show that the neural code for blur is driven by the subject's own aberrations.
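
As background for readers outside optics, aberrometers conventionally summarize the measured wavefront error as a Zernike-polynomial expansion; the formulation below is the standard convention, not notation specific to this talk:

W(\rho, \theta) = \sum_{n,m} c_n^m \, Z_n^m(\rho, \theta), \qquad \mathrm{RMS\ error} = \sqrt{\textstyle\sum_{n,m} (c_n^m)^2}

Here the Z_n^m are orthonormal Zernike modes over the pupil (defocus, astigmatism, coma, spherical aberration, ...) and the coefficients c_n^m capture the magnitude and orientation of each aberration, which is what aberrometers report.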

The integrated Stiles-Crawford effect: understanding the role of pupil size and outer-segment length in foveal vision

Brian Vohnsen1; 1Advanced Optical Imaging Group, School of Physics, University College Dublin, Ireland

The Stiles-Crawford effect of the first kind (SCE-I) describes a psychophysical change in perceived brightness related to the angle of incidence of a ray of light onto the retina. The effect is commonly explained as angular-dependent waveguiding by foveal cones, yet the SCE-I is largely absent from similarly shaped rods, suggesting that a mechanism other than waveguiding is at play. To examine this, we have devised a flickering-pupil method that directly measures the integrated SCE-I for normal pupil sizes in normal vision, rather than relying on mathematical integration of the standard SCE-I function as determined with Maxwellian light. Our results show that the measured effective visibility for normal foveal vision is related to visual pigment density in the three-dimensional retina rather than to waveguiding. We confirm the experimental findings with a numerical absorption model based on the Beer-Lambert law for the visual pigments.
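
For reference, a textbook formulation rather than the authors' exact model: the standard SCE-I visibility function, its integral over a pupil of radius R (the quantity a flickering-pupil measurement targets directly), and the Beer-Lambert absorbed fraction are

\eta(r) = 10^{-\rho r^2} \quad (\rho \approx 0.05\ \mathrm{mm}^{-2}\ \text{at the fovea}), \qquad \eta_{\mathrm{int}}(R) = \frac{1}{\pi R^2} \int_0^R 10^{-\rho r^2}\, 2\pi r\, dr, \qquad A = 1 - 10^{-D},

where r is the pupil entry position relative to the SCE peak and the pigment optical density D grows with outer-segment length and pigment concentration, which is how absorption, rather than waveguiding, can account for the integrated effect.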

Structure of cones and microvasculature in healthy and diseased eyes

Ann Elsner1; 1Indiana University School of Optometry

There are large differences in the distribution of cones in the living human retina, with density at the fovea varying more across individuals than density at greater eccentricities. The size and shape of the foveal avascular zone also vary across individuals, and distances between capillaries can be greatly enlarged in disease. While diseases such as age-related macular degeneration and diabetes strongly affect both cones and retinal vessels, some cones can survive for decades, although their distributions become more irregular. Surprisingly, in some diseased eyes, cone density at retinal locations outside those most compromised can exceed cone density in control subjects.

Imaging of calcium indicators in retinal ganglion cells for understanding foveal function

Juliette E. McGregor1; 1Center for Visual Science, University of Rochester

The fovea mediates much of our conscious visual perception but is a delicate retinal structure that is difficult to investigate physiologically using traditional approaches. By expressing the calcium indicator protein GCaMP6s in retinal ganglion cells (RGCs) of the living primate, we can optically read out foveal RGC activity in response to visual stimuli presented to the intact eye. Paired with adaptive optics ophthalmoscopy, this makes it possible both to present highly stabilized visual stimuli to the fovea and to read out retinal activity on a cellular scale in the living animal. This approach has allowed us to map the functional architecture of the fovea at the retinal level and to classify RGCs in vivo based on their responses to chromatic stimuli. Recently we have used this platform as a pre-clinical testbed to demonstrate successful restoration of foveal RGC responses following optogenetic therapy.
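
A note on the readout, stated in the standard convention for calcium imaging rather than anything specific to this study: GCaMP responses are typically quantified as the fractional fluorescence change

\Delta F / F_0 = (F - F_0) / F_0,

where F is the fluorescence during stimulation and F_0 the pre-stimulus baseline; stimulus-locked \Delta F / F_0 traces are what allow RGC responses to be mapped and classified cell by cell.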

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem
Presenters: Jeremy M Wolfe, Shaul Hochstein, Catherine Tallon-Baudry, James DiCarlo, Merav Ahissar

Forty years ago, Anne Treisman presented Feature Integration Theory (FIT; Treisman & Gelade, 1980). FIT proposed a parallel, preattentive first stage and a serial second stage controlled by visual selective attention, so that search tasks could be divided into those performed by the first stage, in parallel, and those requiring serial processing and further “binding” in an object file (Kahneman, Treisman, & Gibbs, 1992). Ten years later, Jeremy Wolfe expanded FIT with Guided Search Theory (GST), suggesting that information from the first stage could guide selective attention in the second (Wolfe, Cave & Franzel, 1989; Wolfe, 1994). His lab’s recent visual search studies enhanced this theory (Wolfe, 2007), including studies of factors governing search (Wolfe & Horowitz, 2017), hybrid search (Wolfe, 2012; Nordfang, Wolfe, 2018), and scene comprehension capacity (Wick … Wolfe, 2019).

Another ten years later, Shaul Hochstein and Merav Ahissar proposed Reverse Hierarchy Theory (RHT; Hochstein, Ahissar, 2002), turning FIT on its head by suggesting that early conscious gist perception, like early generalized perceptual learning (Ahissar, Hochstein, 1997, 2004), reflects high cortical level representations. Later feedback, returning to lower levels, allows for conscious perception of scene details, already represented in earlier areas. Feedback also enables detail-specific learning. Follow-up work found that the primacy of top-level gist perception leads to the counter-intuitive results that faces pop out of heterogeneous object displays (Hershler, Hochstein, 2005), that individuals with neglect syndrome are better at global tasks (Pavlovskaya … Hochstein, 2015), and that gist perception includes ensemble statistics (Khayat, Hochstein, 2018, 2019; Hochstein et al., 2018). Ahissar’s lab mapped RHT dynamics to auditory systems (Ahissar, 2007; Ahissar et al., 2008), in both perception and successful or failed (in developmental disabilities) skill acquisition (Lieder … Ahissar, 2019).

James DiCarlo has been pivotal in confronting feedforward-only versus recurrency-integrating network models of extra-striate cortex, considering animal and human behavior (DiCarlo, Zoccolan, Rust, 2012; Yamins … DiCarlo, 2014; Yamins, DiCarlo, 2016). His large-scale electrophysiology recordings from the behaving primate ventral stream, presented with challenging object-recognition tasks, relate directly to whether recurrent connections are critical or superfluous (Kar … DiCarlo, 2019). He recently developed combined deep artificial neural network modeling, synthesized image presentation, and electrophysiological recording to control the neural activity of specific neurons and circuits (Bashivan, Kar, DiCarlo, 2019).

Catherine Tallon-Baudry uses MEG/EEG recordings to study neural correlates of conscious perception (Tallon-Baudry, 2012). She has studied the roles of human brain oscillatory activity in object representation and visual search tasks (Tallon-Baudry, 2009), analyzing effects of attention and awareness (Wyart, Tallon-Baudry, 2009). She has directly tested, with behavior and MEG recording, implications of hierarchy and reverse hierarchy theories, including the claim that global information processing is first and mandatory in conscious perception (Campana, Tallon-Baudry, 2013; Campana … Tallon-Baudry, 2016).

In summary, bottom-up versus top-down processing theories bear on the essence of perception: the dichotomy of rapid vision-at-a-glance versus slower vision-with-scrutiny, the roles of attention, the hierarchy of visual representation levels, the roles of feedback connections, the sites and mechanisms of various visual phenomena, and the sources of perceptual/cognitive deficits (neglect, dyslexia, ASD). Speakers at this symposium will address these issues with both a historical and a forward-looking perspective.

Presentations

Is Guided Search 6.0 compatible with Reverse Hierarchy Theory?

Jeremy M Wolfe1; 1Harvard Medical School and Visual Attention Lab, Brigham & Women’s Hospital

It has been 30 years since the first version of the Guided Search (GS) model of visual search was published. As new data about search accumulated, GS needed modification. The latest version is GS6. GS argues that visual processing is capacity-limited and that attention is needed to “bind” features together into recognizable objects. The core idea of GS is that the deployment of attention is not random but is “guided” from object to object. For example, in a search for your black shoe, search would be guided toward black items. Earlier versions of GS focused on top-down (user-driven) and bottom-up (salience) guidance by basic features like color. Subsequent research adds guidance by history of search (e.g. priming), value of the target, and, most importantly, scene structure and meaning. Your search for the shoe will be guided by your understanding of the scene, including some sophisticated information about scene structure and meaning that is available “preattentively”. In acknowledging the initial, preattentive availability of something more than simple features, GS6 moves closer to ideas that are central to the Reverse Hierarchy Theory of Hochstein and Ahissar. As is so often true in our field, this is another instance where the answer is not Theory A or Theory B, even when they seem diametrically opposed. The next theory tends to borrow and synthesize good ideas from both predecessors.

Gist perception precedes awareness of details in various tasks and populations

Shaul Hochstein1; 1Life Sciences, Hebrew University, Jerusalem

Reverse Hierarchy Theory makes several dramatic propositions regarding conscious visual perception. These include the suggestion that, while the visual system receives scene details and builds from them representations of the objects, layout, and structure of the scene, the first conscious percept is nevertheless that of the gist of the scene, the result of implicit bottom-up processing. Only later does conscious perception attain scene details, by return to lower cortical area representations. Recent studies at our lab analyzed phenomena whereby participants receive and perceive the gist of a scene before, and without need for, consciously knowing the details from which the gist is constructed. One striking conclusion is that “pop-out” is an early high-level effect and is therefore not restricted to basic element features. Thus, faces pop out of heterogeneous object displays, and participants are unaware of the rejected objects. Our recent studies of ensemble statistics perception find that computing a set mean does not require knowledge of the individual items. This mathematically improbable computation is both useful and natural for neural networks (see the illustrative sketch below). I shall discuss just how and why set means are computed without need for explicit representation of individuals. Interestingly, our studies of neglect patients find that their deficit concerns tasks requiring focused attention to local details, not those requiring only global perception. Neglect patients are quite good at pop-out detection and include left-side elements in ensemble perception.
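
A toy numerical illustration (my sketch, not the authors' model) of why a mean can be computed without reliable access to the individuals: a single pooling unit that merely averages noisy item signals estimates the set mean far more precisely than any individual item can be reported, because independent encoding noise cancels in the average (error shrinks as 1/sqrt(n)).

import numpy as np

rng = np.random.default_rng(0)
sizes = rng.uniform(1.0, 5.0, size=8)             # true sizes of the 8 items in the set
responses = sizes + rng.normal(0.0, 1.0, size=8)  # noisy single-item encodings
pooled_mean = responses.mean()                    # one pooling unit; no item read-out needed

print(np.abs(responses - sizes).mean())           # single-item error: ~0.8 on average
print(abs(pooled_mean - sizes.mean()))            # pooled-mean error: ~0.8 / sqrt(8), i.e. ~0.3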

From global to local in conscious vision: behavior & MEG

Catherine Tallon-Baudry1; 1CNRS Cognitive Neuroscience, Ecole Normale Supérieure, Paris

The reverse hierarchy theory makes strong predictions about conscious vision. Local details would be processed in early visual areas before being rapidly and automatically combined into global information in higher-order areas, where conscious percepts would initially emerge. The theory thus predicts that consciousness arises initially in higher-order visual areas, independently of attention and task, and that additional, optional attentional processes operating from top to bottom are needed to retrieve local details. We designed novel textured stimuli that, as opposed to Navon’s letters, are truly hierarchical. Taking advantage of both behavioral measures and the decoding of MEG data, we show that global information is consciously perceived faster than local details, and that global information is computed regardless of task demands during early visual processing. These results support the idea that global dominance in conscious percepts originates in the hierarchical organization of the visual system. Implications for the nature of conscious visual experience and its underlying neural mechanisms will be discussed.

Next-generation models of recurrent computations in the ventral visual stream

James DiCarlo1; 1Neuroscience, McGovern Inst. & Brain & Cognitive Sci., MIT

Understanding the mechanisms underlying visual intelligence requires the combined efforts of brain and cognitive scientists and of forward engineering that emulates intelligent behavior (“AI engineering”). This “reverse-engineering” approach has produced increasingly accurate models of vision. Specifically, a family of deep artificial neural-network (ANN) architectures arose from biology’s neural network for object vision: the ventral visual stream. Engineering advances applied to this ANN family produced specific ANNs whose internal in silico “neurons” are surprisingly accurate models of individual ventral stream neurons, and these networks now underlie artificial vision technologies. We and others have recently demonstrated a new use for these models in brain science: their ability to design patterns of light energy on the retina that control neuronal activity deep in the brain. The reverse-engineering iteration loop, from respectable ANN models, to new ventral stream data, to even better ANN models, is accelerating. My talk will discuss this loop: experimental benchmarks for in silico ventral streams, key deviations from the biological ventral stream revealed by those benchmarks, and newer in silico ventral streams that partly close those differences. Recent experimental benchmarks argue that automatically evoked recurrent processing is critically important even within the first 300 ms of visual processing, implying that conceptually simpler, feedforward-only ANN models are no longer tenable as accurate in silico ventral streams. Our broader aim is to nurture and incentivize next-generation models of the ventral stream via a community software platform termed “Brain-Score”, with the goal of producing progress that individual research groups may be unable to achieve.

Visual and non-visual skill acquisition – success and failure

Merav Ahissar1; 1Psychology Department, Social Sciences & ELSC, Hebrew University, Israel

Acquiring expert skills requires years of experience, whether these skills are visual (e.g., face identification), motor (playing tennis), or cognitive (mastering chess). In 1977, Shiffrin & Schneider proposed an influential stimulus-driven, bottom-up theory of expertise automaticity, involving the mapping of stimuli to their consistent responses. Integrating the many studies since, I propose a general, top-down theory of skill acquisition. Novice performance is based on the high-level, multiple-demand (Duncan, 2010) fronto-parietal system; with practice, specific experiences are gradually represented in lower-level, domain-specific temporal regions. This gradual process of learning-induced reverse hierarchies is enabled by the detection and integration of task-relevant regularities. Top-down driven learning allows the formation of task-relevant mappings and representations. These in turn form a space that affords task-consistent interpolations (e.g., representing letters in a manner suited to letter identification rather than to raw visual similarity). These dynamics characterize successful skill acquisition. Some populations, however, have reduced sensitivity to task-related regularities, hindering related skill acquisition and preventing expertise even after massive training. I propose that skill-acquisition failure, perceptual as well as cognitive, reflects specific difficulties in detecting and integrating task-relevant regularities, impeding the formation of temporal-area expertise. Such is the case for individuals with dyslexia (reduced retention of temporal regularities; Jaffe-Dax et al., 2017), who fail to form an expert visual word-form area, and for individuals with autism (who integrate regularities too slowly for online updating; Lieder et al., 2019). Based on this general conceptualization, I further propose that this systematic impediment offers a unified account of failed skill acquisition across perceptual and cognitive domains.

2021 Symposia

Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions.

Wait for it: 20 years of temporal orienting

Organizers: Nir Shalev1,2,3, Anna Christina (Kia) Nobre1,2,3; 1Department of Experimental Psychology, University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford, 3Oxford Centre for Human Brain Activity, University of Oxford

Time is an essential dimension framing our behaviour. To understand adaptive behaviour in dynamic environments, we must consider how our psychological and neural systems pick up on temporal regularities to prepare for events unfolding over time. The last two decades have witnessed a renaissance of interest in understanding how we orient attention in time to anticipate relevant moments. New experimental approaches have proliferated and demonstrated how we derive and utilise recurring temporal rhythms, associations, probabilities, and sequences to enhance perception. We bring together researchers from across the globe exploring the fourth dimension of selective attention with complementary approaches.

What we learn about the visual system by studying non-human primates: Past, present and future

Organizers: Rich Krauzlis1, Michele Basso2; 1National Eye Institute, 2Brain Research Institute, UCLA

Non-human primates (NHPs) are the premier animal model for understanding the brain circuits and neuronal properties that accomplish vision. This symposium will take a “look back” at what we have learned about vision over the past 20 years by studying NHPs, and also “look forward” to the emerging opportunities provided by new techniques and approaches. The 20th anniversary of VSS is the ideal occasion to present this overview of NHP research to the general VSS membership, with the broader goal of promoting increased dialogue and collaboration between NHP and non-NHP vision researchers.

What has the past 20 years of neuroimaging taught us about human vision and where do we go from here?

Organizers: Susan Wardle1, Chris Baker1; 1National Institutes of Health

Over the past 20 years, neuroimaging methods have become increasingly popular for studying the neural mechanisms of vision in the human brain. To celebrate 20 years of VSS this symposium will focus on the contribution that brain imaging techniques have made to our field of vision science. The aim is to provide both a historical context and an overview of current trends for the role of neuroimaging in vision science. This will lead to informed discussion about what future directions will prove most fruitful for answering fundamental questions in vision science.

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem

Interactions of bottom-up and top-down mechanisms in visual perception are heatedly debated to this day. The aim of the proposed symposium is to review the history, progress, and prospects of our understanding of the roles of feedforward and recurrent processing streams. Where and how does top-down influence kick in? Is it off-line, as suggested by some deep-learning networks? Is it an essential aspect governing bottom-up flow at every stage, as in predictive processing? We shall critically consider the continued endurance of these models, their meshing with current state-of-the-art theories and accumulating evidence, and, most importantly, the outlook for future understanding.

What’s new in visual development?

Organizers: Oliver Braddick1, Janette Atkinson2; 1University of Oxford, 2University College London

Since 2000, visual developmental science has advanced beyond defining how and when basic visual functions emerge during childhood. Advances in structural MRI, fMRI and near-infrared spectroscopy have identified localised visual brain networks even in early months of life, including networks identifying objects and faces. Newly refined eye tracking has examined how oculomotor function relates to the effects of visual experience underlying strabismus and amblyopia. New evidence has allowed us to model developing visuocognitive processes such as decision-making and attention. This symposium illustrates how such advances, ideas and challenges enhance understanding of visual development, including infants and children with developmental disorders.

2020 Symposia

No Symposia were presented at the V-VSS 2020 meeting.

2021 Open Science Symposium

Friday, May 21, 5:00 – 7:00 pm EDT

Organizer: VSS Student-Postdoc Advisory Committee
Moderator: Björn Jörges, York University
Speakers: Geoffrey Aguirre, University of Pennsylvania; Janine Bijsterbosch, Washington University School of Medicine; Christopher Donkin, UNSW Sydney; Alex Holcombe, University of Sydney; and Russell A. Poldrack, Stanford University

Open Science has become an important part of the scientific landscape. Researchers are adopting open practices such as preregistration and registered reports, open access, and the use of open-source software; journals increasingly treat data and code sharing as a desired or even required feature of research publications; and funders are increasingly evaluating applicants’ open science track records along with their scientific proposals. It is therefore more important than ever for all scientists, and particularly for Early Career Researchers, to be able to navigate the Open Science space. For this reason, the Student-Postdoc Advisory Committee organizes the Open Science Symposium as a means to introduce the VSS community to the basics of Open Science and some current debates.
The Open Science Symposium will start out with a short overview of the most important open practices. The speakers will then delve deeper into two topics: preregistration, and code and data sharing. We have invited two speakers for each topic: one argues in favor, while the other argues against, provides nuance, or points out limitations. Both parties will first explain their respective perspectives, followed by a joint presentation in which some synthesis or common ground will be reached.

Geoffrey Aguirre

University of Pennsylvania


Janine Bijsterbosch

Washington University School of Medicine

Janine Bijsterbosch has worked in brain imaging since 2007. She is currently Assistant Professor in the Computational Imaging section of the Department of Radiology at Washington University in St Louis. The Personomics Lab headed by Dr. Bijsterbosch aims to understand how brain connectivity patterns differ from one person to the next, by studying the “personalized connectome”. Using big data resources such as the Human Connectome Project and UK Biobank, the Personomics Lab adopts cutting-edge analysis techniques to study functional connectivity networks and their role in behavior, performance, mental health, disease risk, treatment response, and physiology. Dr. Bijsterbosch is Chair-Elect of the Open Science special interest group of the Organization for Human Brain Mapping. In addition, Dr. Bijsterbosch wrote a textbook on functional connectivity analyses, which was published by Oxford University Press in 2017.

Christopher Donkin

UNSW Sydney

Christopher Donkin is a cognitive psychologist at UNSW Sydney. His work tends to rely on a mix of computational modelling and experiments. He is interested in decision-making, memory, models, and metascience. While he agrees that open science is of utmost importance, a long series of conversations with Aba Szollosi about how knowledge is created has led him to disagree with some of the purported benefits of preregistration. Though the content of the talk will be specific to preregistration, the background knowledge underlying these arguments is laid out more carefully elsewhere.

Alex Holcombe

University of Sydney


Russell A. Poldrack

Stanford University

Russell A. Poldrack is the Albert Ray Lang Professor in the Department of Psychology and Professor (by courtesy) of Computer Science at Stanford University, and Director of the Stanford Center for Reproducible Neuroscience. His research uses neuroimaging to understand the brain systems underlying decision making and executive function. His lab is also engaged in the development of neuroinformatics tools to help improve the reproducibility and transparency of neuroscience, including the Openneuro.org and Neurovault.org data sharing projects and the Cognitive Atlas ontology.

Björn Jörges

York University

Björn Jörges studies the role of prediction for visual perception, as well as visuo-vestibular integration for the perception of object motion and self-motion. Beyond these topics, he also aspires to make science better, i.e., more diverse, more transparent and more robust. After finishing his PhD in Barcelona on the role of a strong earth gravity prior for perception and action, he started a Postdoc in the Multisensory Integration Lab at York University, where he currently investigates how the perception of self-motion changes in response to microgravity.

Rhythms of the brain, rhythms of perception

Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 2
Organizer(s): Laura Dugué, Paris Descartes University & Suliann Ben Hamed, Université Claude Bernard Lyon I
Presenters: Suliann Ben Hamed, Niko Busch, Laura Dugué, Ian Fiebelkorn

Symposium Description

The phenomenological, continuous, unitary stream of our perceptual experience appears to be an illusion. Accumulating evidence suggests that what we perceive of the world and how we perceive it rises and falls rhythmically at precise temporal frequencies. Brain oscillations (rhythmic neural signals) naturally appear as key neural substrates for these perceptual rhythms. How these brain oscillations condition local neuronal processes, long-range network interactions, and perceptual performance is a central question for visual neuroscience. In this symposium, we will present an overarching review of this question, combining evidence from monkey neural and human EEG recordings, TMS interference studies, and behavioral analyses. Suliann Ben Hamed will first present monkey electrophysiology evidence for a rhythmic exploration of space by the prefrontal attentional spotlight in the alpha (8–12 Hz) frequency range and will discuss the functional coupling between this rhythmic exploration and long-range theta-frequency modulations. Niko Busch will then present electroencephalography (EEG) and psychophysics studies in humans, and argue that alpha oscillations reflect fluctuations of neuronal excitability that periodically modulate subjective perceptual experience. Laura Dugué will present a series of EEG, transcranial magnetic stimulation (TMS), and psychophysics findings in humans in favor of a functional dissociation between the alpha and theta (3–8 Hz) rhythms, underlying periodic fluctuations in perceptual and attentional performance respectively. Finally, Ian Fiebelkorn will present psychophysics studies in humans and electrophysiology evidence in macaque monkeys, and argue that the fronto-parietal theta rhythm allows for functional flexibility in large-scale networks. The multimodal approach, including human and monkey models and a large range of behavioral and neuroimaging techniques, as well as the timeliness of the question of the temporal dynamics of perceptual experience, should be of interest to cognitive neuroscientists, neurophysiologists and psychologists interested in visual perception and cognition, as well as to the broad audience of VSS.

Presentations

The prefrontal attentional spotlight in time and space

Speaker: Suliann Ben Hamed, Université Claude Bernard Lyon I

Recent accumulating evidence challenges the traditional view of attention as a continuously active spotlight over which we have direct voluntary control, suggesting instead a rhythmic operation. I will present monkey electrophysiological data reconciling these two views. I will apply machine learning methods to reconstruct, at high spatial and temporal resolution, the spatial attentional spotlight from monkey prefrontal neuronal activity. I will first describe behavioral and neuronal evidence for distinct spatial filtering mechanisms, the attentional spotlight serving to filter in task-relevant information while at the same time filtering out task-irrelevant information. I will then provide evidence for rhythmic exploration of space by this prefrontal attentional spotlight in the alpha (7–12 Hz) frequency range. I will discuss this rhythmic exploration of space both from the perspective of sensory encoding and from that of behavioral trial outcome, when processing either task-relevant or task-irrelevant information. While these oscillations are task-independent, I will describe how their spatial unfolding flexibly adjusts to the ongoing behavioral demands. I will conclude by bridging the gap between this alpha-rhythmic exploration by the attentional spotlight and previous reports of a contribution of long-range theta oscillations to attentional exploration, and I will propose a novel integrated account of a dynamic attentional spotlight.

Neural oscillations, excitability and perceptual decisions

Speaker: Niko Busch, WWU Münster

Numerous studies have demonstrated that the power of ongoing alpha oscillations in the EEG is inversely related to neural excitability, as reflected in spike-firing rate, multi-unit activity, or the hemodynamic fMRI signal. Furthermore, alpha oscillations also affect behavioral performance in perceptual tasks. However, it is surprisingly unclear which latent perceptual or cognitive mechanisms mediate this effect. For example, an open question is whether neuronal excitability fluctuations induced by alpha oscillations affect an observer’s acuity or perceptual bias. I will present a series of experiments that aim to clarify the link between oscillatory power and perceptual performance. In short, these experiments indicate that performance during moments of weak pre-stimulus power, indicating greater excitability, is best described by a more liberal detection criterion rather than a change in detection sensitivity or discrimination accuracy. I will argue that this effect is due to an amplification of both signal and noise, and that this amplification occurs already during the first stages of visual processing.
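
In standard signal-detection notation (a textbook formulation, not the speaker's own), sensitivity and criterion are estimated from the hit rate H and false-alarm rate F as

d' = \Phi^{-1}(H) - \Phi^{-1}(F), \qquad c = -\tfrac{1}{2}\,[\Phi^{-1}(H) + \Phi^{-1}(F)],

where \Phi^{-1} is the inverse standard-normal CDF. The pattern described above corresponds to c shifting toward more negative (liberal) values when pre-stimulus alpha power is weak, with d' essentially unchanged.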

The rhythms of visual attention

Speaker: Laura Dugué, Paris Descartes University

Despite the impression that our visual perception is seamless and continuous across time, evidence suggests that our visual experience relies on a series of discrete moments, similar to the snapshots of a video clip. My research focuses on these perceptual and attentional rhythms. Information would be processed in discrete samples, with our ability to discriminate and attend to visual stimuli fluctuating between favorable and less favorable moments. I will present a series of experiments using multimodal functional neuroimaging combined with psychophysical measurements in healthy humans, assessing the mechanisms that underlie psychophysical performance during and between two perceptual samples, and how these rhythmic mental representations are implemented at the neural level. I will argue that two sampling rhythms coexist: the alpha rhythm (8–12 Hz), allowing for sensory, perceptual sampling, and the theta rhythm (3–8 Hz), supporting rhythmic attentional exploration of the visual environment.
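
A minimal sketch of the dense-sampling logic behind such findings (illustrative, not this lab's actual pipeline): performance is measured at many finely spaced cue-target delays, and the spectrum of the resulting accuracy time course is inspected for theta- or alpha-band peaks. In real data, any peak would be tested against a permutation-based null distribution.

import numpy as np

soas = 0.05 + 0.02 * np.arange(50)                   # 50 cue-target delays, 20 ms steps
rng = np.random.default_rng(1)
acc = 0.75 + 0.05 * np.sin(2 * np.pi * 6 * soas)     # toy accuracy with a 6 Hz modulation
acc += rng.normal(0.0, 0.01, soas.size)              # measurement noise

detrended = acc - np.polyval(np.polyfit(soas, acc, 1), soas)   # remove slow drift
spectrum = np.abs(np.fft.rfft(detrended * np.hanning(soas.size)))
freqs = np.fft.rfftfreq(soas.size, d=0.02)
print(freqs[1:][np.argmax(spectrum[1:])])            # peak near 6 Hz (theta band)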

Rhythmic sampling of the visual environment provides critical flexibility

Speaker: Ian Fiebelkorn, Princeton University

Environmental sampling of spatial locations is a fundamentally rhythmic process. That is, both attention-related boosts in sensory processing and the likelihood of exploratory movements (e.g., saccades in primates and whisking in rodents) are linked to theta rhythms (3–8 Hz). I will present electrophysiological data, from humans and monkeys, demonstrating that intrinsic theta rhythms in the fronto-parietal network organize neural activity into two alternating attentional states. The first state is associated with both (i) the suppression of covert and overt attentional shifts and (ii) enhanced visual processing at a behaviorally relevant location. The second state is associated with attenuated visual processing at the same location (i.e., the location that received a boost in sensory processing during the first attentional state). In this way, theta-rhythmic sampling provides critical flexibility, preventing us from becoming overly focused on any single location. Approximately every 250 ms, there is a window of opportunity when it is easier to disengage from the presently attended location and shift to another location. Based on these recent findings, we propose a rhythmic theory of environmental sampling. The fronto-parietal network is positioned at the nexus of sensory and motor functions, directing both attentional and motor aspects of environmental sampling. Theta rhythms might help to resolve potential functional conflicts in this network by temporally isolating sensory (i.e., sampling) and motor (i.e., shifting) functions. This proposed role for theta rhythms in the fronto-parietal network could be a more general mechanism for providing functional flexibility in large-scale networks.

Reading as a visual act: Recognition of visual letter symbols in the mind and brain

Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 1
Organizer(s): Teresa Schubert, Harvard University
Presenters: Teresa Schubert, Alex Holcombe, Kalanit Grill-Spector, Karin James

Symposium Description

A large proportion of our time as literate adults is spent reading: Deriving meaning from visual symbols. Letter symbols have only been in use for a few millennia; our visual system, which may have evolved to recognize lions and the faces of our kin, is now required to recognize the written word “LION” and the handwriting of your nephew. How does the visual system accomplish this unique feat of recognition? A wealth of studies consider early visual abilities that are involved in letter recognition but the study of these symbols as visual objects is relatively rare. In this symposium, we will highlight work by a growing number of researchers attempting to bridge the gap in research between vision and language by investigating letter and word recognition processes. In addition to interest in reading on its own merits, we propose that a minimal understanding of letter recognition is relevant to vision scientists in related domains. Many popular paradigms, from visual search to the attentional blink, use letters as stimuli. Letters are also a unique class within visual objects, and an understanding of these stimuli can constrain broader theories. Furthermore, letters can be used as a comparison class to other stimuli with which humans have high levels of expertise, such as faces and tools. In this symposium, we will discuss the state of the science of letter recognition from both a cognitive and neural perspective. We will provide attendees with information specific to letter/word recognition and situate these findings relative to broader visual cognition. Our speakers span the range from junior to established scientists and use both behavioral and neural approaches. In the first talk, Schubert will present an overview of letter recognition, describing the hierarchical stages of abstraction and relating them to similar stages proposed in object recognition. In the second talk, Holcombe will address the relationship between domain-general abilities and letter recognition, by manipulating orthographic properties such as reading direction to interrogate capacity limits and laterality effects in visual working memory. In the third talk, Grill-Spector will discuss how foveal visual experience with words contributes to the organization of ventral temporal cortex over development. In the fourth talk, James will discuss the relationship between letter recognition and letter production. In addition to their visual properties letters have associated motor plans for production, and she will present evidence suggesting this production information may be strongly linked to letter recognition. Finally, we will integrate these levels into a discussion of broad open questions in letter recognition that have relevance across visual perception, such as: What are the limits of the flexibility of visual recognition systems? At what level do capacity limits in memory encoding operate? What pressures give rise to the functional organization of ventral temporal cortex? What is the extent of interactions between systems for visual perception and for motor action? On the whole, we anticipate that this symposium will provide a new perspective on the study of letter recognition and its relevance to work across the range of visual cognition.

Presentations

How do we recognize letters as visual objects?

Speaker: Teresa Schubert, Harvard University
Additional Authors: David Rothlein, VA Boston Healthcare System; Brenda Rapp, Johns Hopkins University

How do we recognize b and B as instances of the same letter? The cognitive mechanisms of letter recognition permit abstraction across highly different visual exemplars of the same letter (b and B), while also differentiating between highly similar exemplars of different letters (c and e). In this talk, I will present a hierarchical framework for letter recognition which involves progressively smaller reliance on sensory stimulus details to achieve abstract letter representation. In addition to abstraction across visual features, letter recognition in this framework also involves different levels of abstraction in spatial reference frames. This theory was developed based on data from individuals with acquired letter identification deficits (subsequent to brain lesion) and further supported by behavioral and neural research with unimpaired adult readers. I will relate this letter recognition theory to the seminal Marr & Nishihara (1978) framework for object recognition, arguing that letter recognition and visual object recognition require a number of comparable computations, leading to broadly similar recognition systems. Finally, I will compare and contrast neural evidence of cross-modal (visual and auditory letter name) representations for letters and objects. Overall, this talk will provide a theoretical and empirical framework within which to consider letter recognition as a form of object recognition.

Implicit reading direction and limited-capacity letter identification

Speaker: Alex Holcombe, University of Sydney
Additional Authors: Kim Ransley, University of Sydney

Reading this sentence was quite an accomplishment. You overcame a poor ability, possibly even a complete inability, to simultaneously identify multiple objects: according to the influential E-Z Reader model of reading, humans can identify only one word at a time. In the field of visual attention, it is known that if one must identify multiple simultaneously presented stimuli, spatial biases may be present but are often small. Reading a sentence, by contrast, involves a highly stereotyped attentional routine with rapid but serial, or nearly serial, identification of stimuli from left to right. Unexpectedly, my lab has found evidence that this reading routine is elicited when just two widely spaced letters are briefly presented and observers are asked to identify both letters. We find a large left-side performance advantage that is absent or reversed when the two letters are rotated to face to the left instead of to the right. Additional findings from RSVP (rapid serial visual presentation) lead us to suggest that both letters are selected by attention simultaneously, with the bottleneck at which one letter is prioritized sitting at a late stage of processing: identification or working memory consolidation. Thus, a rather minimal cue of letter orientation elicits a strong reading-direction-based prioritization routine, which will allow better understanding of both the bottleneck in visual identification and how reading overcomes it.

How learning to read affects the function and structure of ventral temporal cortex

Speaker: Kalanit Grill-Spector, Stanford University
Additional Authors: Marisa Nordt, Stanford University; Vaidehi Natu, Stanford University; Jesse Gomez, Stanford University and UC Berkeley; Brianna Jeska, Stanford University; Michael Barnett, Stanford University

Becoming a proficient reader requires substantial learning over many years. However, it is unknown how learning to read affects development of distributed visual representations across human ventral temporal cortex (VTC). Using fMRI and a data-driven approach, we examined if and how distributed VTC responses to characters (pseudowords and numbers) develop after age 5. Results reveal anatomical- and hemisphere-specific development. With development, distributed responses to words and characters became more distinctive and informative in lateral but not medial VTC, in the left, but not right, hemisphere. While development of voxels with both positive and negative preference to characters affected distributed information, only activity across voxels with positive preference to characters correlated with reading ability. We also tested what developmental changes occur to the gray and white matter, by obtaining in the same participants quantitative MRI and diffusion MRI data. T1 relaxation time from qMRI and mean diffusivity (MD) from dMRI provide independent measurements of microstructural properties. In character-selective regions in lateral VTC, but not in place-selective regions in medial VTC, we found that T1 and MD decreased from age 5 to adulthood, as well as in their adjacent white matter. T1 and MD decreases are consistent with tissue growth and were correlated with the apparent thinning of lateral VTC. These findings suggest the intriguing possibility that regions that show a protracted functional development also have a protracted structural development. Our data have important ramifications for understanding how learning to read affects brain development, and for elucidating neural mechanisms of reading disabilities.

Visual experiences during letter production contribute to the development of the neural systems supporting letter perception

Speaker: Karin James, Indiana University
Additional Authors: Sophia Vinci-Booher, Indiana University

Letter production is a perceptual-motor activity that creates visual experiences with the practiced letters. Past research has focused on the importance of the motor production component of writing by hand, with less emphasis placed on the potential importance of the visual percepts that are created. We sought to better understand how the different visual percepts that result from letter production are processed at different levels of literacy experience. During fMRI, three groups of participants, younger children, older children, and adults, ranging in age from 4.5 to 22 years old, were presented with dynamic and static re-presentations of their own handwritten letters, static presentations of an age-matched control’s handwritten letters, and typeface letters. In younger children, we found that only the ventral-temporal cortex was recruited, and only for handwritten forms. The response in the older children also included only the ventral-temporal cortex but was associated with both handwritten and typed letter forms. The response in the adults was more distributed than in the children and responded to all types of letter forms. Thus, the youngest children processed exemplars, but not letter categories, in the VTC, while older children and adults generalized their processing to many letter forms. Our results demonstrate the differences in the neural systems that support letter perception at different levels of experience and suggest that the perception of handwritten forms is an important component of how letter production contributes to developmental changes in brain processing.

2019 Symposia

Reading as a visual act: Recognition of visual letter symbols in the mind and brain

Organizer(s): Teresa Schubert, Harvard University
Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 1

A great deal of our time as adults is spent reading: Deriving meaning from visual symbols. Our brains, which may have evolved to recognize a lion, now recognize the written word “LION”. Without recognizing the letters that comprise a word, we cannot access its meaning or its pronunciation: Letter recognition forms the basis of our ability to read. In this symposium, we will highlight work by a growing number of researchers attempting to bridge the gap in research between vision and language by investigating letter recognition processes, from both a behavioral and brain perspective.

Rhythms of the brain, rhythms of perception

Organizer(s): Laura Dugué, Paris Descartes University & Suliann Ben Hamed, Université Claude Bernard Lyon I
Time/Room: Friday, May 17, 2019, 12:00 – 2:00 pm, Talk Room 2

The phenomenological, continuous, unitary stream of our perceptual experience appears to be an illusion. Accumulating evidence suggests that what we perceive of the world and how we perceive it rises and falls rhythmically at precise temporal frequencies. Brain oscillations (rhythmic neural signals) naturally appear as key neural substrates for these perceptual rhythms. How these brain oscillations condition local neuronal processes, long-range network interactions, and perceptual performance is a central question for visual neuroscience. In this symposium, we will present an overarching review of this question, combining evidence from monkey neural and human EEG recordings, TMS interference studies, and behavioral analyses.

What can be inferred about neural population codes from psychophysical and neuroimaging data?

Organizer(s): Fabian Soto, Department of Psychology, Florida International University
Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 1

Vision scientists have long assumed that it is possible to make inferences about neural codes from indirect measures, such as those provided by psychophysics (e.g., thresholds, adaptation effects) and neuroimaging. While this approach has been very useful for understanding the nature of visual representation in a variety of areas, it is not always clear under what circumstances and assumptions such inferences are valid. This symposium has the goal of highlighting recent developments in computational modeling that allow us to give clearer answers to such questions.

Visual Search: From youth to old age, from the lab to the world

Organizer(s): Beatriz Gil-Gómez de Liaño, Brigham & Women’s Hospital-Harvard Medical School and Cambridge University
Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 2

This symposium aims to show how visual search works in children, adults, and older age, in realistic settings and environments. We will review what we know about visual search in real and virtual scenes, and its applications to solving global human challenges. Insights into the brain processes underlying visual search across the lifespan will also be presented. The final objective is to better understand visual search as a whole across the lifespan and in the real world, and to demonstrate how science can be transferred to society, improving the lives of children as well as younger and older adults.

What Deafness Tells Us about the Nature of Vision

Organizer(s): Rain Bosworth, Ph.D., Department of Psychology, University of California, San Diego
Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 1

It is widely believed that loss of one sense leads to enhancement of the remaining senses; for example, that the deaf see better and the blind hear better. The reality, uncovered by 30 years of research, is more complex, and this complexity provides a fuller picture of the brain’s adaptability in the face of atypical sensory experiences. In this symposium, neuroscientists and vision scientists will discuss how sensory, linguistic, and social experiences during early development have lasting effects on perceptual abilities and visuospatial cognition. Presenters offer new findings that provide surprising insights into the neural and behavioral organization of the human visual system.

Prefrontal cortex in visual perception and recognition

Organizer(s): Biyu Jade He, NYU Langone Medical Center
Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 2

The role of prefrontal cortex (PFC) in vision remains mysterious. While it is well established that PFC neuronal activity reflects visual features, it is commonly thought that such feature encoding in PFC is only in the service of behaviorally relevant functions. However, recent emerging evidence challenges this notion and instead suggests that the PFC may be integral to visual perception and recognition. This symposium will address these issues from complementary angles, drawing insights from neuronal tuning in nonhuman primates, neuroimaging and lesion studies in humans, and recent developments in artificial intelligence, and considering implications for psychiatric disorders.

Prefrontal cortex in visual perception and recognition

Organizer(s): Biyu Jade He, NYU Langone Medical Center
Time/Room: Friday, May 17, 2019, 5:00 – 7:00 pm, Talk Room 2
Presenters: Diego Mendoza-Halliday, Vincent B. McGinty, Theofanis I Panagiotaropoulos, Hakwan Lau, Moshe Bar


Symposium Description

To date, the role of prefrontal cortex (PFC) in visual perception and recognition remains mysterious. While it is well established that PFC neuronal activity reflects visual stimulus features along a wide range of dimensions (e.g., position, color, motion direction, faces, …), it is commonly thought that such feature encoding in PFC serves only behaviorally relevant functions, such as working memory, attention, task rules, and report. However, emerging evidence is starting to challenge this notion, suggesting instead that contributions by the PFC may be integral to perceptual functions themselves. Currently, in the field of consciousness, an intense debate revolves around whether the PFC contributes to conscious visual perception. We believe that integrating insight from studies aiming to understand the neural basis of conscious visual perception with insight from studies elucidating visual stimulus feature encoding will be valuable for both fields, and necessary for understanding the role of PFC in vision.

This symposium brings together a group of leading scientists at different stages of their careers, all of whom have made important contributions to this topic. The talks will address the role of the PFC in visual perception and recognition from a range of complementary angles, including neuronal tuning in nonhuman primates, neuroimaging and lesion studies in humans, recent developments in artificial neural networks, and implications for psychiatric disorders. The first two talks, by Mendoza-Halliday and McGinty, will address neuronal coding of perceived visual stimulus features, such as motion direction and color, in the primate lateral PFC and orbitofrontal cortex, respectively. These two talks will also cover how neural codes for perceived visual stimulus features overlap with, or segregate from, neural codes for stimulus features maintained in working memory and neural codes for object values, respectively. Next, the talk by Panagiotaropoulos will describe neuronal firing and oscillatory activity in the primate PFC that reflect the content of visual consciousness, including both complex objects such as faces and low-level stimulus properties such as motion direction. The talk by Lau will extend these findings and provide an updated synthesis of the literature on the PFC’s role in conscious visual perception, including lesion studies and recent developments in artificial neural networks. Lastly, Bar will present a line of research that establishes the role that top-down input from PFC to the ventral visual stream plays in object recognition, touching upon topics of prediction and contextual facilitation.

In sum, this symposium will present an updated view of what we know about the role of PFC in visual perception and recognition, synthesizing insights gained from studies of conscious visual perception and from classic vision research, across primate neurophysiology, human neuroimaging, patient studies, and computational models. The symposium targets the general VSS audience, and will be accessible and of interest to both students and faculty.

Presentations

Partially-segregated population activity patterns represent perceived and memorized visual features in the lateral prefrontal cortex

Speaker: Diego Mendoza-Halliday, McGovern Institute for Brain Research at MIT, Cambridge, MA
Additional Authors: Julio Martinez-Trujillo, Robarts Research Institute, Western University, London, ON, Canada.

Numerous studies have shown that the lateral prefrontal cortex (LPFC) plays a major role in both visual perception and working memory. While neurons in LPFC have been shown to encode perceived and memorized visual stimulus attributes, it remains unclear whether these two functions are carried out by the same or different neurons and population activity patterns. To address this systematically, we recorded the activity of LPFC neurons in macaque monkeys performing two similar motion-direction match-to-sample tasks: a perceptual task, in which the sample moving stimulus remained perceptually available during the entire trial, and a memory task, in which the sample disappeared and had to be memorized during a delay. We found neurons with a wide variety of combinations of coding strength for perceived and memorized directions: some neurons preferentially or exclusively encoded perceived or memorized directions, whereas others encoded directions invariantly to their representational nature. Using population decoding analysis, we show that this form of mixed selectivity allows the population codes representing perceived and memorized directions to be both sufficiently distinct to determine whether a given direction was perceived or memorized, and sufficiently overlapping to generalize across tasks. We further show that these population codes represent visual feature space in a parametric manner, show more temporal dynamics for memorized than for perceived features, and are more closely linked to behavioral performance in the memory task than in the perceptual task. Our results indicate that a functionally diverse population of LPFC neurons provides a substrate for discriminating between perceptual and mnemonic representations of visual features.
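To make the logic of this population decoding analysis concrete, here is a minimal sketch under stated assumptions: the data are simulated trials-by-neurons firing-rate matrices (hypothetical, not the authors' recordings), and a generic linear classifier stands in for whatever decoder was actually used. It illustrates the three comparisons described above: within-task decoding, cross-task generalization, and decoding of representational nature (perceived vs. memorized).

```python
# Sketch of cross-task population decoding on simulated data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50

# Hypothetical data: one of 4 sample motion directions per trial, with a
# direction tuning pattern shared across tasks plus task-specific differences.
directions = rng.integers(0, 4, size=n_trials)
tuning = rng.normal(size=(4, n_neurons))  # shared direction tuning
X_percept = tuning[directions] + rng.normal(scale=1.0, size=(n_trials, n_neurons))
X_memory = 0.8 * tuning[directions] + rng.normal(scale=1.0, size=(n_trials, n_neurons))
X_memory += 0.5  # task-specific offset makes the two codes partially distinct

clf = LogisticRegression(max_iter=1000)

# 1) Within-task decoding of direction (cross-validated).
within = cross_val_score(clf, X_percept, directions, cv=5).mean()

# 2) Cross-task generalization: train on perceived, test on memorized directions.
clf.fit(X_percept, directions)
across = clf.score(X_memory, directions)

# 3) Decoding the representational nature itself (perceived vs. memorized):
# above-chance accuracy here means the two codes are partially distinct.
X_all = np.vstack([X_percept, X_memory])
y_task = np.repeat([0, 1], n_trials)
task_acc = cross_val_score(clf, X_all, y_task, cv=5).mean()

print(f"within-task: {within:.2f}, cross-task: {across:.2f}, task decoding: {task_acc:.2f}")
```

In this toy setup, high cross-task accuracy together with above-chance task decoding would correspond to population codes that are "sufficiently overlapping" yet "sufficiently distinct" in the sense described above.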

Mixed selectivity for visual features and economic value in the primate orbitofrontal cortex

Speaker: Vincent B. McGinty, Center for Molecular and Behavioral Neuroscience, Rutgers University – Newark

Primates use their acute sense of vision not only to identify objects, but also to assess their value, that is, their potential for benefit or harm. How the brain transforms visual information into value information is still poorly understood, but recent findings suggest a key role for the orbitofrontal cortex (OFC). The OFC comprises several cytoarchitectonic areas within the ventral frontal lobe, and has a long-recognized role in representing object value and organizing value-driven behavior. One of the OFC’s most striking anatomical features is the massive, direct input it receives from the inferotemporal cortex, a ventral temporal region implicated in object identification. A natural hypothesis, therefore, is that in addition to their well-documented value-coding properties, OFC neurons may also represent visual features in a manner similar to neurons in the ventral visual stream. To test this hypothesis, we recorded OFC neurons in macaque monkeys performing behavioral tasks in which the value of visible objects was manipulated independently of their visual features. Preliminary findings include a subset of OFC cells that were modulated by object value, but only in response to objects that shared a particular visual feature (e.g., the color red). This form of ‘mixed’ selectivity suggests that the OFC may be an intermediate computational stage between visual identification and value retrieval. Moreover, recent work showing similar mixed value-feature selectivity in inferotemporal cortex neurons suggests that neural mechanisms of object valuation may be distributed over a continuum of cortical regions, rather than compartmentalized in a strict hierarchy.

Mapping visual consciousness in the macaque prefrontal cortex

Speaker: Theofanis I Panagiotaropoulos, Neurospin, Paris, France

In multistable visual perception, the content of consciousness alternates spontaneously between mutually exclusive or mixed interpretations of competing representations. Identifying neural signals predictive of such intrinsically driven perceptual transitions is fundamental to resolving the mechanism, and identifying the brain areas, giving rise to visual consciousness. In a previous study, using a no-report paradigm of externally induced perceptual suppression, we showed that functionally segregated neural populations in the macaque prefrontal cortex explicitly reflect the content of consciousness and encode task phase. Here I will present results from a no-report paradigm of binocular motion rivalry, based on the optokinetic nystagmus (OKN) reflex as a read-out of spontaneous perceptual transitions, coupled with multielectrode recordings of local field potentials and single-neuron discharges in the macaque prefrontal cortex. An increase in the rate of oscillatory bursts in the delta-theta band (1-9 Hz), and a decrease in the beta band (20-40 Hz), were predictive of spontaneous transitions in the content of visual consciousness, which was also reliably reflected in single-neuron discharges. Mapping these perceptually modulated neurons revealed stripes of competing populations, also observed in the absence of OKN. These results suggest that the balance of stochastic prefrontal fluctuations is critical in refreshing conscious perception, and that prefrontal neural populations reflect the content of consciousness. Crucially, conscious content in the prefrontal cortex could be observed not only for faces and complex objects but also for low-level stimulus properties such as direction of motion, suggesting a reconsideration of the view that prefrontal cortex is not critical for consciousness.

Persistent confusion on the role of the prefrontal cortex in conscious visual perception

Speaker: Hakwan Lau, UCLA, USA

Is the prefrontal cortex (PFC) critical for conscious perception? Here we address three common misconceptions: (1) that PFC lesions do not affect subjective perception; (2) that PFC activity does not reflect specific perceptual content; and (3) that PFC involvement in studies of perceptual awareness is driven solely by the need to make reports required by the experimental tasks, rather than by subjective experience per se. These claims are often made in high-profile statements in the literature, but they are in fact grossly incompatible with the empirical findings. The available evidence highlights the PFC’s essential role in enabling the subjective experience of perception, as opposed to the objective capacity to perform visual tasks; conflating the two can also be a source of confusion. Finally, we will discuss the role of PFC in perception in light of current machine learning models. If the PFC is treated as somewhat akin to a randomly connected recurrent neural network, rather than to the early layers of a convolutional network, the lack of prominent lesion effects may be easily understood.

What’s real? Prefrontal facilitations and distortions

Speaker: Moshe Bar, Bar-Ilan University, Israel
Additional Authors: Shira Baror, Bar-Ilan University, Israel

By now, we know that visual perception involves much more than bottom-up processing. Specifically, we have shown that object recognition is facilitated, and sometimes even afforded, by top-down projections from the lateral and inferior prefrontal cortex. We have further found that the medial prefrontal cortex, in synchrony with the parahippocampal cortex and the retrosplenial cortex, forms the ‘contextual associations network’, a network that is sensitive to associative information in the environment and that utilizes contextual information to generate predictions about objects. Using various behavioral and imaging methods, we found that contextual processing facilitates object recognition very early in perception. Here, we go further to discuss the overlap of the contextual associations network with the default mode network, and its implications for enhancing conscious experience, within and beyond the visual realm. We corroborate this framework with findings implying that top-down predictions are not limited to visual information but are extracted from social or affective contexts as well. We present recent studies suggesting that although associative processes take place by default, they are nonetheless context dependent and may be inhibited according to goals. We will further discuss clinical implications, with recent findings demonstrating how activity in the contextual associations network is altered in visual tasks performed by patients with major depressive disorder. To conclude, contextual processing, sustained by the co-activation of frontal and memory-related brain regions, is suggested to constitute a critical mechanism in perception, memory, and thought in the healthy brain.
