2022 VSS Symposia

Beyond objects and features: High-level relations in visual perception

Friday, May 13, 2022, 12:00 – 2:00 pm EDT, Talk Room 1

Organizers: Chaz Firestone1, Alon Hafri1; 1Johns Hopkins University

The world contains not only objects and features (red apples, glass bowls, large dogs, and small cats), but also relations holding between them (apples contained in bowls, dogs chasing cats). What role does visual processing play in extracting such relations, and how do relational representations structure visual experience? This symposium brings together a variety of approaches to explore new perspectives on the visual processing of relations. A unifying theme is that relations deserve an equal place at the vision scientist’s table—and indeed that many traditional areas of vision science (including scene perception, attention, and memory) are fundamentally intertwined with relational representation.

Beyond representation and attention: Cognitive modulations of activity in visual cortex

Friday, May 13, 2022, 12:00 – 2:00 pm EDT, Talk Room 2

Organizers: Alex White1, Kendrick Kay2; 1Barnard College, Columbia University, 2University of Minnesota

This symposium addresses modulations of activity in visual cortex that go beyond classical notions of stimulus representation and attentional selection. For instance, activity patterns can reflect the contents of visual imagery, working memory, and expectations. In other cases, unstimulated regions of cortex are affected by the level of arousal or task difficulty. Furthermore, what might appear as general attentional amplifications are sometimes quite specific to stimulus type, brain region, and task. Although these effects are diverse, this symposium will seek unifying principles that are required to build general models of how sensory and cognitive signals are blended in visual cortex.

How we make saccades: selection, control, integration

Friday, May 13, 2022, 2:30 – 4:30 pm EDT, Talk Room 1

Organizers: Emma Stewart1, Bianca R. Baltaretu1; 1Justus-Liebig University Giessen, Germany

Making a saccade is a non-trivial process: the saccade target must be selected, the visuomotor system must execute a motor command, and the visual system must integrate pre- and postsaccadic information. Recent research has uncovered tantalizing new roles for established neural regions, giving an evolving and sophisticated perspective into processes underlying saccadic selection and control. Additionally, computational models have advanced our understanding of how saccades shape perception. This symposium will unify established knowledge about the disparate phases of saccade production, giving insight into the full life cycle of a saccade, from selection, to control, to the ultimate ensuing transsaccadic perception.

Perceptual Organization – Lessons from Neurophysiology, Human Behavior, and Computational Modeling

Friday, May 13, 2022, 2:30 – 4:30 pm EDT, Talk Room 2

Organizers: Dirk B. Walther1, James Elder2; 1University of Toronto, 2York University

A principal challenge for both biological and machine vision systems is to integrate and organize the diversity of cues received from the environment into the coherent global representations we experience and require to make good decisions and take effective actions. Early psychological investigations date back more than 100 years to the seminal work of the Gestalt school. But in the last 50 years, neuroscientific and computational approaches to understanding perceptual organization have become equally important, and a full understanding requires integration of all three approaches. This symposium will highlight the latest results and identify promising directions in perceptual organization research.

The probabilistic nature of vision: How should we evaluate the empirical evidence?

Friday, May 13, 2022, 5:00 – 7:00 pm EDT, Talk Room 1

Organizers: Ömer Dağlar Tanrıkulu1, Arni Kristjansson2; 1Williams College, 2University of Iceland

The view that our visual system represents sensory information probabilistically is prevalent in contemporary vision science. However, providing empirical evidence for such a claim has proved to be difficult since both probabilistic and non-probabilistic perceptual representations can, in principle, account for the experimental results in the literature. In this symposium, we discuss how vision research can provide empirical evidence relevant to the question of probabilistic perception. How can we operationalize probabilistic visual representations, and, if possible, how can we provide empirical evidence that settles the issue? Our goal is to encourage researchers to make their assumptions about probabilistic perception explicit.

What does the world look like? How do we know?

Friday, May 13, 2022, 5:00 – 7:00 pm EDT, Talk Room 2

Organizers: Mark Lescroart1, Benjamin Balas2, Kamran Binaee1, Michelle Greene3, Paul MacNeilage1; 1University of Nevada, Reno, 2North Dakota State University, 3Bates College

Statistical regularities in visual experience have been broadly shown to shape neural and perceptual visual processing. However, our ability to make inferences about visual processing based on natural image statistics is limited by the representativeness of natural image datasets. Here, we consider the consequences of using non-representative datasets, and we explore challenges in assembling datasets that are more representative in terms of the sampled environments, activities, and individuals. We explicitly address the following questions: what are we not sampling, why are we not sampling it, and how does this limit the inferences we can draw about visual processing?

What’s new in visual development?

Organizers: Oliver Braddick1, Janette Atkinson2; 1University of Oxford, 2University College London
Presenters: Oliver Braddick, Rebecca Saxe, T. Rowan Candy, Dennis M Levi, Janette Atkinson, Tessa Dekker


In the last two decades, the science of human development has moved beyond defining how and when basic visual functions emerge during infancy and childhood, through both technical and conceptual advances. First, technical progress in MRI and near-infrared spectroscopy, together with dedicated efforts by researchers, has made it possible to image and localize activity in the visual brain as early as the first months of life. These advances will be exemplified in the symposium by Rebecca Saxe’s presentation on the development of area specialization within the ventral visual stream in early infancy, and in Tessa Dekker’s research on the childhood development of decision making for efficient visual cue combination, combining neuroimaging with novel behavioural measures. Second, Rowan Candy’s presentation will show how measurements of infants’ eye movements and refractive state using Purkinje-image eye tracking and photorefraction have been refined to new levels of accuracy, providing novel insights into oculomotor development and how it interacts with binocular visual experience from the first weeks of life. This work offers the possibility of understanding the early development of strabismus, in which the accommodation-vergence synergy develops atypically. The resulting condition of amblyopia reflects early plasticity, but the work presented by Dennis Levi shows that this condition remains treatable into adulthood, using novel therapies designed to re-establish binocular interactions rather than simply strengthen the cortical input from the amblyopic eye, with new implications for extended critical periods. Third, these approaches, alongside new behavioural methods, have highlighted the interlocking relationships between basic visual functions and visuocognitive processes such as decision-making and attention.
Janette Atkinson’s presentation will define the key role of attention in visual development, and how different components of attention, depending on distinct brain networks, can be separated in young children and those with neurodevelopmental disorders. Different disorders (e.g. perinatal brain damage, Down and Williams syndromes) show distinctive profiles of attentional impairment. Imaging studies of cortical area and fibre tract development suggest that specific parietal and frontal networks are associated with individual differences in children’s visual decision-making and may also develop atypically across many developmental disorders. Tessa Dekker’s presentation will show how decision processes operating on visual information are as critical in development, including visuomotor development, as the development of basic sensitivity to visual feature properties. Detailed modelling of visual and visuomotor behaviour and localised brain responses indicates a prolonged development into middle and late childhood of the integrative processes required for efficient visual decisions. These talks illustrate some highlights in a much wider field of new insights into both typical and atypical visual development. Oliver Braddick’s presentation will outline the scope of this broader field, including pointers to work on automated visual assessment, infant eye-tracking with head-mounted cameras in the natural visual environment, isolating specific discriminations through frequency-tagging EEG, MRI analyses of developing brain connectivity, and the developmental impact of early and late visual deprivation. This whole range of work has greatly extended our understanding of the developing visual brain and its intimate links throughout neurocognitive systems, and allows us to identify the challenges ahead.

Presentations

New techniques, new questions in visual development

Oliver Braddick1; 1University of Oxford

In the last two decades, the range of research on visual development has been expanded by new methodologies, some represented in this symposium, which provide richer data and more direct insights into the visual brain mechanisms underlying development. This talk provides a brief overview of other advances which have started to answer some key questions in visual development: (i) application of eye tracking to automated visual assessment; (ii) head-mounted eye tracking yielding data on how infants sample their natural visual environment; (iii) frequency-tagging to refine the specificity of information yielded by EEG; (iv) MRI approaches to the connectivity and structure of the developing visual brain, including individual differences in development; (v) broader studies of the impact of visual deprivation on human visual development. As well as applying new methods, developmental research, in common with vision research more generally, has also extended its scope into the interfaces of vision with attention, action systems, decision processes, and other aspects of cognition. All these advances open the prospect of a wider and deeper understanding of the role of vision in the development of brain systems in infancy and childhood. However, there remain challenges in understanding the origins of individual differences across children in visuospatial, visuomotor, and visuosocial cognition.

The origins of specificity in ventral-stream cortical areas

Rebecca Saxe1, Heather Kosakowski1, Lindsey Powell1, Michael Cohen1; 1Massachusetts Institute of Technology

In human adults, visual responses to object categories are organized into large-scale maps; and within these maps are regions responding highly selectively to behaviourally significant stimulus categories, such as faces, bodies, and scenes. Here we used fMRI in awake infants to directly measure the early development of these visual responses. Our first study (n=9) found that extrastriate cortex of 4–6-month-old infants contains regions that respond preferentially to faces, scenes, and objects with a spatial organization similar to that of adults. However, the responses in these regions were not selective for a single category. In a second fMRI study (n=26, age 2-9 months) we replicated these results, again finding preferential responses to faces, scenes and objects (but not bodies) in extrastriate areas. Third, we again replicated spatially separable responses to faces and scenes, but not bodies, within lateral occipito-temporal cortex, using functional near infrared spectroscopy (fNIRS). These results demonstrate that the large-scale organization of category preferences in visual cortex is present within a few months after birth, but is subsequently refined through development.

Infants’ control of their visual experience through vergence and accommodation

T. Rowan Candy1; 1Indiana University

While a large literature has demonstrated the impact of abnormal visual experience on postnatal development of the visual system, the role of the ocular motor visual system in defining retinal visual experience during infancy and early childhood has been less well understood. Advances in instrumentation have made it possible for us to simultaneously track infants’ vergence eye movements and accommodation, showing that these responses are coupled, associated with sensitivity to binocular disparity, and can be dynamically adjusted, from the first weeks of life. This control, along with that of conjugate eye movements, enables infants to control their own visual experience in their dynamic three-dimensional world. In turn, visual experience enables most children to calibrate these coupled responses effectively, while others develop misalignment of their eyes and strabismus. A key question for future studies is the source of this individual failure: whether it lies in disrupted fusional vergence potential or in the ability to undergo adaptation. This talk will also briefly consider the following questions: How does the improving spatial resolution of the infant’s visual system affect the iterative development of motor and sensory visual systems? How can human visual development inform machine learning and robotics? How does development of the first stages of visual processing impact higher-order extrastriate function, and what is the influence of top-down processes?

Rethinking amblyopia and its therapies

Dennis M Levi1; 1University of California, Berkeley

Recent work has transformed our ideas about effective therapies for amblyopia. Since the 1700s, the clinical treatment for amblyopia has consisted of patching or penalizing the strong eye, to force the “lazy” amblyopic eye to work. This treatment has generally been limited to infants and young children during the “critical” or sensitive period of development. Over the last 20 years, we have learned much about the nature and neural mechanisms underlying the loss of spatial and binocular vision in amblyopia, and that a degree of neural plasticity persists well beyond the sensitive period. Importantly, the last decade has seen a resurgence of research into new approaches to the treatment of amblyopia both in children and adults, which emphasise that monocular therapies may not be the most effective for the fundamentally binocular disorder that is amblyopia. These approaches include perceptual learning, video game play and binocular methods aimed at reducing inhibition of the amblyopic eye by the strong fellow eye, and enhancing binocular fusion and stereopsis. This talk will highlight both the successes of these approaches in labs around the world, and their dismal failures in clinical trials. Reconciling these results raises important new questions that may help to focus future directions.

Typical and atypical brain development for components of visual attention

Janette Atkinson1; 1University College London

The development of attention mechanisms plays a key role in how visual information is used and in determining how the visual environment shapes visual development. However, visual attention is not a unitary process but involves multiple components of selective attention, sustained attention, and executive function(s). The Early Childhood Attention Battery (ECAB) allows these components to be separated in preschool children or children with equivalent mental age and defines individual and group differences in their ‘attention profile’ across these components. For example, we find that sustained visual attention is impaired in children with perinatal brain injury and/or very premature birth, but that this ability is relatively preserved in children with Williams (WS) and Down Syndrome (DS). Children with DS or WS have difficulties inhibiting prepotent responses in executive function tasks, although in WS these difficulties are much greater in the visuospatial than in the verbal domain. Spatial attention processes are particularly associated with structures in the dorsal stream and our past work has highlighted attention as an element of the ‘dorsal stream vulnerability’ characterising many developmental disorders. We will discuss these patterns of deficit across syndromes in relation to the dorsal and ventral attention networks and salience network defined by data from current connectivity studies in children and adults, including our findings on the tracts associated with children’s performance on visual decisions. Individual variations in the way these networks interact may determine the way top-down goals and bottom-up sensory stimulation are integrated in the control of visual behaviour in development.

Model-based MRI and psychophysics reveal a crucial role of decision-making in visual development in childhood

Tessa Dekker1, Marko Nardini2, Peter Jones1; 1University College London, 2University of Durham

Vision undergoes major development during infancy and childhood, demonstrated in improvements in both detection and recognition tasks. Classically, developmental vision research has focussed on sensitivity improvements in early visual channels. However, in recent years, decision-theoretic approaches have formalised how changes in visual performance could also result from more efficient use of available information, for example by optimising decision rules, cost functions, and priors. Using these quantitative frameworks, we are beginning to understand how these factors contribute to childhood vision. For example, improved depth perception in late childhood reflects a shift from processing depth cues independently to combining them in visual cortex, as demonstrated by the emergence of fMRI evidence for fused depth-cue representations within neural detectors in area V3B. Similarly, the development of visual motion, location, and object perception in part reflects more efficient combination of stimulus features (e.g., averaging dots across displays) as well as greater sensitivity to these features’ properties (e.g., single-dot motion). Thus, rather than greater sensitivity to basic visual information, substantial improvements in visual discrimination and detection may reflect better inferential capacities. This also applies to visually-guided movement tasks that emulate real-life action under risk: while adults can rapidly identify visuomotor strategies that minimise risk and uncertainty in new situations with complex cost factors, children up to age 10 years do not. Together, these studies show that improved decision-making plays a major role in visual development in childhood, and that modelling this role is needed to gain computational-level insight into the driving factors of human visual plasticity.


Wait for it: 20 years of temporal orienting

Organizers: Nir Shalev1,2,3, Anna Christina (Kia) Nobre1,2,3; 1Department of Experimental Psychology, University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford, 3Oxford Centre for Human Brain Activity, University of Oxford
Presenters: Jennifer Coull, Rachel Denison, Shlomit Yuval-Greenberg, Nir Shalev, Assaf Breska, Sander Los


The study of temporal preparation in guiding behaviour adaptively and proactively has long roots, traceable at least as far back as Wundt (1887). Additional forays into exploring the temporal dimension of anticipatory attention resurfaced during the early years of cognitive psychology. But the field of selective temporal attention has undoubtedly blossomed in the last twenty years. In 1998, Coull and Nobre introduced a temporal analogue of the visual spatial orienting paradigm (Posner, 1980), demonstrating sizeable and reproducible effects of temporal orienting, as well as ushering in research into its neural systems and mechanisms. These studies built on seminal psychological demonstrations of auditory perceptual facilitation by temporal rhythms (Jones, 1976). Over the ensuing years, investigating ‘when we attend’ has become increasingly mainstream. Today we recognise that our psychological and neural systems extract temporal information from recurring temporal rhythms, associations, probabilities, and sequences to enhance perception in the various modalities as well as across them. Sophisticated experimental designs have been developed, and various approaches have been applied to investigate the principles of selective temporal attention. Are there dedicated systems for anticipating events in time, leading to a common set of modulatory functions? Or are mechanisms for temporal orienting embedded within task-specific systems and dependent on the nature of the available temporal regularities (e.g., rhythms or associations)? In the following symposium, we illustrate contemporary research on selective temporal attention by bringing together researchers from across the globe using complementary approaches. Across the presentations, researchers explore the roles of temporal rhythms, associations, probabilities, and sequences using psychophysics, eye movements, neural measurements, neuropsychology, developmental psychology, and theoretical models.
In a brief introduction, Coull and Nobre will comment on the context of their initial temporal orienting studies and on the major strands and developments in the field. The first research presentation by Rachel Denison (with Marisa Carrasco) will introduce behavioural and neurophysiological studies demonstrating the selective nature of temporal attention and its relative costs and benefits to performance. The second presentation by Shlomit Yuval-Greenberg will show how anticipatory temporal attention influences oculomotor behaviour, with converging evidence from saccades, micro-saccades, and eye-blinks. The third presentation by Nir Shalev (with Sage Boettcher) will show how selective temporal attention generalises to dynamic and extended visual search contexts, picking up on learned conditional probabilities to guide perception and eye movements in adults and in children. The fourth presentation by Assaf Breska will provide evidence for a double dissociation between temporal attention based on temporal rhythms vs. associations by comparing performance of individuals with lesions in the cerebellum vs. basal ganglia. The final presentation by Sander Los will introduce a theoretical and computational model that proposes to account for various effects of temporal orienting across multiple time spans – from between successive trials to across contexts. A panel discussion will follow to consider present and forthcoming research challenges and opportunities. In addition to considering current issues in selective temporal attention, our aim is to lure our static colleagues into the temporal dimension.

Presentations

20 years of temporal orienting: an introduction

Jennifer Coull1,2, Anna Christina Nobre3,4,5; 1Aix-Marseille Universite, France, 2French National Center for Scientific Research (CNRS), 3Department of Experimental Psychology, University of Oxford, 4Wellcome Centre for Integrative Neuroscience, University of Oxford, 5Oxford Centre for Human Brain Activity, University of Oxford

In a brief introduction to the symposium, we will spell out the main questions and issues framing cognitive neuroscience studies of attention when we conducted our first temporal orienting studies, combining behavioural methods with PET, fMRI, and ERPs. We will reflect on the strands of research at the time which helped guide our thinking and interpretation of results, and then consider the many rich and varied ways in which the temporal attention field has evolved into its exciting, dynamic, and multifaceted guise.

The dynamics of temporal attention

Rachel Denison1, Marisa Carrasco1; 1Department of Psychology, New York University

Selection is the hallmark of attention: processing improves for attended items but is relatively impaired for unattended items. It is well known that visual spatial attention changes sensory signals and perception in this selective fashion. In the research we will present, we asked whether and how attentional selection happens across time. Specifically, we investigated voluntary temporal attention, the goal-driven prioritization of visual information at specific points in time. First, our experiments revealed that voluntary temporal attention is selective, resulting in perceptual tradeoffs across time. Perceptual sensitivity increased at attended times and decreased at unattended times, relative to a neutral condition in which observers were instructed to sustain attention. Temporal attention changed the precision of orientation estimates, rather than acting in an all-or-none fashion, and it was similarly effective at different visual field locations (fovea, horizontal meridian, vertical meridian). Second, we measured microsaccades and found that directing voluntary temporal attention increases the stability of the eyes in anticipation of a brief, attended stimulus, improving perception. Attention affected microsaccade dynamics even for perfectly predictable stimuli. Precisely timed gaze stabilization can therefore be an overt correlate of the allocation of temporal attention. Third, we developed a computational model of dynamic attention, which incorporates normalization and dynamic gain control, and accounts for the time-course of perceptual tradeoffs. Altogether, this research shows how voluntary temporal attention increases perceptual sensitivity at behaviorally relevant times, and helps manage inherent limits in visual processing across short time intervals. This research advances our understanding of attention as a dynamic process.

Oculomotor inhibition as a correlate of temporal orienting

Shlomit Yuval-Greenberg1,2, Noam Tal1, Dekel Abeles1; 1School of Psychological Sciences, Tel-Aviv University, 2Sagol School of Neuroscience, Tel-Aviv University

Temporal orienting in humans is typically assessed by measuring classical behavioral measures, such as reaction times (RTs) and accuracy rates, and by examining electrophysiological responses. But these methods have some disadvantages: RTs and accuracy rates provide only retrospective estimates of temporal orienting, and electrophysiological markers are often difficult to interpret. Fixational eye movements, such as microsaccades, occur continuously and involuntarily even when observers attempt to suppress them by holding steady fixation. These continuous eye movements can provide reliable and interpretable information on fluctuations of cognitive states across time, including those that are related to temporal orienting. In a series of studies, we show that temporal orienting is associated with the inhibition of oculomotor behaviors, including saccades, microsaccades and eye-blinks. First, we show that eye movements are inhibited prior to predictable visual targets. This effect was found for targets that were anticipated either because they were embedded in a rhythmic stream of stimulation or because they were preceded by an informative temporal cue. Second, we show that this effect is not specific to the visual modality but is present also for temporal orienting in the auditory modality. Last, we show that the oculomotor inhibition effect of temporal orienting is related to the construction of expectations and not to the estimation of interval duration, and also that it reflects a local trial-by-trial anticipation rather than a global arousal state. We conclude that pre-target inhibition of oculomotor behaviors is a reliable correlate of temporal orienting processes of various types and modalities.

Spatial-temporal predictions in a dynamic visual search

Nir Shalev1,2,3, Sage Boettcher1,2,3, Anna Christina Nobre1,2,3; 1Department of Experimental Psychology, University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford, 3Oxford Centre for Human Brain Activity, University of Oxford

Our environment contains many regularities that allow the anticipation of upcoming events. Waiting for a traffic light to change, waiting for an elevator to arrive, or using a toaster: all involve temporal ‘rules’ that can be learned and used to improve performance. We investigated the guidance of spatial attention based on spatial-temporal associations using a dynamic variation of a visual search task. On each trial, individuals searched for eight targets among distractors, all fading in and out of the display at different locations and times. The screen was split into four distinct quadrants. Crucially, we rendered four targets predictable by presenting them repeatedly in the same quadrants and at the same times throughout the task. The other four targets were randomly distributed in their locations and onsets. In the first part of our talk, we will show that participants are faster and more accurate in detecting predictable targets. We identify this benefit both in young adults (age 18-30) and in a cohort of young children (age 5-6). In the second part of the talk, we will present a further inquiry into the source of the behavioural benefit, contrasting sequential priming vs. memory guidance. We do so by introducing two more task variations: one in which the onsets and locations of all targets occasionally repeated in successive trials, and one in which the trial pattern was occasionally violated. The results suggest that both factors, i.e., priming and memory, provide a useful source for guiding attention.

Distinct mechanisms of rhythm- and interval-based attention shifting in time

Assaf Breska1,2; 1Department of Psychology, University of California, Berkeley, 2Helen Wills Neuroscience Institute, University of California, Berkeley

A fundamental principle of brain function is the use of temporal regularities to predict the timing of upcoming events and proactively allocate attention in time accordingly. Historically, predictions in rhythmic streams were explained by oscillatory entrainment models, whereas predictions formed based on associations between cues and isolated intervals were explained by dedicated interval timing mechanisms. A fundamental question is whether predictions in these two contexts are indeed mediated by distinct mechanisms, or whether both rely on a single mechanism. I will present a series of studies that combined behavioral, electrophysiological, neuropsychological and computational approaches to investigate the cognitive and neural architecture of rhythm- and interval-based predictions. I will first show that temporal predictions in both contexts similarly modulate behavior and anticipatory neural dynamics measured by EEG such as ramping activity, as well as phase-locking of delta-band activity, previously taken as a signature of oscillatory entrainment. Second, I will show that cerebellar degeneration patients were impaired in forming temporal predictions based on isolated intervals but not based on rhythms, while Parkinson’s disease patients showed the reverse pattern. Finally, I will demonstrate that cerebellar degeneration patients show impaired temporal adjustment of ramping activity and delta-band phase-locking, as well as timed suppression of beta-band activity during interval-based prediction. Using computational modelling, I will identify the aspects of neural dynamics that prevail in rhythm-based prediction despite impaired interval-based prediction. To conclude, I will discuss implications for rhythmic entrainment and interval timing models, and the role of subcortical structures in temporal prediction and attention.

Is temporal orienting a voluntary and controlled process?

Sander Los1, Martijn Meeter1, Wouter Kruijne2; 1Vrije Universiteit Amsterdam, 2University of Groningen

Temporal orienting involves the allocation of attentional resources to future points in time to facilitate the processing of an expected target stimulus. To examine temporal orienting, studies have varied the foreperiod between a warning stimulus and a target stimulus, with a cue specifying the duration of the foreperiod at the start of each trial with high validity (typically 80%). It has invariably been found that the validity of the cue has a substantial behavioral effect (typically expressed in reaction times) on short-foreperiod trials but not on long-foreperiod trials. The standard explanation of this asymmetry starts with the idea that, at the start of each trial, the participant voluntarily aligns the focus of attention with the moment specified by the cue. On short-foreperiod trials, this policy leads to an effect of cue validity, reflecting differential temporal orienting. By contrast, on long-foreperiod trials, an initially incorrect early focus of attention (induced by an invalid cue) will be discovered during the ongoing foreperiod, allowing re-orienting toward a later point in time and thus preventing behavioral costs. In this presentation, we challenge this view. Starting from our recent multiple trace theory of temporal preparation (MTP), we developed an alternative explanation based on the formation of associations between specific cues and foreperiods. We will show that MTP accounts naturally for the typical findings in temporal orienting without recourse to voluntary and controlled processes. We will discuss initial data that serve to distinguish between the standard view and the view derived from MTP.


What has the past 20 years of neuroimaging taught us about human vision and where do we go from here?

Organizers: Susan Wardle1, Chris Baker1; 1National Institutes of Health
Presenters: Aina Puce, Frank Tong, Janneke Jehee, Justin Gardner, Marieke Mur


Over the past 20 years, neuroimaging methods have become increasingly popular for studying the neural mechanisms of vision in the human brain. To celebrate 20 years of VSS, this symposium will focus on the contribution that brain imaging techniques have made to our field of vision science. In the year 2000, we knew about retinotopy and category-selectivity, but neuroimaging was still evolving. Now in 2020, the field is taking an increasingly computational approach, applying neuroimaging data to questions about vision. The aim of this symposium is to provide both a historical context and a forward focus for the role of neuroimaging in vision science. Our speakers are a diverse mix of pioneering researchers who applied neuroimaging in the early days of the technique, and those who have more recently continued to push the field forward through creative application of imaging techniques. We have also selected speakers who use a range of different methodological approaches to investigate both low-level and high-level vision, including computational and modeling techniques, multivariate pattern analysis and representational similarity analysis, and methods that aim to link brain to behavior. The session will begin with a short 5-10 minute introductory talk by Susan Wardle to provide context for the symposium. Talks by the five selected speakers will be 20 minutes each, with 1-2 minutes for clarification questions after each talk. The session will end with a longer 10-15 minute general discussion period. In the first talk, Aina Puce will consider the contribution made by multiple neuroimaging techniques such as fMRI and M/EEG towards understanding the social neuroscience of face perception, and how technological advances are continuing to shape the field. In the second talk, Frank Tong will discuss progress made in understanding top-down feedback in the visual system using neuroimaging, predictive coding models, and deep learning networks.
In the third talk, Janneke Jehee will argue that a crucial next step in visual neuroimaging is to connect cortical activity to behavior, using perceptual decision-making as an illustrative example. In the fourth talk, Justin Gardner will discuss progress made in using neuroimaging to link cortical activity to human visual perception, with a focus on quantitative linking models. In the final talk, Marieke Mur will reflect on what fMRI has taught us about high-level visual processes, and outline how understanding the temporal dynamics of object recognition will play an important role in the development of the next generation of computational models of human vision. Overall, the combination of a historical perspective and an overview of current trends in neuroimaging presented in this symposium will lead to informed discussion about what future directions will prove most fruitful for answering fundamental questions in vision science.

Presentations

Technological advances are the scaffold for propelling science forward in social neuroscience

Aina Puce1; 1Indiana University

Over the last 20 years, neuroimaging techniques [e.g. EEG/MEG, fMRI] have been used to map neural activity within a core and extended brain network to study how we use social information from faces. By the 20th century’s end, neuroimaging methods had identified the building blocks of this network, but how these parts came together to make a whole was unknown. Since then, technological advances in data acquisition and analysis have occurred in a number of spheres. First, network neuroscience has advanced our understanding of which brain regions functionally connect with one another on a regular basis. Second, improvements in white matter tract tracing have allowed putative underlying white matter pathways to be identified for some functional networks. Third, [non-]invasive brain stimulation has allowed the identification of some causal relationships between brain activity and behavior. Fourth, technological developments in portable EEG and MEG systems have propelled social neuroscience out of the laboratory and into the [ecologically valid] wide world, changing activation task design as well as data analysis. Potential advantages of these ‘wild type’ approaches include the increased signal-to-noise ratio provided by a live, interactive 3D visual stimulus, e.g. another human being, instead of an isolated static face on a computer monitor. Fifth, work with machine learning algorithms has begun to differentiate brain from non-brain activity in these datasets. Finally, we are ‘putting the brain back into the body’, whereby recordings of brain activity are made in conjunction with physiological signals including EKG, EMG, pupil dilation, and eye position.

Understanding the functional roles of top-down feedback in the visual system

Frank Tong1; 1Vanderbilt University

Over the last 20 years, neuroimaging techniques have shed light on the modulatory nature of top-down feedback signals in the visual system. What is the functional role of top-down feedback and might there be multiple types of feedback that can be implemented through automatic and controlled processes? Studies of voluntary covert attention have demonstrated the flexible nature of attentional templates, which can be tuned to particular spatial locations, visual features or to the structure of more complex objects. Although top-down feedback is typically attributed to visual attention, there is growing evidence that multiple forms of feedback exist. Studies of visual imagery and working memory indicate the flexible nature of top-down feedback from frontal-parietal areas to early visual areas for maintaining and manipulating visual information about stimuli that are no longer in view. Theories of predictive coding propose that higher visual areas encode feedforward signals according to learned higher order patterns, and that any unexplained components are fed back as residual error signals to lower visual areas for further processing. These feedback error signals may serve to define an image region as more salient, figural, or stronger in apparent contrast. Here, I will discuss both theory and supporting evidence of multiple forms of top-down feedback, and consider how deep learning networks can be used to evaluate the utility of predictive coding models for understanding vision. I will go on to discuss what important questions remain to be addressed regarding the nature of feedback in the visual system.

Using neuroimaging to better understand behavior

Janneke Jehee1,2; 1Donders Institute for Brain, Cognition and Behavior, 2Radboud University Nijmegen, Nijmegen, Netherlands

Over the past 20 years, functional MRI has become an important tool in the methodological arsenal of the vision scientist. The technique has led to many remarkable discoveries, ranging from the identification of human brain areas involved in face perception to the finding that early visual activity carries information about stimulus orientation. While providing invaluable insights, most of the work to date has sought to link visual stimuli to a cortical response, with far less attention paid to how such cortical stimulus representations might give rise to behavior. I will argue that a crucial next step in visual neuroimaging is to connect cortical activity to behavior, and will illustrate this using our recent work on perceptual decision-making.

Using neuroimaging to link cortical activity to human visual perception

Justin Gardner1; 1Stanford University

Over the last 20 years, human neuroimaging, in particular BOLD imaging, has become the dominant technique for determining visual field representations and measuring selectivity to various visual stimuli in the human cortex. Indeed, BOLD imaging has proven decisive in settling long-standing disputes for which other techniques, such as electrophysiological recordings of single neurons, provided only equivocal evidence, for example by showing that cognitive influences due to attention or perceptual state can be readily measured in so-called early sensory areas. Part of this success is due to the ability to make precise behavioral measurements through psychophysics in humans, which can quantitatively measure such cognitive effects. Leveraging this ability to make quantitative behavioral measurements with concurrent measurement of cortical activity with BOLD imaging, we can provide answers to a central question of visual neuroscience: what is the link between cortical activity and perceptual behavior? To make continued progress over the next 20 years towards answering this question, we must turn to quantitative linking models that formalize hypothesized relationships between cortical activity and perceptual behavior. Such quantitative linking models are falsifiable hypotheses whose success or failure can be determined by their ability to quantitatively account for behavioral and neuroimaging measurements. These linking models will allow us to determine the cortical mechanisms that underlie visual perception and account for cognitive influences such as attention on perceptual behavior.

High-level vision: from category selectivity to representational geometry

Marieke Mur1; 1Western University, London ON, Canada

Over the last two decades, functional magnetic resonance imaging (fMRI) has provided important insights into the organization and function of the human visual system. In this talk, I will reflect on what fMRI has taught us about high-level visual processes, with an emphasis on object recognition. The discovery of object-selective and category-selective regions in high-level visual cortex suggested that the visual system contains functional modules specialized for processing behaviourally relevant object categories. Subsequent studies, however, showed that distributed patterns of activity across high-level visual cortex also contain category information. These findings challenged the idea of category-selective modules, suggesting that these regions may instead be clusters in a continuous feature map. Consistent with this organizational framework, object representations in high-level visual cortex are at once categorical and continuous: the representational code emphasizes category divisions of longstanding evolutionary relevance while still distinguishing individual images. This body of work provides important insights into the nature of high-level visual representations, but it leaves open how these representations are dynamically computed from images. In recent years, deep neural networks have begun to provide a computationally explicit account of how the ventral visual stream may transform images into meaningful representations. I will close with a discussion of how neuroimaging data can benefit the development of the next generation of computational models of human vision and how understanding the temporal dynamics of object recognition will play an important role in this endeavor.


What we learn about the visual system by studying non-human primates: Past, present and future

Organizers: Rich Krauzlis1, Michele Basso2; 1National Eye Institute, 2Brain Research Institute, UCLA
Presenters: Ziad Hafed, Farran Briggs, Jude Mitchell, Marlene Cohen, Yasmine El-Shamayleh, Bevil Conway


The symposium includes six highly regarded mid-career and junior investigators (Ziad Hafed, Farran Briggs, Jude Mitchell, Marlene Cohen, Yasmine El-Shamayleh, Bevil Conway) who use NHPs to study a range of topics (e.g., attention, eye movements, object and color perception) of interest to the VSS membership. Each speaker will have 15’ for their talk plus time for questions. Ziad Hafed will review how an observation about fixational eye movements, first described about 20 years ago, led to a series of studies exploring the brain circuits that control both attention and saccades. We now have a much deeper understanding of the underlying roots and implications of the correlation between attention and microsaccades, the role of subcortical and cortical visual structures in this process, and the importance of approaching vision from an active, rather than passive, perspective. Farran Briggs will describe a series of approaches using traditional and modern tools to explore how cortical feedback influences early visual processing. Transformations in visual signals traversing the feedforward retino-geniculo-cortical pathways are well understood, but the contribution of corticogeniculate feedback to visual perception is less clear. Through examinations of the morphology, physiology and function of corticogeniculate neurons, a new hypothesis emerges in which corticogeniculate feedback regulates the timing and precision of feedforward visual signal transmission. Jude Mitchell will describe the role of different classes of neurons in visual cortex. Over the past twenty years, there have been major advances towards manipulating and tagging different neuronal classes, and new molecular and recording techniques that distinguish cell class are now becoming available for use in NHPs. Jude will describe the application of these approaches in the marmoset monkey to understand how eye movements modulate early sensory processing as a function of cell class and cortical layer. 
Marlene Cohen will describe insights gained from studying populations of neurons. Twenty years ago, most NHP work focused on the activity of single neurons and relatively simple stimuli and behaviors. It is now possible to record from many neurons in multiple brain areas while monkeys make judgments about a variety of stimulus properties. Marlene will describe recent work showing that these complex data sets can reveal strikingly simple relationships between neuronal populations and visual perception. Yasmine El-Shamayleh will review a well-established framework for studying visual object processing in the primate cerebral cortex – one that has stood the test of two decades of experimental investigation. She will then describe how she plans to leverage new, cell type-specific viral vector-based optogenetic approaches to begin to elucidate the detailed circuit-level mechanisms in extrastriate cortex that govern this visual function. Bevil Conway will discuss how functional MRI in NHPs has advanced our understanding of the ventral visual pathway. Combining fMRI with neurophysiology has facilitated the systematic study of extrastriate cortex, guided targeted recordings from neurons in functionally identified patches of cortex, and provided a direct comparison of brain activity in humans and monkeys. This work underscores the importance of understanding how functionally identified populations of neurons interact to enable perception of colors, objects, places and faces.

Presentations

Foveal action for the control of extrafoveal vision

Ziad Hafed1; 1Eberhard Karls Universität Tübingen

Ziad Hafed will review how an observation about fixational eye movements, first described about 20 years ago, led to a series of studies exploring the brain circuits that control both attention and saccades. We now have a much deeper understanding of the underlying roots and implications of the correlation between attention and microsaccades, the role of subcortical and cortical visual structures in this process, and the importance of approaching vision from an active, rather than passive, perspective.

The role of corticogeniculate feedback in visual perception

Farran Briggs1; 1University of Rochester

Farran Briggs will describe a series of approaches using traditional and modern tools to explore how cortical feedback influences early visual processing. Transformations in visual signals traversing the feedforward retino-geniculo-cortical pathways are well understood, but the contribution of corticogeniculate feedback to visual perception is less clear. Through examinations of the morphology, physiology and function of corticogeniculate neurons, a new hypothesis emerges in which corticogeniculate feedback regulates the timing and precision of feedforward visual signal transmission.

Neural circuits for pre-saccadic attention in the marmoset monkey

Jude Mitchell1; 1University of Rochester

Jude Mitchell will describe the role of different classes of neurons in visual cortex. Over the past twenty years, there have been major advances towards manipulating and tagging different neuronal classes, and new molecular and recording techniques that distinguish cell class are now becoming available for use in NHPs. Jude will describe the application of these approaches in the marmoset monkey to understand how eye movements modulate early sensory processing as a function of cell class and cortical layer.

Multi-neuron approaches to studying visual perception and decision-making

Marlene Cohen1; 1University of Pittsburgh

Marlene Cohen will describe insights gained from studying populations of neurons. Twenty years ago, most NHP work focused on the activity of single neurons and relatively simple stimuli and behaviors. It is now possible to record from many neurons in multiple brain areas while monkeys make judgments about a variety of stimulus properties. Marlene will describe recent work showing that these complex data sets can reveal strikingly simple relationships between neuronal populations and visual perception.

Neural circuits for visual object processing

Yasmine El-Shamayleh1; 1Columbia University

Yasmine El-Shamayleh will review a well-established framework for studying visual object processing in the primate cerebral cortex – one that has stood the test of two decades of experimental investigation. She will then describe how she plans to leverage new, cell type-specific viral vector-based optogenetic approaches to begin to elucidate the detailed circuit-level mechanisms in extrastriate cortex that govern this visual function.

Parallel multi-stage processing of inferior temporal cortex: faces, objects, colors and places

Bevil Conway1; 1National Eye Institute

Bevil Conway will discuss how functional MRI in NHPs has advanced our understanding of the ventral visual pathway. Combining fMRI with neurophysiology has facilitated the systematic study of extrastriate cortex, guided targeted recordings from neurons in functionally identified patches of cortex, and provided a direct comparison of brain activity in humans and monkeys. This work underscores the importance of understanding how functionally identified populations of neurons interact to enable perception of colors, objects, places and faces.


Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University
Presenters: Susana Marcos, Brian Vohnsen, Ann Elsner, Juliette E. McGregor


This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions.

Presentations

Foveal aberrations and the impact on vision

Susana Marcos1; 1Institute of Optics, CSIC

Optical aberrations degrade the quality of images projected on the retina. The magnitude and orientation of the optical aberrations vary dramatically across individuals. Changes also occur with processes such as accommodation and aging, as well as with corneal and lens disease and surgery. Certain corrections, such as multifocal lenses for presbyopia, modify the aberration pattern to create simultaneous vision or extended depth-of-focus. Ocular aberrometers have made their way into clinical practice. In addition, quantitative 3-D anterior segment imaging has allowed quantifying the morphology and alignment of the cornea and lens, linking ocular geometry and aberrations through custom eye models, and shedding light on the factors contributing to optical degradation. However, perceived vision is affected by the eye’s aberrations in more ways than those purely predicted by optics, as the eye appears to be adapted to the magnitude and orientation of its own optical blur. Studies using Adaptive Optics not only reveal the impact of manipulating the optical aberrations on vision, but also show that the neural code for blur is driven by the subject’s own aberrations.

The integrated Stiles-Crawford effect: understanding the role of pupil size and outer-segment length in foveal vision

Brian Vohnsen1; 1Advanced Optical Imaging Group, School of Physics, University College Dublin, Ireland

The Stiles-Crawford effect of the first kind (SCE-I) describes a psychophysical change in perceived brightness related to the angle of incidence of a ray of light onto the retina. The effect is commonly explained as being due to angular-dependent waveguiding by foveal cones, yet the SCE-I is largely absent from similarly shaped rods, suggesting that a mechanism other than waveguiding is at play. To examine this, we have devised a flickering pupil method that directly measures the integrated SCE-I for normal pupil sizes in normal vision rather than relying on mathematical integration of the standard SCE-I function as determined with Maxwellian light. Our results show that the measured effective visibility for normal foveal vision is related to visual pigment density in the three-dimensional retina rather than waveguiding. We confirm the experimental findings with a numerical absorption model using the Beer–Lambert law for the visual pigments.
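The absorption model referred to here rests on the Beer–Lambert law, under which the fraction of light absorbed grows with the pigment path length through the outer segment. A minimal numerical sketch of that relationship (the absorption coefficient and outer-segment lengths below are invented for illustration, not values from this study):

```python
import math

def absorbed_fraction(alpha_per_um: float, path_um: float) -> float:
    """Beer-Lambert law: fraction of incident light absorbed over a
    pigment path of length path_um with absorption coefficient alpha."""
    return 1.0 - math.exp(-alpha_per_um * path_um)

# Longer outer segments absorb a larger fraction of the incident light,
# which is the qualitative point of the pigment-density account above.
for length_um in (20.0, 30.0, 40.0):
    frac = absorbed_fraction(0.015, length_um)
    print(f"outer segment {length_um:.0f} um: {frac:.1%} absorbed")
```

The absorbed fraction saturates for long paths, so changes in effective path length matter most when total pigment density is modest.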

Structure of cones and microvasculature in healthy and diseased eyes

Ann Elsner1; 1Indiana University School of Optometry

There are large differences in the distribution of cones in the living human retina, with density varying more at the fovea than at greater eccentricities. The size and shape of the foveal avascular zone also vary across individuals, and distances between capillaries can be greatly enlarged in disease. While diseases such as age-related macular degeneration and diabetes greatly impact both cones and retinal vessels, some cones can survive for decades, although their distributions become more irregular. Surprisingly, in some diseased eyes, cone density at retinal locations outside those most compromised can exceed cone density in control subjects.

Imaging of calcium indicators in retinal ganglion cells for understanding foveal function

Juliette E. McGregor1; 1Centre for Visual Science, University of Rochester

The fovea mediates much of our conscious visual perception but is a delicate retinal structure that is difficult to investigate physiologically using traditional approaches. By expressing the calcium indicator protein GCaMP6s in retinal ganglion cells (RGCs) of the living primate we can optically read out foveal RGC activity in response to visual stimuli presented to the intact eye. Pairing this with adaptive optics ophthalmoscopy it is possible to both present highly stabilized visual stimuli to the fovea and read out retinal activity on a cellular scale in the living animal. This approach has allowed us to map the functional architecture of the fovea at the retinal level and to classify RGCs in vivo based on their responses to chromatic stimuli. Recently we have used this platform as a pre-clinical testbed to demonstrate successful restoration of foveal RGC responses following optogenetic therapy.


Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem
Presenters: Jeremy M Wolfe, Shaul Hochstein, Catherine Tallon-Baudry, James DiCarlo, Merav Ahissar


Forty years ago, Anne Treisman presented Feature Integration Theory (FIT; Treisman & Gelade, 1980). FIT proposed a parallel, preattentive first stage and a serial second stage controlled by visual selective attention, so that search tasks could be divided into those performed by the first stage, in parallel, and those requiring serial processing and further “binding” in an object file (Kahneman, Treisman, & Gibbs, 1992). Ten years later, Jeremy Wolfe expanded FIT with Guided Search Theory (GST), suggesting that information from the first stage could guide selective attention in the second (Wolfe, Cave & Franzel, 1989; Wolfe, 1994). His lab’s recent visual search studies enhanced this theory (Wolfe, 2007), including studies of factors governing search (Wolfe & Horowitz, 2017), hybrid search (Wolfe, 2012; Nordfang, Wolfe, 2018), and scene comprehension capacity (Wick … Wolfe, 2019). Another ten years later, Shaul Hochstein and Merav Ahissar proposed Reverse Hierarchy Theory (RHT; Hochstein, Ahissar, 2002), turning FIT on its head by suggesting that early conscious gist perception, like early generalized perceptual learning (Ahissar, Hochstein, 1997, 2004), reflects high cortical level representations. Later feedback, returning to lower levels, allows for conscious perception of scene details, already represented in earlier areas. Feedback also enables detail-specific learning. Follow-up work found that the primacy of top-level gist perception leads to counter-intuitive results: faces pop out of heterogeneous object displays (Hershler, Hochstein, 2005), individuals with neglect syndrome are better at global tasks (Pavlovskaya … Hochstein, 2015), and gist perception includes ensemble statistics (Khayat, Hochstein, 2018, 2019; Hochstein et al., 2018).
Ahissar’s lab mapped RHT dynamics to auditory systems (Ahissar, 2007; Ahissar et al., 2008) in both perception and successful/failed (from developmental disabilities) skill acquisition (Lieder … Ahissar, 2019). James DiCarlo has been pivotal in confronting feedforward-only versus recurrency-integrating network models of extra-striate cortex, considering animal/human behavior (DiCarlo, Zoccolan, Rust, 2012; Yamins … DiCarlo, 2014; Yamins, DiCarlo, 2016). His large-scale electrophysiology recordings from the behaving primate ventral stream, presented with challenging object-recognition tasks, relate directly to whether recurrent connections are critical or superfluous (Kar … DiCarlo, 2019). He recently developed combined deep artificial neural network modeling, synthesized image presentation, and electrophysiological recording to control neural activity of specific neurons and circuits (Bashivan, Kar, DiCarlo, 2019). Catherine Tallon-Baudry uses MEG/EEG recordings to study neural correlates of conscious perception (Tallon-Baudry, 2012). She studied the roles of human brain oscillatory activity in object representation and visual search tasks (Tallon-Baudry, 2009), analyzing effects of attention and awareness (Wyart, Tallon-Baudry, 2009). She has directly tested, with behavior and MEG recording, implications of hierarchy and reverse hierarchy theories, including global information processing being first and mandatory in conscious perception (Campana, Tallon-Baudry, 2013; Campana … Tallon-Baudry, 2016). In summary, bottom-up versus top-down processing theories reflect on the essence of perception: the dichotomy of rapid vision-at-a-glance versus slower vision-with-scrutiny, the roles of attention, the hierarchy of visual representation levels, the roles of feedback connections, the sites and mechanisms of various visual phenomena, and the sources of perceptual/cognitive deficits (neglect, dyslexia, ASD).
Speakers at the symposium will address these issues from both a historical and a forward-looking perspective.

Presentations

Is Guided Search 6.0 compatible with Reverse Hierarchy Theory?

Jeremy M Wolfe1; 1Harvard Medical School and Visual Attention Lab Brigham & Women’s Hospital

It has been 30 years since the first version of the Guided Search (GS) model of visual search was published. As new data about search accumulated, GS needed modification. The latest version is GS6. GS argues that visual processing is capacity-limited and that attention is needed to “bind” features together into recognizable objects. The core idea of GS is that the deployment of attention is not random but is “guided” from object to object. For example, in a search for your black shoe, search would be guided toward black items. Earlier versions of GS focused on top-down (user-driven) and bottom-up (salience) guidance by basic features like color. Subsequent research adds guidance by history of search (e.g. priming), value of the target, and, most importantly, scene structure and meaning. Your search for the shoe will be guided by your understanding of the scene, including some sophisticated information about scene structure and meaning that is available “preattentively”. In acknowledging the initial, preattentive availability of something more than simple features, GS6 moves closer to ideas that are central to the Reverse Hierarchy Theory of Hochstein and Ahissar. As is so often true in our field, this is another instance where the answer is not Theory A or Theory B, even when they seem diametrically opposed. The next theory tends to borrow and synthesize good ideas from both predecessors.
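The guidance idea described above can be caricatured as a priority map: each candidate item's features are weighted by a top-down target template, and attention is deployed to the highest-priority item first. A toy sketch of that scheme (the items, features, and weights are invented for illustration and are not GS6 parameters):

```python
# Candidate items described by simple feature activations.
items = {
    "black shoe": {"black": 1.0, "shoe": 1.0},
    "brown shoe": {"black": 0.0, "shoe": 1.0},
    "black sock": {"black": 1.0, "shoe": 0.0},
}

# Top-down guidance template for the target "your black shoe".
template = {"black": 0.6, "shoe": 0.4}

def priority(features):
    """Weighted sum of feature activations under the guidance template."""
    return sum(w * features.get(f, 0.0) for f, w in template.items())

# Attention visits items in order of decreasing priority,
# so search is guided toward black, shoe-like items first.
ranked = sorted(items, key=lambda name: priority(items[name]), reverse=True)
print(ranked[0])  # -> black shoe
```

A fuller model would add bottom-up salience, history, and value terms to the same weighted sum, which is how later versions of GS extend the basic guidance signal.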

Gist perception precedes awareness of details in various tasks and populations

Shaul Hochstein1; 1Life Sciences, Hebrew University, Jerusalem

Reverse Hierarchy Theory makes several dramatic propositions regarding conscious visual perception. These include the suggestion that, while the visual system receives scene details and builds from them representations of the objects, layout, and structure of the scene, the first conscious percept is nevertheless that of the gist of the scene – the result of implicit bottom-up processing. Only later does conscious perception attain scene details by return to lower cortical area representations. Recent studies at our lab analyzed phenomena whereby participants receive and perceive the gist of the scene before and without need for consciously knowing the details from which the gist is constructed. One striking conclusion is that “pop-out” is an early high-level effect, and is therefore not restricted to basic element features. Thus, faces pop out from heterogeneous objects, and participants are unaware of rejected objects. Our recent studies of ensemble statistics perception find that computing a set’s mean does not require knowledge of its individual members. This mathematically-improbable computation is both useful and natural for neural networks. I shall discuss just how and why set means are computed without need for explicit representation of individuals. Interestingly, our studies of neglect patients find that their deficit is in tasks requiring focused attention to local details, and not in those requiring only global perception. Neglect patients are quite good at pop-out detection and include left-side elements in ensemble perception.
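The claim that a set's mean can be computed without representing its individuals has a simple computational analogue: an incremental update that retains only the current mean and a count, discarding each item as it arrives. A toy sketch (the item values are arbitrary and purely illustrative):

```python
def running_mean(stream):
    """Incremental mean: after each item, only the current mean and the
    item count are retained; no individual value is ever stored."""
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n  # standard incremental-mean update
    return mean

sizes = [3.1, 4.7, 2.9, 5.2, 4.1]
print(running_mean(sizes))  # matches sum(sizes)/len(sizes) up to rounding
```

The point of the analogy is only that a mean is accumulable: a system can track it continuously without ever holding an explicit representation of any individual item.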

From global to local in conscious vision: behavior & MEG

Catherine Tallon-Baudry1; 1CNRS Cognitive Neuroscience, Ecole Normale Supérieure, Paris

The Reverse Hierarchy Theory makes strong predictions about conscious vision. Local details would be processed in early visual areas before being rapidly and automatically combined into global information in higher-order areas, where conscious percepts would initially emerge. The theory thus predicts that consciousness arises initially in higher-order visual areas, independently of attention and task, and that additional, optional attentional processes operating from top to bottom are needed to retrieve local details. We designed novel textured stimuli that, unlike Navon letters, are truly hierarchical. Taking advantage of behavioral measures and of the decoding of MEG data, we show that global information is consciously perceived faster than local details, and that global information is computed during early visual processing regardless of task demands. These results support the idea that global dominance in conscious percepts originates in the hierarchical organization of the visual system. Implications for the nature of conscious visual experience and its underlying neural mechanisms will be discussed.

Next-generation models of recurrent computations in the ventral visual stream

James DiCarlo1; 1Neuroscience, McGovern Inst. & Brain & Cognitive Sci., MIT

Understanding the mechanisms underlying visual intelligence requires the combined efforts of brain and cognitive scientists and of forward engineering that emulates intelligent behavior (“AI engineering”). This “reverse-engineering” approach has produced more accurate models of vision. Specifically, a family of deep artificial neural network (ANN) architectures arose from biology’s neural network for object vision — the ventral visual stream. Engineering advances applied to this ANN family produced specific ANNs whose internal in silico “neurons” are surprisingly accurate models of individual ventral stream neurons and that now underlie artificial vision technologies. We and others have recently demonstrated a new use for these models in brain science — their ability to design patterns of light energy (images) on the retina that control neuronal activity deep in the brain. The reverse-engineering iteration loop — respectable ANN models, to new ventral stream data, to even better ANN models — is accelerating. My talk will discuss this loop: experimental benchmarks for in silico ventral streams, key deviations from the biological ventral stream revealed by those benchmarks, and newer in silico ventral streams that partly close those differences. Recent experimental benchmarks argue that automatically-evoked recurrent processing is critically important even to the first 300 msec of visual processing, implying that conceptually simpler, feedforward-only ANN models are no longer tenable as accurate in silico ventral streams. Our broader aim is to nurture and incentivize next-generation models of the ventral stream via a community software platform termed “Brain-Score”, with the goal of producing progress that individual research groups may be unable to achieve.

Visual and non-visual skill acquisition – success and failure

Merav Ahissar1; 1Psychology Department, Social Sciences & ELSC, Hebrew University, Israel

Acquiring expert skills requires years of experience – whether the skills are visual (e.g. face identification), motor (playing tennis), or cognitive (mastering chess). In 1977, Shiffrin & Schneider proposed an influential stimulus-driven, bottom-up theory of expert automaticity, involving the mapping of stimuli to consistent responses. Integrating the many studies since, I propose a general, top-down theory of skill acquisition. Novice performance is based on the high-level multiple-demand (Duncan, 2010) fronto-parietal system, and with practice, specific experiences are gradually represented in lower-level, domain-specific temporal regions. This gradual process of learning-induced reverse hierarchies is enabled by the detection and integration of task-relevant regularities. Top-down driven learning allows the formation of task-relevant mappings and representations. These in turn form a space that affords task-consistent interpolations (e.g. representing letters in a manner crucial for letter identification rather than for visual similarity). These dynamics characterize successful skill acquisition. Some populations, however, have reduced sensitivity to task-related regularities, hindering related skill acquisition and preventing specific expertise even after massive training. I propose that skill-acquisition failure, perceptual as well as cognitive, reflects specific difficulties in detecting and integrating task-relevant regularities, impeding the formation of temporal-area expertise. Such is the case for individuals with dyslexia (reduced retention of temporal regularities; Jaffe-Dax et al., 2017), who fail to form an expert visual word-form area, and for individuals with autism (who integrate regularities too slowly for online updating; Lieder et al., 2019). Based on this general conceptualization, I further propose that this systematic impediment.

2021 Symposia

Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions. More…

Wait for it: 20 years of temporal orienting

Organizers: Nir Shalev1,2,3, Anna Christina (Kia) Nobre1,2,3; 1Department of Experimental Psychology, University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford, 3Oxford Centre for Human Brain Activity, University of Oxford

Time is an essential dimension framing our behaviour. In considering adaptive behaviour in dynamic environments, it is essential to consider how our psychological and neural systems pick up on temporal regularities to prepare for events unfolding over time. The last two decades have witnessed a renaissance of interest in understanding how we orient attention in time to anticipate relevant moments. New experimental approaches have proliferated and demonstrated how we derive and utilise recurring temporal rhythms, associations, probabilities, and sequences to enhance perception. We bring together researchers from across the globe exploring the fourth dimension of selective attention with complementary approaches. More…

What we learn about the visual system by studying non-human primates: Past, present and future

Organizers: Rich Krauzlis1, Michele Basso2; 1National Eye Institute, 2Brain Research Institute, UCLA

Non-human primates (NHPs) are the premier animal model for understanding the brain circuits and neuronal properties that accomplish vision. This symposium will take a “look back” at what we have learned about vision over the past 20 years by studying NHPs, and also “look forward” to the emerging opportunities provided by new techniques and approaches. The 20th anniversary of VSS is the ideal occasion to present this overview of NHP research to the general VSS membership, with the broader goal of promoting increased dialogue and collaboration between NHP and non-NHP vision researchers. More…

What has the past 20 years of neuroimaging taught us about human vision and where do we go from here?

Organizers: Susan Wardle1, Chris Baker1; 1National Institutes of Health

Over the past 20 years, neuroimaging methods have become increasingly popular for studying the neural mechanisms of vision in the human brain. To celebrate 20 years of VSS this symposium will focus on the contribution that brain imaging techniques have made to our field of vision science. The aim is to provide both a historical context and an overview of current trends for the role of neuroimaging in vision science. This will lead to informed discussion about what future directions will prove most fruitful for answering fundamental questions in vision science. More…

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem

Interactions of bottom-up and top-down mechanisms in visual perception are heatedly debated to this day. The aim of the proposed symposium is to review the history, progress, and prospects of our understanding of the roles of feedforward and recurrent processing streams. Where and how does top-down influence kick in? Is it off-line, as suggested by some deep-learning networks? is it an essential aspect governing bottom-up flow at every stage, as in predictive processing? We shall critically consider the continued endurance of these models, their meshing with current state-of-the-art theories and accumulating evidence, and, most importantly, the outlook for future understanding. More…

What’s new in visual development?

Organizers: Oliver Braddick1, Janette Atkinson2; 1University of Oxford, 2University College London

Since 2000, visual developmental science has advanced beyond defining how and when basic visual functions emerge during childhood. Advances in structural MRI, fMRI and near-infrared spectroscopy have identified localised visual brain networks even in early months of life, including networks identifying objects and faces. Newly refined eye tracking has examined how oculomotor function relates to the effects of visual experience underlying strabismus and amblyopia. New evidence has allowed us to model developing visuocognitive processes such as decision-making and attention. This symposium illustrates how such advances, ideas and challenges enhance understanding of visual development, including infants and children with developmental disorders. More…

2020 Symposia

No Symposia were presented at the V-VSS 2020 meeting.

2021 Conversations on Open Science

Friday, May 21, 5:00 – 7:00 pm EDT

Organizer: VSS Student-Postdoc Advisory Committee
Moderator: Björn Jörges, York University
Speakers: Geoffrey Aguirre, Janine Bijsterbosch, Christopher Donkin, Alex Holcombe, and Russell A. Poldrack

Open Science has become an important part of the scientific landscape. Researchers are adopting open practices such as preregistration and registered reports, open access, and the use of open-source software; journals are making data and code sharing an increasingly desired or even required feature of research publications; and funders are increasingly evaluating applicants’ open science track records along with their scientific proposals. It is therefore more important than ever for all scientists, and particularly for early-career researchers, to be able to navigate the Open Science space. For this reason, the Student-Postdoc Advisory Committee has organized Conversations on Open Science as a means to introduce the VSS community to the basics of Open Science and some current debates.

Conversations on Open Science will start with a short overview of the most important open practices. The speakers will then delve deeper into two topics: preregistration, and code and data sharing. We have invited two speakers for each topic: one argues in favor, while the other argues against, provides nuance, or points out limitations. Both parties will first explain their respective perspectives, followed by a joint presentation in which some synthesis or common ground will be reached.

Geoffrey Aguirre

University of Pennsylvania

Geoffrey Aguirre is an Associate Professor of Neurology at the University of Pennsylvania. He has studied the human visual system using functional MRI for nearly twenty-five years, often combining brain imaging with complementary measures of perception and retinal structure. During his career he has contributed to the analytic and inferential foundation of neuroimaging studies. In recent years he has worked to adopt and advocate for open-science tools, principally as a means to improve his own research.

Janine Bijsterbosch

Washington University School of Medicine

Janine Bijsterbosch has worked in brain imaging since 2007. She is currently Assistant Professor in the Computational Imaging section of the Department of Radiology at Washington University in St Louis. The Personomics Lab headed by Dr. Bijsterbosch aims to understand how brain connectivity patterns differ from one person to the next, by studying the “personalized connectome”. Using big data resources such as the Human Connectome Project and UK Biobank, the Personomics Lab adopts cutting edge analysis techniques to study functional connectivity networks and their role in behavior, performance, mental health, disease risk, treatment response, and physiology. Dr. Bijsterbosch is Chair-Elect of the Open Science special interest group as part of the Organization for Human Brain Mapping. In addition, Dr. Bijsterbosch wrote a textbook on functional connectivity analyses, which was published by Oxford University Press in 2017.

Christopher Donkin

UNSW Sydney

Christopher Donkin is a cognitive psychologist at UNSW Sydney. His work tends to rely on a mix of computational modelling and experiments. He is interested in decision-making, memory, models, and metascience. While he agrees that open science is of utmost importance, many long conversations with Aba Szollosi about how knowledge is created have led him to disagree with the purported benefits of preregistration. Though the content of the talk will be specific to preregistration, the background knowledge underlying these arguments is laid out more carefully here.

Alex Holcombe

University of Sydney

Alex Holcombe studies how humans perceive and process visual signals over time, in domains such as motion, position perception, and attentional tracking. Outside of the lab, he has been active in various open science initiatives. He is an associate editor at the journal Meta-psychology; he co-founded the Registered Replication Report article format at Perspectives on Psychological Science in 2014, co-founded the Association for Psychological Science journal Advances in Methods and Practices in Psychological Science in 2018, and served on the founding advisory boards of the preprint server PsyArXiv and the journal PLOS ONE.

Russell A. Poldrack

Stanford University

Russell A. Poldrack is the Albert Ray Lang Professor in the Department of Psychology and Professor (by courtesy) of Computer Science at Stanford University, and Director of the Stanford Center for Reproducible Neuroscience. His research uses neuroimaging to understand the brain systems underlying decision making and executive function. His lab is also engaged in the development of neuroinformatics tools to help improve the reproducibility and transparency of neuroscience, including the OpenNeuro.org and NeuroVault.org data sharing projects and the Cognitive Atlas ontology.

Björn Jörges

York University

Björn Jörges studies the role of prediction in visual perception, as well as visuo-vestibular integration in the perception of object motion and self-motion. Beyond these topics, he also aspires to make science better, i.e., more diverse, more transparent, and more robust. After finishing his PhD in Barcelona on the role of a strong Earth-gravity prior in perception and action, he started a postdoc in the Multisensory Integration Lab at York University, where he currently investigates how the perception of self-motion changes in response to microgravity.

Vision Sciences Society