Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University
Presenters: Susana Marcos, Brian Vohnsen, Ann Elsner, Juliette E. McGregor

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions.

Presentations

Foveal aberrations and the impact on vision

Susana Marcos1; 1Institute of Optics, CSIC

Optical aberrations degrade the quality of images projected on the retina. The magnitude and orientation of the optical aberrations vary dramatically across individuals; they also change with processes such as accommodation and aging, and with corneal and lens disease and surgery. Certain corrections, such as multifocal lenses for presbyopia, deliberately modify the aberration pattern to create simultaneous vision or extended depth-of-focus. Ocular aberrometers have made their way into clinical practice. In addition, quantitative 3-D anterior segment imaging has made it possible to quantify the morphology and alignment of the cornea and lens, to link ocular geometry and aberrations through custom eye models, and to shed light on the factors contributing to optical degradation. However, perceived vision is affected by the eye’s aberrations in more ways than purely optical predictions suggest, as the eye appears to be adapted to the magnitude and orientation of its own optical blur. Studies using adaptive optics not only reveal the impact of manipulating the optical aberrations on vision, but also show that the neural code for blur is driven by the subject’s own aberrations.
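
To make the optics concrete, here is a minimal Fourier-optics sketch (my illustration, not the speaker's code; the aberration type and magnitude are arbitrary assumptions) of how a measured wavefront degrades the retinal point-spread function:

```python
# From wavefront aberration to retinal image quality: the point-spread
# function (PSF) is the squared magnitude of the Fourier transform of the
# pupil function. Aberration choice and magnitude are arbitrary assumptions.
import numpy as np

wavelength_um = 0.55                      # green light
n = 256
x = np.linspace(-1, 1, n)                 # pupil coords, normalized to radius
xx, yy = np.meshgrid(x, x)
rho, theta = np.hypot(xx, yy), np.arctan2(yy, xx)
aperture = rho <= 1.0

# 0.1 um RMS of vertical coma (unit-RMS Zernike Z3^-1), for illustration.
wavefront_um = 0.1 * np.sqrt(8) * (3 * rho**3 - 2 * rho) * np.sin(theta)

pupil = aperture * np.exp(1j * 2 * np.pi * wavefront_um / wavelength_um)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()

# Compare peak intensity with the diffraction-limited PSF (Strehl ratio).
psf_dl = np.abs(np.fft.fftshift(np.fft.fft2(aperture.astype(complex))))**2
psf_dl /= psf_dl.sum()
print(f"Strehl ratio ~ {psf.max() / psf_dl.max():.2f}")
```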

The integrated Stiles-Crawford effect: understanding the role of pupil size and outer-segment length in foveal vision

Brian Vohnsen1; 1Advanced Optical Imaging Group, School of Physics, University College Dublin, Ireland

The Stiles-Crawford effect of the first kind (SCE-I) describes a psychophysical change in perceived brightness related to the angle of incidence of a ray of light onto the retina. The effect is commonly explained as angular-dependent waveguiding by foveal cones, yet the SCE-I is largely absent from similarly shaped rods, suggesting that a mechanism other than waveguiding is at play. To examine this, we have devised a flickering-pupil method that directly measures the integrated SCE-I for normal pupil sizes in normal vision, rather than relying on mathematical integration of the standard SCE-I function as determined with Maxwellian light. Our results show that the measured effective visibility for normal foveal vision is related to visual pigment density in the three-dimensional retina rather than to waveguiding. We confirm the experimental findings with a numerical absorption model based on the Beer–Lambert law for the visual pigments.
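
A toy version of such an absorption model (my illustration; the geometry and pigment values are assumed, not the study's parameters) shows how Beer–Lambert absorption in a pigment-filled outer segment falls off with the ray's angle of incidence:

```python
# Toy Beer-Lambert absorption model: the fraction of light absorbed in a
# cone outer segment falls as the entry angle grows, because an oblique ray
# traverses a shorter path through the pigment. All values are assumptions.
import numpy as np

outer_segment_len_um = 35.0     # foveal cone outer-segment length (assumed)
outer_segment_diam_um = 2.0     # outer-segment diameter (assumed)
decadic_density_per_um = 0.014  # axial pigment density per micrometer (assumed)

def absorbed_fraction(angle_deg):
    """Beer-Lambert absorption for a ray crossing a cylindrical outer segment."""
    theta = np.radians(angle_deg)
    # Path length inside the cylinder: limited by its length along the axis
    # and by its diameter for oblique rays.
    axial = outer_segment_len_um / np.cos(theta)
    lateral = outer_segment_diam_um / max(np.sin(theta), 1e-9)
    path_um = min(axial, lateral)
    return 1.0 - 10.0 ** (-decadic_density_per_um * path_um)

for angle in [0, 2, 5, 10]:     # retinal incidence angles, in degrees
    print(f"{angle:>2} deg: absorbed fraction = {absorbed_fraction(angle):.3f}")
```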

Structure of cones and microvasculature in healthy and diseased eyes

Ann Elsner1; 1Indiana University School of Optometry

There are large differences in the distribution of cones in the living human retina, with density varying more across individuals at the fovea than at greater eccentricities. The size and shape of the foveal avascular zone also vary across individuals, and distances between capillaries can be greatly enlarged in disease. While diseases such as age-related macular degeneration and diabetes greatly impact both cones and retinal vessels, some cones can survive for decades, although their distributions become more irregular. Surprisingly, in some diseased eyes, cone density at retinal locations outside those most compromised can exceed cone density in control subjects.

Imaging of calcium indicators in retinal ganglion cells for understanding foveal function

Juliette E. McGregor1; 1Center for Visual Science, University of Rochester

The fovea mediates much of our conscious visual perception, but it is a delicate retinal structure that is difficult to investigate physiologically with traditional approaches. By expressing the calcium indicator protein GCaMP6s in retinal ganglion cells (RGCs) of the living primate, we can optically read out foveal RGC activity in response to visual stimuli presented to the intact eye. Paired with adaptive optics ophthalmoscopy, this makes it possible both to present highly stabilized visual stimuli to the fovea and to read out retinal activity at cellular scale in the living animal. The approach has allowed us to map the functional architecture of the fovea at the retinal level and to classify RGCs in vivo based on their responses to chromatic stimuli. Recently, we have used this platform as a pre-clinical testbed to demonstrate successful restoration of foveal RGC responses following optogenetic therapy.

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem
Presenters: Jeremy M Wolfe, Shaul Hochstein, Catherine Tallon-Baudry, James DiCarlo, Merav Ahissar

Forty years ago, Anne Treisman presented Feature Integration Theory (FIT; Treisman & Gelade, 1980). FIT proposed a parallel, preattentive first stage and a serial second stage controlled by visual selective attention, so that search tasks could be divided into those performed in parallel by the first stage and those requiring serial processing and further “binding” into an object file (Kahneman, Treisman, & Gibbs, 1992). Ten years later, Jeremy Wolfe expanded FIT with Guided Search Theory (GST), suggesting that information from the first stage could guide selective attention in the second (Wolfe, Cave & Franzel, 1989; Wolfe, 1994). His lab’s recent visual search studies have refined this theory (Wolfe, 2007), including studies of factors governing search (Wolfe & Horowitz, 2017), hybrid search (Wolfe, 2012; Nordfang & Wolfe, 2018), and scene comprehension capacity (Wick … Wolfe, 2019).

Another ten years later, Shaul Hochstein and Merav Ahissar proposed Reverse Hierarchy Theory (RHT; Hochstein & Ahissar, 2002), turning FIT on its head by suggesting that early conscious gist perception, like early generalized perceptual learning (Ahissar & Hochstein, 1997, 2004), reflects high-level cortical representations. Later feedback, returning to lower levels, allows conscious perception of scene details, which are already represented in earlier areas; feedback also enables detail-specific learning. Follow-up work found that the primacy of top-level gist perception leads to counter-intuitive results: faces pop out of heterogeneous object displays (Hershler & Hochstein, 2005), individuals with neglect syndrome are better at global tasks (Pavlovskaya … Hochstein, 2015), and gist perception includes ensemble statistics (Khayat & Hochstein, 2018, 2019; Hochstein et al., 2018). Ahissar’s lab mapped RHT dynamics onto the auditory system (Ahissar, 2007; Ahissar et al., 2008), in both perception and successful or failed (in developmental disabilities) skill acquisition (Lieder … Ahissar, 2019).

James DiCarlo has been pivotal in confronting feedforward-only versus recurrency-integrating network models of extrastriate cortex against animal and human behavior (DiCarlo, Zoccolan & Rust, 2012; Yamins … DiCarlo, 2014; Yamins & DiCarlo, 2016). His large-scale electrophysiological recordings from the ventral stream of behaving primates performing challenging object-recognition tasks bear directly on whether recurrent connections are critical or superfluous (Kar … DiCarlo, 2019). He recently combined deep artificial neural network modeling, synthesized image presentation, and electrophysiological recording to control the neural activity of specific neurons and circuits (Bashivan, Kar & DiCarlo, 2019).

Catherine Tallon-Baudry uses MEG/EEG recordings to study neural correlates of conscious perception (Tallon-Baudry, 2012). She has studied the roles of oscillatory activity in the human brain in object representation and visual search tasks (Tallon-Baudry, 2009), analyzing effects of attention and awareness (Wyart & Tallon-Baudry, 2009). She has directly tested, with behavior and MEG recordings, implications of hierarchy and reverse hierarchy theories, including the claim that global information processing is first and mandatory in conscious perception (Campana & Tallon-Baudry, 2013; Campana … Tallon-Baudry, 2016).

In summary, bottom-up versus top-down processing theories bear on the essence of perception: the dichotomy of rapid vision-at-a-glance versus slower vision-with-scrutiny, the roles of attention, the hierarchy of visual representation levels, the roles of feedback connections, the sites and mechanisms of various visual phenomena, and the sources of perceptual/cognitive deficits (neglect, dyslexia, ASD). Speakers at the symposium will address these issues from both a historical and a forward-looking perspective.

Presentations

Is Guided Search 6.0 compatible with Reverse Hierarchy Theory?

Jeremy M Wolfe1; 1Harvard Medical School and Visual Attention Lab, Brigham & Women’s Hospital

It has been 30 years since the first version of the Guided Search (GS) model of visual search was published. As new data about search accumulated, GS needed modification; the latest version is GS6. GS argues that visual processing is capacity-limited and that attention is needed to “bind” features together into recognizable objects. The core idea of GS is that the deployment of attention is not random but is “guided” from object to object. For example, in a search for your black shoe, attention would be guided toward black items. Earlier versions of GS focused on top-down (user-driven) and bottom-up (salience) guidance by basic features like color. Subsequent research adds guidance by the history of search (e.g. priming), the value of the target, and, most importantly, scene structure and meaning. Your search for the shoe will be guided by your understanding of the scene, including some sophisticated information about scene structure and meaning that is available “preattentively”. In acknowledging the initial, preattentive availability of something more than simple features, GS6 moves closer to ideas that are central to the Reverse Hierarchy Theory of Hochstein and Ahissar. As is so often true in our field, the answer is not Theory A or Theory B, even when they seem diametrically opposed; the next theory tends to borrow and synthesize good ideas from both predecessors.
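
The guidance idea lends itself to a compact computational statement. Below is a minimal sketch (my illustration, not Wolfe's implementation; the feature maps and weights are invented) of a priority map that combines bottom-up salience with top-down feature guidance:

```python
# Toy Guided Search priority map: attention visits locations in order of a
# weighted sum of bottom-up salience and top-down feature guidance.
# All maps and weights here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
h, w = 8, 8
salience = rng.random((h, w))    # bottom-up: local feature contrast
blackness = rng.random((h, w))   # top-down relevant feature ("black")
shoe_like = rng.random((h, w))   # top-down relevant feature ("shoe-shaped")

# Searching for a black shoe: weight target features above raw salience.
priority = 1.0 * salience + 2.0 * blackness + 2.0 * shoe_like

# Deploy attention from highest to lowest priority until the target is found.
flat_order = np.argsort(priority, axis=None)[::-1]
rows, cols = np.unravel_index(flat_order, priority.shape)
visits = [(int(r), int(c)) for r, c in zip(rows[:3], cols[:3])]
print("first three locations visited:", visits)
```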

Gist perception precedes awareness of details in various tasks and populations

Shaul Hochstein1; 1Life Sciences, Hebrew University, Jerusalem

Reverse Hierarchy Theory makes several dramatic propositions regarding conscious visual perception. These include the suggestion that, although the visual system receives scene details and builds from them representations of the scene’s objects, layout, and structure, the first conscious percept is the gist of the scene – the result of implicit bottom-up processing. Only later does conscious perception attain scene details, by returning to lower-level cortical representations. Recent studies in our lab analyzed phenomena whereby participants receive and perceive the gist of a scene before, and without any need for, consciously knowing the details from which it is constructed. One striking conclusion is that “pop-out” is an early high-level effect and is therefore not restricted to basic element features: faces pop out from heterogeneous objects, and participants are unaware of the rejected objects. Our recent studies of ensemble-statistics perception find that computing a set’s mean does not require knowledge of its individual members. This mathematically improbable computation is both useful and natural for neural networks; I shall discuss just how and why set means are computed without explicit representation of individuals. Interestingly, our studies of neglect patients find that their deficit concerns tasks requiring focused attention to local details, not tasks requiring only global perception: neglect patients are quite good at pop-out detection and include left-side elements in ensemble perception.
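
One way to see why ensemble averaging is useful and natural for a noisy neural network: pooling noisy item responses yields a mean estimate more precise than any individual estimate. A minimal sketch (my illustration, with invented numbers):

```python
# Pooling noisy item encodings: the set mean is recovered more precisely
# than any single item, with no need to identify the individuals.
# Numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_sizes = rng.uniform(1.0, 2.0, size=8)       # 8 items in the set
noisy = true_sizes + rng.normal(0, 0.3, size=8)  # noisy per-item encoding

print(f"mean per-item error: {np.abs(noisy - true_sizes).mean():.3f}")
print(f"error of set mean  : {abs(noisy.mean() - true_sizes.mean()):.3f}")
```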

From global to local in conscious vision: behavior & MEG

Catherine Tallon-Baudry1; 1CNRS Cognitive Neuroscience, École Normale Supérieure, Paris

The reverse hierarchy theory makes strong predictions about conscious vision: local details would be processed in early visual areas before being rapidly and automatically combined into global information in higher-order areas, where conscious percepts would initially emerge. The theory thus predicts that consciousness arises initially in higher-order visual areas, independently of attention and task, and that additional, optional attentional processes operating from top to bottom are needed to retrieve local details. We designed novel textured stimuli that, as opposed to Navon letters, are truly hierarchical. Taking advantage of both behavioral measures and the decoding of MEG data, we show that global information is consciously perceived faster than local details, and that global information is computed during early visual processing regardless of task demands. These results support the idea that global dominance in conscious percepts originates in the hierarchical organization of the visual system. Implications for the nature of conscious visual experience and its underlying neural mechanisms will be discussed.
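
For readers unfamiliar with the method class, here is a generic time-resolved decoding sketch (my illustration, not the authors' pipeline; the data are random placeholders):

```python
# Time-resolved decoding sketch: train a classifier at each time point to
# decode a stimulus attribute (e.g. a global shape label) from MEG sensor
# patterns. Random placeholder data stand in for real recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 50
meg = rng.normal(size=(n_trials, n_sensors, n_times))
labels = rng.integers(0, 2, n_trials)  # global attribute, two classes

# Decoding accuracy as a function of time; comparing the curves for global
# vs. local attributes reveals which becomes decodable first.
accuracy = [
    cross_val_score(LogisticRegression(max_iter=1000),
                    meg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
]
print(f"peak decoding accuracy: {max(accuracy):.2f}")
```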

Next-generation models of recurrent computations in the ventral visual stream

James DiCarlo1; 1Neuroscience, McGovern Inst. & Brain & Cognitive Sci., MIT

Understanding the mechanisms underlying visual intelligence requires the combined efforts of brain and cognitive scientists and of forward engineering that emulates intelligent behavior (“AI engineering”). This “reverse-engineering” approach has produced progressively more accurate models of vision. Specifically, a family of deep artificial neural-network (ANN) architectures arose from biology’s neural network for object vision – the ventral visual stream. Engineering advances applied to this ANN family produced specific ANNs whose internal in silico “neurons” are surprisingly accurate models of individual ventral-stream neurons, and which now underlie artificial vision technologies. We and others have recently demonstrated a new use for these models in brain science: their ability to design patterns of light energy (images) on the retina that control neuronal activity deep in the brain. The reverse-engineering iteration loop – from respectable ANN models, to new ventral-stream data, to even better ANN models – is accelerating. My talk will discuss this loop: experimental benchmarks for in silico ventral streams, key deviations from the biological ventral stream revealed by those benchmarks, and newer in silico ventral streams that partly close those gaps. Recent experimental benchmarks argue that automatically evoked recurrent processing is critically important even to the first 300 ms of visual processing, implying that conceptually simpler, feedforward-only ANN models are no longer tenable as accurate in silico ventral streams. Our broader aim is to nurture and incentivize next-generation models of the ventral stream via a community software platform termed “Brain-Score”, with the goal of producing progress that individual research groups may be unable to achieve.
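
To make the notion of an experimental benchmark concrete, here is a simplified sketch of a Brain-Score-style neural predictivity measure (my illustration with synthetic data, not the platform's actual code): fit a linear map from model features to recorded responses, then score held-out predictions.

```python
# Simplified "neural predictivity" benchmark: regress recorded neural
# responses on ANN features and score held-out predictions by Pearson r.
# Synthetic data stand in for real recordings and model activations.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_sites = 300, 512, 40
features = rng.normal(size=(n_images, n_features))  # ANN activations
neural = features[:, :n_sites] + rng.normal(0, 1.0, (n_images, n_sites))

X_tr, X_te, y_tr, y_te = train_test_split(features, neural, random_state=0)
pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)

# The model's score: median correlation across recorded sites.
r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_sites)]
print(f"neural predictivity (median r): {np.median(r):.2f}")
```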

Visual and non-visual skill acquisition – success and failure

Merav Ahissar1; 1Psychology Department, Social Sciences & ELSC, Hebrew University, Israel

Acquiring expert skills requires years of experience, whether the skills are visual (e.g. face identification), motor (playing tennis), or cognitive (mastering chess). In 1977, Shiffrin and Schneider proposed an influential stimulus-driven, bottom-up theory of expertise automaticity, based on mapping stimuli to consistent responses. Integrating many studies since, I propose a general top-down theory of skill acquisition: novice performance is based on the high-level, multiple-demand (Duncan, 2010) fronto-parietal system, and with practice, specific experiences are gradually represented in lower-level, domain-specific temporal regions. This gradual process of learning-induced reverse hierarchies is enabled by the detection and integration of task-relevant regularities. Top-down-driven learning allows the formation of task-relevant mappings and representations, which in turn form a space that affords task-consistent interpolations (e.g. representing letters in a manner crucial for letter identification rather than by visual similarity). These dynamics characterize successful skill acquisition. Some populations, however, have reduced sensitivity to task-related regularities, hindering their related skill acquisition and preventing specific expertise even after massive training. I propose that skill-acquisition failure, perceptual as well as cognitive, reflects specific difficulties in detecting and integrating task-relevant regularities, impeding the formation of temporal-area expertise. Such is the case for individuals with dyslexia (reduced retention of temporal regularities; Jaffe-Dax et al., 2017), who fail to form an expert visual word-form area, and for individuals with autism (who integrate regularities too slowly for online updating; Lieder et al., 2019). Based on this general conceptualization, I further propose that this systematic impediment generalizes across perceptual and cognitive skill domains.

2021 Symposia

Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions.

Wait for it: 20 years of temporal orienting

Organizers: Nir Shalev1,2,3, Anna Christina (Kia) Nobre1,2,3; 1Department of Experimental Psychology, University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford, 3Oxford Centre for Human Brain Activity, University of Oxford

Time is an essential dimension framing our behaviour. Adaptive behaviour in dynamic environments depends on how our psychological and neural systems pick up on temporal regularities to prepare for events unfolding over time. The last two decades have witnessed a renaissance of interest in understanding how we orient attention in time to anticipate relevant moments. New experimental approaches have proliferated and demonstrated how we derive and utilise recurring temporal rhythms, associations, probabilities, and sequences to enhance perception. We bring together researchers from across the globe exploring the fourth dimension of selective attention with complementary approaches.

What we learn about the visual system by studying non-human primates: Past, present and future

Organizers: Rich Krauzlis1, Michele Basso2; 1National Eye Institute, 2Brain Research Institute, UCLA

Non-human primates (NHPs) are the premier animal model for understanding the brain circuits and neuronal properties that accomplish vision. This symposium will take a “look back” at what we have learned about vision over the past 20 years by studying NHPs, and also “look forward” to the emerging opportunities provided by new techniques and approaches. The 20th anniversary of VSS is the ideal occasion to present this overview of NHP research to the general VSS membership, with the broader goal of promoting increased dialogue and collaboration between NHP and non-NHP vision researchers.

What has the past 20 years of neuroimaging taught us about human vision and where do we go from here?

Organizers: Susan Wardle1, Chris Baker1; 1National Institutes of Health

Over the past 20 years, neuroimaging methods have become increasingly popular for studying the neural mechanisms of vision in the human brain. To celebrate 20 years of VSS this symposium will focus on the contribution that brain imaging techniques have made to our field of vision science. The aim is to provide both a historical context and an overview of current trends for the role of neuroimaging in vision science. This will lead to informed discussion about what future directions will prove most fruitful for answering fundamental questions in vision science.

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem

Interactions of bottom-up and top-down mechanisms in visual perception are heatedly debated to this day. The aim of this symposium is to review the history, progress, and prospects of our understanding of the roles of feedforward and recurrent processing streams. Where and how does top-down influence kick in? Is it off-line, as suggested by some deep-learning networks? Is it an essential aspect governing bottom-up flow at every stage, as in predictive processing? We shall critically consider the continued endurance of these models, their meshing with current state-of-the-art theories and accumulating evidence, and, most importantly, the outlook for future understanding.

What’s new in visual development?

Organizers: Oliver Braddick1, Janette Atkinson2; 1University of Oxford, 2University College London

Since 2000, visual developmental science has advanced beyond defining how and when basic visual functions emerge during childhood. Advances in structural MRI, fMRI and near-infrared spectroscopy have identified localised visual brain networks even in early months of life, including networks identifying objects and faces. Newly refined eye tracking has examined how oculomotor function relates to the effects of visual experience underlying strabismus and amblyopia. New evidence has allowed us to model developing visuocognitive processes such as decision-making and attention. This symposium illustrates how such advances, ideas and challenges enhance understanding of visual development, including infants and children with developmental disorders.

Future Meetings

VSS 2022 – May 13-18
St. Pete Beach, Florida

VSS 2023 – May 19-24
St. Pete Beach, Florida

2021 New Tools for Conducting Eye Tracking Research

Saturday, May 22, 2021, 12:00 – 12:30 PM EDT
Monday, May 24, 2021, 9:00 – 9:30 AM EDT

Organizer: Chase Anderson
Speaker: Chase Anderson, Eyeware

Until recently, eye tracking research has been limited by intrusive headgear and expensive sensors, restricting vision researchers’ ability to conduct studies at scale and within their budgets.

During this event, we’ll discuss how Eyeware has addressed these challenges with GazeSense. This software uses consumer-grade 3D cameras to deliver robust eye tracking data, which can be exposed live via an API or exported in CSV format for later analysis. By using both depth and RGB information, GazeSense can maintain reliable tracking over extended periods better than traditional 2D trackers.

We will also introduce Beam, which turns an iPhone into an eye tracking device. Beam takes advantage of the TrueDepth user-facing camera on any iPhone with Face ID. This development allows vision researchers to run eye tracking experiments remotely and at scale, with full access to the data.
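
As a rough illustration of the kind of offline analysis a CSV export enables, the sketch below loads exported gaze samples and summarizes dwell time by screen region. The file name, column names, and sampling rate are hypothetical placeholders, not Eyeware's documented export schema:

```python
# Offline analysis sketch -- file name, columns, and sampling rate are
# hypothetical placeholders, not Eyeware's documented export schema.
import pandas as pd

gaze = pd.read_csv("gaze_export.csv")  # assumed columns: gaze_x, gaze_y in [0, 1]

# Split the screen into left/right halves and tally samples per region.
gaze["region"] = (gaze["gaze_x"] > 0.5).map({True: "right", False: "left"})

# Approximate dwell time as sample count times the sampling period.
sampling_period_s = 1.0 / 30.0  # assumed 30 Hz capture rate
print(gaze.groupby("region").size() * sampling_period_s)
```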

To learn more about our mission, visit Eyeware.tech or contact us.

We hope to see you at the satellite event!

2021 An introduction to TELLab – The Experiential Learning LABoratory, a web-based platform for educators

Saturday, May 22, 2021, 8:00 – 9:00 AM EDT

Organizers: Jeff Mulligan, Independent contractor to UC Berkeley; Jeremy Wilmer, Wellesley College
Speakers: Ken Nakayama, Jeremy Wilmer, Justin Junge, Jeff Mulligan, Sarah Kerns

This satellite event will provide a tutorial overview of The Experiential Learning Lab (TELLab), a web-based system that allows students to create and run their own psychology experiments, either by copying and modifying one of the many existing experiments or by creating a new one entirely from scratch. The TELLab project was begun a number of years ago by Ken Nakayama and others at Harvard University, and it continues today under Ken’s leadership from his new position as adjunct professor at UC Berkeley. To date, TELLab has been used by around 20 instructors and 5000 students.

After a short introduction, TELLab gurus will demonstrate the process of creating and running an experiment, exporting the data, and analyzing the results. Complete details can be found on TELLab’s satellite information website: http://vss.tellab.org. Potential attendees are encouraged to visit http://lab.tellab.org beforehand to create their own account and explore the system on their own.

Hope to see you there.  Happy experimenting!

2021 Teaching Vision

Monday, May 24, 2021, 4:15 – 6:15 PM EDT
Wednesday, May 26, 2021, 8:30 – 10:30 AM EDT

Organizer: Dirk Bernhardt-Walther, University of Toronto
Speakers: Jessica Witt, Colorado State University; Benjamin Balas, North Dakota State University; Michelle Greene, Bates College; Michael Cohen, Amherst College; Dirk Bernhardt-Walther, University of Toronto

The Covid-19 pandemic has catapulted instructors at universities and colleges into a new reality of online teaching. They have had to adapt and innovate rapidly, adjusting proven classroom-based courses to physically distant learning, with challenges to material delivery, student engagement, and student assessment. In this satellite event we will provide a forum for instructors teaching vision-related courses to exchange ideas, best practices, and materials. Experienced instructors will offer advice on practical demonstrations that students can perform at home, student engagement in an online setting, open pedagogies in the online/hybrid realm, and incorporating online laboratory work into vision-related courses. We will discuss ideas for bridging the gap between demonstrations and structured observations, and for using quantitative models for problem-solving in vision science courses. We invite the VSS community to participate in an open panel discussion to share their own experiences with teaching during the pandemic.

Jessica Witt, Colorado State University
Teaching a Sensation & Perception Lab On-Line

Benjamin Balas, North Dakota State University
Vision science on paper: Analog demos to support problem-solving in Sensation & Perception

Michelle Greene, Bates College
Disposing with the disposable assignment: the power of open pedagogies for transformational learning

Michael Cohen, Amherst College
Strategies for assessing student learning

Dirk Bernhardt-Walther, University of Toronto
Forging an active student community in a large, asynchronous course
