VSS 2022 Travel Award Application

Hotel FAQ

How do I book a room at the TradeWinds?
To book a room at the TradeWinds, follow the link on the Accommodations page or call (800) 808-9833. Be sure to enter or state group code “VSS22”.

Does the TradeWinds require a deposit to book a reservation?
Yes, a one-night room and tax guarantee is required to secure your room.

Is the hotel deposit refundable if I have to cancel my room reservation?
Yes, your hotel deposit is fully refundable until 72 hours prior to arrival.

Is there a waiting list if the hotel is sold out?
Yes. VSS maintains a waiting list, and if rooms become available, people on the list will be contacted in the order they were added. Contact VSS to be placed on the waiting list.

Is there a resort fee at the TradeWinds?
The TradeWinds typically charges a $45 resort fee per night, but this fee is waived for VSS attendees booking through the VSS web link. To see what is included, see TradeWinds’ Resort Amenity Fee.

What if I find a better rate for the TradeWinds on a 3rd party site?
VSS has done its due diligence to negotiate the lowest possible group rate at the TradeWinds. If you find a lower price for the meeting dates, please contact VSS. Keep in mind that if you book a room through a wholesaler such as hotels.com, the TradeWinds will NOT waive your resort fee.

What are the contractual obligations with the TradeWinds?
VSS is contractually bound to fill a certain number of guest rooms (this is called a room block) during the annual meeting. By guaranteeing these rooms, VSS receives complimentary meeting space and reduced food and beverage pricing. This ultimately helps keep registration rates down.

What happens if VSS does not meet the contractual obligations?
VSS will be charged an attrition fee if we do not meet 85% of our room block and if unused rooms remain unsold prior to the meeting.
Groups like ours often book three or more years in advance to secure enough hotel rooms for meeting attendees. Attrition fees are compensation to the hotel for the rooms that might have been sold had they not been held in our room block inventory. Attrition fees are a standard practice in the United States.

Meeting Attendance Survey

To all members of VSS,

Plans are well underway for VSS 2022! We need some input from you to make the meeting a success for all.

The link below will lead you to a very brief survey asking about your plans for attendance at VSS 2022. Even if you are uncertain, please let us know about your current plans.

Your responses will be most useful if sent by Wednesday, November 24.

Take Survey

Thanks to all!
The VSS Board of Directors

John I. Yellott Travel Award for Vision Science

The John I. Yellott Travel Award for Vision Science will be given annually to a graduate student or postdoctoral researcher who will be attending the VSS conference to present research that provides new quantitative insights into the human visual system.

The award was created in 2022 to honor Jack Yellott’s legacy of innovative, quantitative, and rigorous research that spanned many areas of vision science. His work was known for its ingenuity, creativity and clear mathematical reasoning.

Jack served as founding chair of the Department of Cognitive Sciences at the University of California, Irvine. Throughout his career he was a mentor and close collaborator to many outstanding vision scientists. He was a visible and friendly presence at ARVO and VSS, always eager to visit with young investigators, to listen to them, to share his thoughts, and to offer support to the next generation of vision scientists. See more information about the life and work of John I. Yellott.

Application for the Yellott travel award will be made following the procedures established for other VSS travel awards. Those seeking a Yellott travel award will also be asked to indicate how their VSS presentation provides new quantitative insights into the human visual system and relates to the work of John Yellott.

This award was established by friends of John (Jack) Yellott.

Schedule

Applications Open: January 10, 2022
Deadline to Apply: January 24, 2022
Recipients Announced: February 15, 2022

2021 Satellite Events

An introduction to TELLab – The Experiential Learning LABoratory, a web-based platform for educators

An introduction to TELLab 2.0 – A new-and-improved version of The Experiential Learning LABoratory, a web-based platform for educators

Canadian Vision Science Social

Measuring and Maximizing Eye Tracking Data Quality with EyeLink

Mentoring Envisioned

New Tools for Conducting Eye Tracking Research

Performing Eye Tracking Studies in VR

phiVIS: Philosophy of Vision Science Workshop

Reunion: Visual Neuroscience From Spikes to Awareness

Run MATLAB/Psychtoolbox Experiments Online with Pack & Go

Teaching Vision

Virtual VPixx Hardware with the LabMaestro Simulator

Visibility: A Gathering of LGBTQ+ Vision Scientists and Friends

2021 Funding Workshops

US Funding Workshop

Saturday, May 22, 2021, 12:00 – 1:00 pm EDT

Moderator: Ruth Rosenholtz
Discussants: Joeanna Arthur, National Geospatial-Intelligence Agency; Todd Horowitz, National Cancer Institute; Michael Hout, National Science Foundation; and Cheri Wiggs, National Eye Institute
You have a great research idea, but you need money to make it happen. You need to write a grant. This workshop will address various funding mechanisms for vision research. Our panelists will discuss their organization’s interests and priorities, and give insight into the inner workings of their extramural research programs. There will be time for your questions.

Joeanna Arthur

National Geospatial-Intelligence Agency

Joeanna Arthur, Ph.D., is a Supervisory Research & Development Scientist and Senior Staff Scientist in the Predictive Analytics Research Group at the National Geospatial-Intelligence Agency (NGA), where she leads a transdisciplinary team of scientists advancing Geospatial Science and enhancing analytic tradecraft. She also serves as the agency’s Human Research Protection Official. Prior government assignments include Chief of Research (FBI/HIG), Lead Behavioral Scientist/Psychologist (DIA), and Program Manager and Operational Test & Evaluation Lead (NGA). Her past and current research areas span the fields of cognitive neuroscience, operational psychology, human-system integration, human performance optimization, intelligence interviewing, research ethics, and applied social science. She received her doctorate in Psychology/Cognitive Neuroscience from the George Washington University (Washington, DC) and completed a post-doctoral research fellowship in the Department of Otolaryngology-Head and Neck Surgery at the Johns Hopkins University School of Medicine (Baltimore, MD). Dr. Arthur is one of the Intelligence Community’s first recipients of the Presidential Early Career Award in Science and Engineering (PECASE 2012, White House Office of Science and Technology Policy).

Todd Horowitz

National Cancer Institute

Todd Horowitz, Ph.D., is a Program Director in the Behavioral Research Program’s (BRP) Basic Biobehavioral and Psychological Sciences Branch (BBPSB), located in the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute (NCI). Dr. Horowitz earned his doctorate in Cognitive Psychology at the University of California, Berkeley in 1995. Prior to joining NCI, he was Assistant Professor of Ophthalmology at Harvard Medical School and Associate Director of the Visual Attention Laboratory at Brigham and Women’s Hospital. He has published more than 70 peer-reviewed research papers in vision science and cognitive psychology. His research interests include attention, perception, medical image interpretation, cancer-related cognitive impairments, sleep, and circadian rhythms.

Michael Hout

National Science Foundation

Michael Hout, Ph.D., is a Program Director for Perception, Action, and Cognition in the Social, Behavioral, and Economic Sciences Directorate (in the Behavioral and Cognitive Sciences Division) of the National Science Foundation. He received his undergraduate degree from the University of Pittsburgh and his master’s and doctoral degrees from Arizona State University. He is a rotating Program Director on professional leave from New Mexico State University, where he runs a lab in the Psychology Department and co-directs an interdisciplinary virtual and augmented reality lab. Prior to joining the NSF, he was a conference organizer for the Object Perception, Attention, and Memory meeting and an Associate Editor at Attention, Perception, & Psychophysics. His research focuses primarily on visual cognition (including visual search, attention, and eye movements), spanning both basic theoretical research and applied scenarios such as professional medical/security screening and search and rescue.

Cheri Wiggs

National Eye Institute

Cheri Wiggs, Ph.D., serves as a Program Director at the National Eye Institute (of the National Institutes of Health). She oversees extramural funding through three programs — Perception & Psychophysics, Myopia & Refractive Errors, and Low Vision & Blindness Rehabilitation. She received her PhD from Georgetown University in 1991 and came to the NIH as a researcher in the Laboratory of Brain and Cognition. She made her jump to the administrative side of science in 1998 as a Scientific Review Officer. She currently represents the NEI on several trans-NIH coordinating committees (including BRAIN, Behavioral and Social Sciences Research, Medical Rehabilitation Research) and was appointed to the NEI Director’s Audacious Goals Initiative Working Group.

Ruth Rosenholtz

MIT

Ruth Rosenholtz is a Principal Research Scientist in the Department of Brain & Cognitive Sciences at the Massachusetts Institute of Technology. She studies a wide range of visual phenomena, as well as applied vision, using a mix of behavioral methods and computational modeling. Her main research topics include attention and visual search; perceptual organization; and peripheral vision. She is a fellow of the APS, an associate editor for the Journal of Vision, and a VSS board member. Her funding sources have included NSF, NIH, Toyota, and Ford.

Peer Review of NIH NRSA Fellowship Proposals

Tuesday, May 25, 5:00 – 5:30 pm EDT

Speaker: Cibu Thomas

The objective of this session is to provide principal investigators and their sponsors with an overview of the process by which peer review of predoctoral and postdoctoral NRSA proposals is conducted by the NIH Center for Scientific Review.

Cibu Thomas

National Institutes of Health

Dr. Cibu Thomas earned his M.S. in Applied Cognition and Neuroscience from the University of Texas at Dallas, and his Ph.D. in Psychology from Carnegie Mellon University. After postdoctoral training at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital, Harvard Medical School, he served as a Research Fellow at the Center for Neuroscience and Regenerative Medicine. He then served as a Staff Scientist for the Section on Learning and Plasticity in the Laboratory of Brain and Cognition at the National Institute of Mental Health, where his research focused on elucidating the principles governing brain plasticity and its relation to behavior using multimodal MRI and psychophysics. He is currently the Scientific Review Officer for the NIH NRSA Fellowships study section F02B, which manages the scientific review of applications proposing training focused on understanding normal sensory (both auditory and visual), motor, or sensorimotor function, as well as disorders of cognitive, sensory, perceptual, and motor development.

Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University
Presenters: Susana Marcos, Brian Vohnsen, Ann Elsner, Juliette E. McGregor

< Back to 2021 Symposia

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions.

Presentations

Foveal aberrations and the impact on vision

Susana Marcos1; 1Institute of Optics, CSIC

Optical aberrations degrade the quality of images projected on the retina. The magnitude and orientation of the optical aberrations vary dramatically across individuals. They also change with processes such as accommodation and aging, and with corneal and lens disease and surgery. Certain corrections, such as multifocal lenses for presbyopia, modify the aberration pattern to create simultaneous vision or extended depth-of-focus. Ocular aberrometers have made their way into clinical practice. In addition, quantitative 3-D anterior segment imaging has made it possible to quantify the morphology and alignment of the cornea and lens, to link ocular geometry and aberrations through custom eye models, and to shed light on the factors contributing to optical degradation. However, perceived vision is affected by the eye’s aberrations in more ways than those predicted purely by optics, as the eye appears to be adapted to the magnitude and orientation of its own optical blur. Studies using Adaptive Optics reveal not only the impact of manipulating the optical aberrations on vision, but also that the neural code for blur is driven by the subject’s own aberrations.

The integrated Stiles-Crawford effect: understanding the role of pupil size and outer-segment length in foveal vision

Brian Vohnsen1; 1Advanced Optical Imaging Group, School of Physics, University College Dublin, Ireland

The Stiles-Crawford effect of the first kind (SCE-I) describes a psychophysical change in perceived brightness related to the angle of incidence of a ray of light onto the retina. The effect is commonly explained as angular-dependent waveguiding by foveal cones, yet the SCE-I is largely absent in similarly shaped rods, suggesting that a mechanism other than waveguiding is at play. To examine this, we have devised a flickering-pupil method that directly measures the integrated SCE-I for normal pupil sizes in normal vision, rather than relying on mathematical integration of the standard SCE-I function as determined with Maxwellian light. Our results show that the measured effective visibility for normal foveal vision is related to visual pigment density in the three-dimensional retina rather than to waveguiding. We confirm the experimental findings with a numerical absorption model using the Beer-Lambert law for the visual pigments.
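The Beer-Lambert relation referred to above can be sketched in a few lines: the fraction of light transmitted through a pigment layer of optical density D is 10**(-D), so the absorbed fraction is 1 - 10**(-D), with D scaling linearly with pigment path length. This is only a minimal illustration of the law, not the authors' actual model; the function names and the numerical values (a peak axial density of 0.5 and a 30 µm outer segment) are assumptions chosen for the example.

```python
def absorbed_fraction(optical_density):
    """Beer-Lambert law: transmitted fraction T = 10**(-D),
    so the absorbed fraction is 1 - T."""
    return 1.0 - 10.0 ** (-optical_density)

def outer_segment_density(peak_density, length_um, full_length_um=30.0):
    """Optical density scales linearly with pigment path length.
    Illustrative values: peak_density ~0.5 over a ~30 um outer segment."""
    return peak_density * (length_um / full_length_um)

# A full-length outer segment with an assumed peak density of 0.5
# absorbs about 68% of incident photons (1 - 10**-0.5).
d_full = outer_segment_density(0.5, 30.0)
d_half = outer_segment_density(0.5, 15.0)
print(absorbed_fraction(d_full), absorbed_fraction(d_half))
```

Under these assumptions, shortening the outer segment reduces the absorbed fraction sublinearly, which is the kind of length dependence such a numerical absorption model captures.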

Structure of cones and microvasculature in healthy and diseased eyes

Ann Elsner1; 1Indiana University School of Optometry

There are large differences in the distribution of cones in the living human retina, with density at the fovea varying more across individuals than density at greater eccentricities. The size and shape of the foveal avascular zone also vary across individuals, and distances between capillaries can be greatly enlarged in disease. While diseases such as age-related macular degeneration and diabetes have a great impact on both cones and retinal vessels, some cones can survive for decades, although their distributions become more irregular. Surprisingly, in some diseased eyes, cone density at retinal locations outside those most compromised can exceed cone density in control subjects.

Imaging of calcium indicators in retinal ganglion cells for understanding foveal function

Juliette E. McGregor1; 1Centre for Visual Science, University of Rochester

The fovea mediates much of our conscious visual perception but is a delicate retinal structure that is difficult to investigate physiologically using traditional approaches. By expressing the calcium indicator protein GCaMP6s in retinal ganglion cells (RGCs) of the living primate we can optically read out foveal RGC activity in response to visual stimuli presented to the intact eye. Pairing this with adaptive optics ophthalmoscopy it is possible to both present highly stabilized visual stimuli to the fovea and read out retinal activity on a cellular scale in the living animal. This approach has allowed us to map the functional architecture of the fovea at the retinal level and to classify RGCs in vivo based on their responses to chromatic stimuli. Recently we have used this platform as a pre-clinical testbed to demonstrate successful restoration of foveal RGC responses following optogenetic therapy.

< Back to 2021 Symposia

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem
Presenters: Jeremy M Wolfe, Shaul Hochstein, Catherine Tallon-Baudry, James DiCarlo, Merav Ahissar

< Back to 2021 Symposia

Forty years ago, Anne Treisman presented Feature Integration Theory (FIT; Treisman & Gelade, 1980). FIT proposed a parallel, preattentive first stage and a serial second stage controlled by visual selective attention, so that search tasks could be divided into those performed by the first stage, in parallel, and those requiring serial processing and further “binding” in an object file (Kahneman, Treisman, & Gibbs, 1992). Ten years later, Jeremy Wolfe expanded FIT with Guided Search Theory (GST), suggesting that information from the first stage could guide selective attention in the second (Wolfe, Cave & Franzel, 1989; Wolfe, 1994). His lab’s recent visual search studies enhanced this theory (Wolfe, 2007), including studies of factors governing search (Wolfe & Horowitz, 2017), hybrid search (Wolfe, 2012; Nordfang, Wolfe, 2018), and scene comprehension capacity (Wick … Wolfe, 2019). Another ten years later, Shaul Hochstein and Merav Ahissar proposed Reverse Hierarchy Theory (RHT; Hochstein, Ahissar, 2002), turning FIT on its head, suggesting that early conscious gist perception, like early generalized perceptual learning (Ahissar, Hochstein, 1997, 2004), reflects high cortical level representations. Later feedback, returning to lower levels, allows for conscious perception of scene details, already represented in earlier areas. Feedback also enables detail-specific learning. Follow-up studies found that the primacy of top-level gist perception leads to the counter-intuitive results that faces pop out of heterogeneous object displays (Hershler, Hochstein, 2005), individuals with neglect syndrome are better at global tasks (Pavlovskaya … Hochstein, 2015), and gist perception includes ensemble statistics (Khayat, Hochstein, 2018, 2019; Hochstein et al., 2018).
Ahissar’s lab mapped RHT dynamics to auditory systems (Ahissar, 2007; Ahissar et al., 2008), in both perception and successful/failed (from developmental disabilities) skill acquisition (Lieder … Ahissar, 2019). James DiCarlo has been pivotal in confronting feedforward-only versus recurrency-integrating network models of extra-striate cortex, considering animal/human behavior (DiCarlo, Zoccolan, Rust, 2012; Yamins … DiCarlo, 2014; Yamins, DiCarlo, 2016). His large-scale electrophysiology recordings from behaving primate ventral stream, presented with challenging object-recognition tasks, relate directly to whether recurrent connections are critical or superfluous (Kar … DiCarlo, 2019). He recently developed combined deep artificial neural network modeling, synthesized image presentation, and electrophysiological recording to control neural activity of specific neurons and circuits (Bashivan, Kar, DiCarlo, 2019). Catherine Tallon-Baudry uses MEG/EEG recordings to study neural correlates of conscious perception (Tallon-Baudry, 2012). She studied roles of human brain oscillatory activity in object representation and visual search tasks (Tallon-Baudry, 2009), analyzing effects of attention and awareness (Wyart, Tallon-Baudry, 2009). She has directly tested, with behavior and MEG recording, implications of hierarchy and reverse hierarchy theories, including global information processing being first and mandatory in conscious perception (Campana, Tallon-Baudry, 2013; Campana … Tallon-Baudry, 2016). In summary, bottom-up versus top-down processing theories reflect on the essence of perception: the dichotomy of rapid vision-at-a-glance versus slower vision-with-scrutiny, roles of attention, hierarchy of visual representation levels, roles of feedback connections, sites and mechanisms of various visual phenomena, and sources of perceptual/cognitive deficits (neglect, dyslexia, ASD).
Speakers at the proposed symposium will address these issues from both a historical and a forward-looking perspective.

Presentations

Is Guided Search 6.0 compatible with Reverse Hierarchy Theory?

Jeremy M Wolfe1; 1Harvard Medical School and Visual Attention Lab Brigham & Women’s Hospital

It has been 30 years since the first version of the Guided Search (GS) model of visual search was published. As new data about search accumulated, GS needed modification. The latest version is GS6. GS argues that visual processing is capacity-limited and that attention is needed to “bind” features together into recognizable objects. The core idea of GS is that the deployment of attention is not random but is “guided” from object to object. For example, in a search for your black shoe, search would be guided toward black items. Earlier versions of GS focused on top-down (user-driven) and bottom-up (salience) guidance by basic features like color. Subsequent research adds guidance by history of search (e.g. priming), value of the target, and, most importantly, scene structure and meaning. Your search for the shoe will be guided by your understanding of the scene, including some sophisticated information about scene structure and meaning that is available “preattentively”. In acknowledging the initial, preattentive availability of something more than simple features, GS6 moves closer to ideas that are central to the Reverse Hierarchy Theory of Hochstein and Ahissar. As is so often true in our field, this is another instance where the answer is not Theory A or Theory B, even when they seem diametrically opposed. The next theory tends to borrow and synthesize good ideas from both predecessors.

Gist perception precedes awareness of details in various tasks and populations

Shaul Hochstein1; 1Life Sciences, Hebrew University, Jerusalem

Reverse Hierarchy Theory proposes several dramatic propositions regarding conscious visual perception. These include the suggestion that, while the visual system receives scene details and builds from them representations of the objects, layout, and structure of the scene, nevertheless, the first conscious percept is that of the gist of the scene – the result of implicit bottom-up processing. Only later does conscious perception attain scene details by return to lower cortical area representations. Recent studies at our lab analyzed phenomena whereby participants receive and perceive the gist of the scene before and without need for consciously knowing the details from which the gist is constructed. One striking conclusion is that “pop-out” is an early high-level effect, and is therefore not restricted to basic element features. Thus, faces pop-out from heterogeneous objects, and participants are unaware of rejected objects. Our recent studies of ensemble statistics perception find that computing set mean does not require knowledge of its individuals. This mathematically-improbable computation is both useful and natural for neural networks. I shall discuss just how and why set means are computed without need for explicit representation of individuals. Interestingly, our studies of neglect patients find that their deficit is in terms of tasks requiring focused attention to local details, and not for those requiring only global perception. Neglect patients are quite good at pop-out detection and include left-side elements in ensemble perception.

From global to local in conscious vision: behavior & MEG

Catherine Tallon-Baudry1; 1CNRS Cognitive Neuroscience, Ecole Normale Supérieure, Paris

The reverse hierarchy theory makes strong predictions about conscious vision. Local details would be processed in early visual areas before being rapidly and automatically combined into global information in higher-order areas, where conscious percepts would initially emerge. The theory thus predicts that consciousness arises initially in higher-order visual areas, independently of attention and task, and that additional, optional attentional processes operating from top to bottom are needed to retrieve local details. We designed novel textured stimuli that, as opposed to Navon’s letters, are truly hierarchical. Taking advantage of both behavioral measures and the decoding of MEG data, we show that global information is consciously perceived faster than local details, and that global information is computed regardless of task demands during early visual processing. These results support the idea that global dominance in conscious percepts originates in the hierarchical organization of the visual system. Implications for the nature of conscious visual experience and its underlying neural mechanisms will be discussed.

Next-generation models of recurrent computations in the ventral visual stream

James DiCarlo1; 1Neuroscience, McGovern Inst. & Brain & Cognitive Sci., MIT

Understanding mechanisms underlying visual intelligence requires combined efforts of brain and cognitive scientists, and forward engineering emulating intelligent behavior (“AI engineering”). This “reverse-engineering” approach has produced more accurate models of vision. Specifically, a family of deep artificial neural-network (ANN) architectures arose from biology’s neural network for object vision — the ventral visual stream. Engineering advances applied to this ANN family produced specific ANNs whose internal in silico “neurons” are surprisingly accurate models of individual ventral stream neurons, and that now underlie artificial vision technologies. We and others have recently demonstrated a new use for these models in brain science — their ability to design patterns of light energy on the retina that control neuronal activity deep in the brain. The reverse engineering iteration loop — respectable ANN models to new ventral stream data to even better ANN models — is accelerating. My talk will discuss this loop: experimental benchmarks for in silico ventral streams, key deviations from the biological ventral stream revealed by those benchmarks, and newer in silico ventral streams that partly close those differences. Recent experimental benchmarks argue that automatically-evoked recurrent processing is critically important to even the first 300 ms of visual processing, implying that conceptually simpler, feedforward-only ANN models are no longer tenable as accurate in silico ventral streams. Our broader aim is to nurture and incentivize next-generation models of the ventral stream via a community software platform termed “Brain-Score”, with the goal of producing progress that individual research groups may be unable to achieve.

Visual and non-visual skill acquisition – success and failure

Merav Ahissar1; 1Psychology Department, Social Sciences & ELSC, Hebrew University, Israel

Acquiring expert skills requires years of experience – whether these skills are visual (e.g. face identification), motor (playing tennis), or cognitive (mastering chess). In 1977, Shiffrin & Schneider proposed an influential stimulus-driven, bottom-up theory of expertise automaticity, involving mapping stimuli to their consistent responses. Integrating many studies since, I propose a general, top-down theory of skill acquisition. Novice performance is based on the high-level, multiple-demand (Duncan, 2010) fronto-parietal system; with practice, specific experiences are gradually represented in lower-level, domain-specific temporal regions. This gradual process of learning-induced reverse hierarchies is enabled by detection and integration of task-relevant regularities. Top-down driven learning allows formation of task-relevant mappings and representations. These in turn form a space that affords task-consistent interpolations (e.g. representing letters in a manner suited to letter identification rather than to visual similarity). These dynamics characterize successful skill acquisition. Some populations, however, have reduced sensitivity to task-related regularities, which hinders their skill acquisition and prevents them from acquiring specific expertise even after massive training. I propose that skill-acquisition failure, whether perceptual or cognitive, reflects specific difficulties in detecting and integrating task-relevant regularities, impeding the formation of temporal-area expertise. Such is the case for individuals with dyslexia (reduced retention of temporal regularities; Jaffe-Dax et al., 2017), who fail to form an expert visual word-form area, and for individuals with autism (who integrate regularities too slowly for online updating; Lieder et al., 2019). Based on this general conceptualization, I will further discuss this systematic impediment.

< Back to 2021 Symposia

2021 Symposia

Early Processing of Foveal Vision

Organizers: Lisa Ostrin1, David Brainard2, Lynne Kiorpes3; 1University of Houston College of Optometry, 2University of Pennsylvania, 3New York University

This year’s biennial ARVO at VSS symposium focuses on early stages of visual processing at the fovea. Speakers will present recent work related to optical, vascular, and neural factors contributing to vision, as assessed with advanced imaging techniques. The work presented in this session encompasses clinical and translational research topics, and speakers will discuss normal and diseased conditions. More…

Wait for it: 20 years of temporal orienting

Organizers: Nir Shalev1,2,3, Anna Christina (Kia) Nobre1,2,3; 1Department of Experimental Psychology, University of Oxford, 2Wellcome Centre for Integrative Neuroscience, University of Oxford, 3Oxford Centre for Human Brain Activity, University of Oxford

Time is an essential dimension framing our behaviour. In considering adaptive behaviour in dynamic environments, it is essential to consider how our psychological and neural systems pick up on temporal regularities to prepare for events unfolding over time. The last two decades have witnessed a renaissance of interest in understanding how we orient attention in time to anticipate relevant moments. New experimental approaches have proliferated and demonstrated how we derive and utilise recurring temporal rhythms, associations, probabilities, and sequences to enhance perception. We bring together researchers from across the globe exploring the fourth dimension of selective attention with complementary approaches. More…

What we learn about the visual system by studying non-human primates: Past, present and future

Organizers: Rich Krauzlis1, Michele Basso2; 1National Eye Institute, 2Brain Research Institute, UCLA

Non-human primates (NHPs) are the premier animal model for understanding the brain circuits and neuronal properties that accomplish vision. This symposium will take a “look back” at what we have learned about vision over the past 20 years by studying NHPs, and also “look forward” to the emerging opportunities provided by new techniques and approaches. The 20th anniversary of VSS is the ideal occasion to present this overview of NHP research to the general VSS membership, with the broader goal of promoting increased dialogue and collaboration between NHP and non-NHP vision researchers. More…

What has the past 20 years of neuroimaging taught us about human vision and where do we go from here?

Organizers: Susan Wardle1, Chris Baker1; 1National Institutes of Health

Over the past 20 years, neuroimaging methods have become increasingly popular for studying the neural mechanisms of vision in the human brain. To celebrate 20 years of VSS this symposium will focus on the contribution that brain imaging techniques have made to our field of vision science. The aim is to provide both a historical context and an overview of current trends for the role of neuroimaging in vision science. This will lead to informed discussion about what future directions will prove most fruitful for answering fundamental questions in vision science. More…

Feedforward & Recurrent Streams in Visual Perception

Organizers: Shaul Hochstein1, Merav Ahissar2; 1Life Sciences, Hebrew University, Jerusalem, 2Psychology, Hebrew University, Jerusalem

Interactions of bottom-up and top-down mechanisms in visual perception are heatedly debated to this day. The aim of the proposed symposium is to review the history, progress, and prospects of our understanding of the roles of feedforward and recurrent processing streams. Where and how does top-down influence kick in? Is it off-line, as suggested by some deep-learning networks? Is it an essential aspect governing bottom-up flow at every stage, as in predictive processing? We shall critically consider the continued endurance of these models, their meshing with current state-of-the-art theories and accumulating evidence, and, most importantly, the outlook for future understanding. More…

What’s new in visual development?

Organizers: Oliver Braddick1, Janette Atkinson2; 1University of Oxford, 2University College London

Since 2000, visual developmental science has advanced beyond defining how and when basic visual functions emerge during childhood. Advances in structural MRI, fMRI and near-infrared spectroscopy have identified localised visual brain networks even in early months of life, including networks identifying objects and faces. Newly refined eye tracking has examined how oculomotor function relates to the effects of visual experience underlying strabismus and amblyopia. New evidence has allowed us to model developing visuocognitive processes such as decision-making and attention. This symposium illustrates how such advances, ideas and challenges enhance understanding of visual development, including infants and children with developmental disorders. More…