Cutting across the top-down-bottom-up dichotomy in attentional capture research

Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): J. Eric T. Taylor, Brain and Mind Institute at Western University
Presenters: Nicholas Gaspelin, Matthew Hilchey, Dominique Lamy, Stefanie Becker, Andrew B. Leber


Research on attentional selection describes the various factors that determine what information is ignored and what information is processed. These factors are commonly described as either bottom-up or top-down, indicating whether stimulus properties or an observer’s goals determine the outcome of selection. Research on selection typically adheres strongly to one of these two perspectives; the field is divided. The aim of this symposium is to generate discussion and highlight new developments in the study of attentional selection that do not conform to the bifurcated approach that has characterized the field for some time (or trifurcated, with respect to recent models emphasizing the role of selection history). The research presented in this symposium does not presuppose that selection can be easily or meaningfully dichotomized. As such, the theme of the symposium is cutting across the top-down-bottom-up dichotomy in attentional selection research. To achieve this, presenters in this session either share data that cannot be easily explained within the top-down or bottom-up framework, or propose alternatives to existing descriptions of the sources of attentional control. Theoretically, the symposium will begin with presentations that attempt to resolve the dichotomy with a new role for suppression (Gaspelin & Luck) or further complicate it with typically bottom-up patterns of behaviour in response to unchanging stimuli (Hilchey, Taylor, & Pratt). The discussion then turns to demonstrations that the bottom-up, top-down, and selection-history sources of control variously operate on different perceptual and attentional processes (Lamy & Zivony; Becker & Martin), complicating our categorization of sources of control. Finally, the session will conclude with an argument for more thorough descriptions of sources of control (Leber & Irons). In summary, these researchers will present cutting-edge developments using converging methodologies (chronometry, EEG, and eye-tracking measures) that further our understanding of attentional selection and advance attentional capture research beyond its current dichotomy. Given the heated history of this debate and the importance of the theoretical question, we expect this symposium to be of interest to a wide audience of researchers at VSS, especially those interested in visual attention and cognitive control.

Mechanisms Underlying Suppression of Attentional Capture by Salient Stimuli

Speaker: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis
Additional Authors: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis; Carly J. Leonard, Center for Mind and Brain at the University of California, Davis; Steven J. Luck, Center for Mind and Brain at the University of California, Davis

Researchers have long debated the nature of cognitive control in vision, with the field being dominated by two theoretical camps. Stimulus-driven theories claim that visual attention is automatically captured by salient stimuli, whereas goal-driven theories argue that capture depends critically on the goals of the viewer. To resolve this debate, we have previously provided key evidence for a new hybrid model called the signal suppression hypothesis. According to this account, all salient stimuli generate an active salience signal that automatically attempts to guide visual attention; however, this signal can be actively suppressed. In the current talk, we review the converging evidence for this active suppression of salient items, using behavioral, eye-tracking, and electrophysiological methods. We will also discuss the cognitive mechanisms underlying suppression effects and directions for future research.

Beyond the new-event paradigm in visual attention research: Can completely static stimuli capture attention?

Speaker: Matthew Hilchey, University of Toronto
Additional Authors: Matthew D. Hilchey, University of Toronto, J. Eric T. Taylor, Brain and Mind Institute at Western University; Jay Pratt, University of Toronto

The last several decades of attention research have focused almost exclusively on paradigms that introduce new perceptual objects or salient sensory changes to the visual environment in order to determine how attention is captured to those locations. There are a handful of exceptions, and in the spirit of those studies, we asked whether a completely unchanging stimulus can attract attention, using variations of classic additional-singleton and cueing paradigms. In the additional-singleton tasks, we presented a preview array of six uniform circles. After a short delay, one circle changed in form and luminance (the target location), four others changed luminance, and the remaining circle was left physically unchanged. The results indicated that attention was attracted toward the vicinity of the only unchanging stimulus, regardless of whether the circles around it increased or decreased in luminance. In the cueing tasks, cueing was achieved by changing the luminance of five circles in the object preview array either 150 or 1000 ms before the onset of a target. Under certain conditions, we observed canonical patterns of facilitation and inhibition emerging from the location containing the physically unchanging cue stimulus. Taken together, the findings suggest that a completely unchanging stimulus, which bears no obvious resemblance to the target, can attract attention in certain situations.

Stimulus salience, current goals and selection history do not affect the same perceptual processes

Speaker: Dominique Lamy, Tel Aviv University
Additional Authors: Dominique Lamy, Tel Aviv University; Alon Zivony, Tel Aviv University

When exposed to a visual scene, our perceptual system performs several successive processes. During the preattentive stage, the attentional priority accruing to each location is computed. Then, attention is shifted towards the highest-priority location. Finally, the visual properties at that location are processed. Although most attention models posit that stimulus-driven and goal-directed processes combine to determine attentional priority, demonstrations of purely stimulus-driven capture are surprisingly rare. In addition, the consequences of stimulus-driven and goal-directed capture on perceptual processing have not been fully described. Specifically, whether attention can be disengaged from a distractor before its properties have been processed is unclear. Finally, the strict dichotomy between bottom-up and top-down attentional control has been challenged based on the claim that selection history also biases attentional weights on the priority map. Our objective was to clarify what perceptual processes stimulus salience, current goals and selection history affect. We used a feature-search spatial-cueing paradigm. We showed that (a) unlike stimulus salience and current goals, selection history does not modulate attentional priority, but only perceptual processes following attentional selection; (b) a salient distractor not matching search goals may capture attention but attention can be disengaged from this distractor’s location before its properties are fully processed; and (c) attentional capture by a distractor sharing the target feature entails that this distractor’s properties are mandatorily processed.

Which features guide visual attention, and how do they do it?

Speaker: Stefanie Becker, The University of Queensland
Additional Authors: Stefanie Becker, The University of Queensland; Aimee Martin, The University of Queensland

Previous studies purport to show that salient irrelevant items can attract attention involuntarily, against the intentions and goals of an observer. However, the corresponding evidence originates predominantly from RT and eye-movement studies, whereas EEG studies have largely failed to support saliency capture. In the present study, we examined the effects of salient colour distractors on search for a known colour target when the distractor was similar vs. dissimilar to the target. We used both eye tracking and EEG (in separate experiments), and also investigated participants’ awareness of the features of the irrelevant distractors. The results showed that capture by irrelevant distractors was strongly top-down modulated, with target-similar distractors attracting attention much more strongly, and being remembered better, than salient distractors. Awareness of the distractor correlated more strongly with initial capture than with attentional dwelling on the distractor after it was selected. The salient distractor enjoyed no noticeable advantage over non-salient control distractors on implicit measures, but was overall reported with higher accuracy than non-salient distractors. This raises the interesting possibility that salient items may primarily boost visual processes directly, by requiring less attention for accurate perception, rather than by summoning spatial attention.

Toward a profile of goal-directed attentional control

Speaker: Andrew B. Leber, The Ohio State University
Additional Authors: Andrew B. Leber, The Ohio State University; Jessica L. Irons, The Ohio State University

Recent criticism of the classic bottom-up/top-down dichotomy of attention has deservedly focused on the existence of experience-driven factors outside this dichotomy. However, as researchers seek a better framework characterizing all control sources, a thorough re-evaluation of the top-down, or goal-directed, component is imperative. Studies of this component have richly documented the ways in which goals strategically modulate attentional control, but surprisingly little is known about how individuals arrive at their chosen strategies. Consider that manipulating goal-directed control commonly relies on experimenter instruction, which lacks ecological validity and may not always be complied with. To better characterize the factors governing goal-directed control, we recently created the adaptive choice visual search paradigm. Here, observers can freely choose between two targets on each trial, while we cyclically vary the relative efficacy of searching for each target. That is, on some trials it is faster to search for a red target than a blue target, while on other trials the opposite is true. Results using this paradigm have shown that choice behavior is far from optimal and appears largely determined by competing drives to maximize performance and minimize effort. Further, individual differences in performance are stable across sessions while also being malleable to experimental manipulations emphasizing one competing drive (e.g., reward, which motivates individuals to maximize performance). This research represents an initial step toward characterizing an individual profile of goal-directed control that extends beyond the classic understanding of “top-down” attention and promises to contribute to a more accurate framework of attentional control.


A scene is more than the sum of its objects: The mechanisms of object-object and object-scene integration

Time/Room: Friday, May 19, 2017, 12:00 – 2:00 pm, Talk Room 1
Organizer(s): Liad Mudrik, Tel Aviv University and Melissa Võ, Goethe University Frankfurt
Presenters: Michelle Greene, Monica S. Castelhano, Melissa L.H. Võ, Nurit Gronau, Liad Mudrik


Symposium Description

In the lab, vision researchers typically try to create “clean”, controlled environments and stimuli in order to tease apart the different processes that are involved in seeing. Yet in real life, visual comprehension is never a sterile process: objects appear with other objects in cluttered, rich scenes, which have certain spatial and semantic properties. In recent years, more and more studies have focused on object-object and object-scene relations as possible guiding principles of vision. The proposed symposium aims to present current findings in this continuously developing field, while focusing on two key questions that have attracted substantial scientific interest in recent years: how do scene-object and object-object relations influence object processing, and what are the necessary conditions for deciphering these relations? Greene, Castelhano, and Võ will each tackle the first question in different ways, using information-theoretic measures, visual search findings, eye movements, and EEG. The second question will be discussed with respect to attention and consciousness: Võ’s findings suggest automatic processing of object-scene relations, but do not rule out the need for attention. This view is corroborated and further stressed by Gronau’s results. With respect to consciousness, however, Mudrik will present behavioral and neural data suggesting that consciousness may not be an immediate condition for relations processing, but rather serves as a necessary enabling factor. Taken together, these talks should lay the ground for an integrative discussion of both complementary and conflicting findings. Whether these are based on different theoretical assumptions, methodologies, or experimental approaches, the core of the symposium will speak to how best to tackle the investigation of the complexity of real-world scene perception.

Presentations

Measuring the Efficiency of Contextual Knowledge

Speaker: Michelle Greene, Stanford University

The last few years have brought us both large-scale image databases and the ability to crowd-source human data collection, allowing us to measure contextual statistics in real world scenes (Greene, 2013). How much contextual information is there, and how efficiently do people use it? We created a visual analog to a guessing game suggested by Claude Shannon (1951) to measure the information scenes and objects share. In our game, 555 participants on Amazon’s Mechanical Turk (AMT) viewed scenes in which a single object was covered by an opaque bounding box. Participants were instructed to guess about the identity of the hidden object until correct. Participants were paid per trial, and each trial terminated upon correctly guessing the object, so participants were incentivized to guess as efficiently as possible. Using information theoretic measures, we found that scene context can be encoded with less than 2 bits per object, a level of redundancy that is even greater than that of English text. To assess the information from scene category, we ran a second experiment in which the image was replaced by the scene category name. Participants still outperformed the entropy of the database, suggesting that the majority of contextual knowledge is carried by the category schema. Taken together, these results suggest that not only is there a great deal of information about objects coming from scene categories, but that this information is efficiently encoded by the human mind.
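To make the bits-per-object figure concrete, the following minimal sketch shows how trial-by-trial data from such a guessing game can be converted into a Shannon-entropy estimate in bits. This is not the authors’ analysis code; the guess counts are invented purely for illustration.

```python
# Hypothetical sketch: estimating bits-per-object from a Shannon-style
# guessing game. The guess counts below are made-up illustration data,
# not results from the study described above.
import math
from collections import Counter

# Number of guesses each (hypothetical) trial required before the hidden
# object was named correctly.
guesses_per_trial = [1, 1, 2, 1, 3, 1, 2, 1, 1, 4, 1, 2, 1, 1, 2]

# Empirical distribution over the number of guesses needed.
n = len(guesses_per_trial)
probs = [count / n for count in Counter(guesses_per_trial).values()]

# Shannon entropy of that distribution, in bits. Low values indicate that
# scene context makes the hidden object highly predictable.
entropy_bits = -sum(p * math.log2(p) for p in probs)
print(f"Estimated uncertainty: {entropy_bits:.2f} bits per object")
```

The fewer guesses observers need (and the more those counts concentrate on one or two values), the lower the entropy of the distribution, which is the sense in which scene context can compress object identity to only a couple of bits.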

Where in the world?: Explaining Scene Context Effects during Visual Search through Object-Scene Spatial Associations

Speaker: Monica S. Castelhano, Queen’s University

The spatial relationship between objects and scenes and its effects on visual search performance have been well established. Here, we examine how object-scene spatial associations support scene context effects on eye movement guidance and search efficiency. We reframed two classic visual search paradigms (set size and sudden onset) according to the spatial association between the target object and the scene. Using the recently proposed Surface Guidance Framework, we operationalize target-relevant and target-irrelevant regions. Scenes are divided into three regions (upper, mid, lower) that correspond with possible relevant surfaces (wall, countertop, floor). Target-relevant regions are defined according to the surface on which the target is likely to appear (e.g., painting, toaster, rug). In the first experiment, we explored how spatial associations affect search by manipulating set size in either target-relevant or target-irrelevant regions. We found that only set size increases in target-relevant regions adversely affected search performance. In the second experiment, we manipulated whether a suddenly-onsetting distractor object appeared in a target-relevant or target-irrelevant region. We found that fixations to the distractor were significantly more likely, and search performance was negatively affected, in the target-relevant condition. The Surface Guidance Framework allows for further exploration of how object-scene spatial associations can be used to quickly narrow processing to specific areas of the scene and largely ignore information in other areas. Viewing effects of scene context through the lens of target relevancy allows us to develop a new understanding of how the spatial associations between objects and scenes can affect performance.

What drives semantic processing of objects in scenes?

Speaker: Melissa L.H. Võ, Goethe University Frankfurt

Objects hardly ever appear in isolation, but are usually embedded in a larger scene context. This context — determined, e.g., by the co-occurrence of other objects or the semantics of the scene as a whole — has a large impact on the processing of each and every object. Here I will present a series of eye tracking and EEG studies from our lab that 1) make use of the known time-course and neuronal signature of scene semantic processing to test whether seemingly meaningless textures of scenes are sufficient to modulate semantic object processing, and 2) raise the question of its automaticity. For instance, we have previously shown that semantically inconsistent objects trigger an N400 ERP response similar to the one known from language processing. Moreover, an additional but earlier N300 response signals perceptual processing difficulties that are in line with classic findings of impeded object identification from the 1980s. We have since used this neuronal signature to investigate scene context effects on object processing and recently found that a scene’s mere summary statistics — visualized as seemingly meaningless textures — elicit a very similar N400 response. Further, we have shown that observers looking for target letters superimposed on scenes fixated task-irrelevant, semantically inconsistent objects embedded in the scenes to a greater degree, and did so without explicit memory for these objects. Manipulating the number of superimposed letters reduced this effect, but not entirely. As part of this symposium, we will discuss the implications of these findings for the question of whether object-scene integration requires attention.

Vision at a glance: the necessity of attention to contextual integration processes

Speaker: Nurit Gronau, The Open University of Israel

Objects that are conceptually consistent with their environment are typically grasped more rapidly and efficiently than objects that are inconsistent with it. The extent to which such contextual integration processes depend on visual attention, however, is largely disputed. The present research examined the necessity of visual attention to object-object and object-scene contextual integration processes during a brief visual glimpse. Participants performed an object classification task on associated object pairs that were either positioned in expected relative locations (e.g., a desk-lamp on a desk) or in unexpected, contextually inconsistent relative locations (e.g., a desk-lamp under a desk). When both stimuli were relevant to task requirements, latencies to spatially consistent object pairs were significantly shorter than to spatially inconsistent pairs. These contextual effects disappeared, however, when spatial attention was drawn to one of the two object stimuli while its counterpart object was positioned outside the focus of attention and was irrelevant to task demands. Subsequent research examined object-object and object-scene associations that are based on categorical relations, rather than on specific spatial and functional relations. Here too, processing of the semantic/categorical relations necessitated allocation of spatial attention, unless an unattended object was explicitly defined as a to-be-detected target. Collectively, our research suggests that associative and integrative contextual processes underlying scene understanding rely on the availability of spatial attentional resources. However, stimuli that comply with task requirements (e.g., a cat or dog in an animal-detection task, but not in a vehicle-detection task) may benefit from efficient processing even when appearing outside the main focus of visual attention.

Object-object and object-scene integration: the role of conscious processing

Speaker: Liad Mudrik, Tel Aviv University

On a typical day, we perform numerous integration processes; we repeatedly integrate objects with the scenes in which they appear, and decipher the relations between objects, resting both on their tendency to co-occur and on their semantic associations. Such integration seems effortless, almost automatic, yet computationally speaking it is highly complicated and challenging. This apparent contradiction evokes the question of consciousness’ role in the process: is integration automatic enough to obviate the need for conscious processing, or does its complexity necessitate the involvement of conscious experience? In this talk, I will present EEG, fMRI, and behavioral experiments that tap into consciousness’ role in object-scene integration and object-object integration. The former revisits subjects’ ability to integrate the relations (congruency/incongruency) between an object and the scene in which it appears. The latter examines the processing of the relations between two objects, in an attempt to differentiate between associative relations (i.e., relations that rest on repeated co-occurrences of the two objects) and abstract ones (i.e., relations that are more conceptual, between two objects that do not tend to co-appear but are nevertheless related). I will claim that in both types of integration, consciousness may function as an enabling factor rather than an immediate necessary condition.


2017 Symposia

S1 – A scene is more than the sum of its objects: The mechanisms of object-object and object-scene integration

Organizer(s): Liad Mudrik, Tel Aviv University and Melissa Võ, Goethe University Frankfurt
Time/Room: Friday, May 19, 2017, 12:00 – 2:00 pm, Talk Room 1

Our visual world is much more complex than most laboratory experiments make us believe. Nevertheless, this complexity turns out not to be a drawback, but actually a feature, because complex real-world scenes have defined spatial and semantic properties which allow us to efficiently perceive and interact with our environment. In this symposium we will present recent advances in assessing how scene-object and object-object relations influence processing, while discussing the necessary conditions for deciphering such relations. By considering the complexity of real-world scenes as information that can be exploited, we can develop new approaches for examining real-world scene perception.

S2 – The Brain Correlates of Perception and Action: from Neural Activity to Behavior

Organizer(s): Simona Monaco, Center for Mind/Brain Sciences, University of Trento & Annalisa Bosco, Dept of Pharmacy and Biotech, University of Bologna
Time/Room: Friday, May 19, 2017, 12:00 – 2:00 pm, Pavilion

This symposium offers a comprehensive view of the cortical and subcortical structures involved in perceptual-motor integration for eye and hand movements in contexts that resemble real-life situations. By gathering scientists from neurophysiology to neuroimaging and psychophysics, we provide an understanding of how vision is used to guide action, from the neuronal level to behavior. This knowledge pushes our understanding of visually-guided motor control outside the constraints of the laboratory and into contexts that we encounter daily in the real world.

S3 – How can you be so sure? Behavioral, computational, and neuroscientific perspectives on metacognition in perceptual decision-making

Organizer(s): Megan Peters, University of California Los Angeles
Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Talk Room 1

Evaluating our certainty in a memory, thought, or perception seems as easy as answering the question, “Are you sure?” But how our brains make these determinations remains unknown. Specifically, does the brain use the same information to answer the questions, “What do you see?” and, “Are you sure?” What brain areas are responsible for doing these calculations, and what rules are used in the process? Why are we sometimes bad at judging the quality of our memories, thoughts, or perceptions? These are the questions we will try to answer in this symposium.

S4 – The Role of Ensemble Statistics in the Visual Periphery

Organizer(s): Brian Odegaard, University of California-Los Angeles
Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Pavilion

The past decades have seen the growth of a tremendous amount of research into the human visual system’s capacity to encode “summary statistics” of items in the world. One recent proposal in the literature has focused on the promise of ensemble statistics to provide an explanatory account of subjective experience in the visual periphery (Cohen, Dennett, & Kanwisher, Trends in Cognitive Sciences, 2016). This symposium will address how ensemble statistics are encoded outside the fovea, and to what extent this capacity explains our experience of the majority of our visual field.

S5 – Cutting across the top-down-bottom-up dichotomy in attentional capture research

Organizer(s): J. Eric T. Taylor, Brain and Mind Institute at Western University
Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Talk Room 1

Research on attentional selection describes the various factors that determine what information is ignored and what information is processed. Broadly speaking, researchers have adopted two explanations for how this occurs, which emphasize either automatic or controlled processing, often presenting evidence that is mutually contradictory. This symposium presents new evidence from five speakers who address this controversy from non-dichotomous perspectives.

S6 – Virtual Reality and Vision Science

Organizer(s): Bas Rokers, University of Wisconsin – Madison & Karen B. Schloss, University of Wisconsin – Madison
Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Pavilion

Virtual and augmented reality (VR/AR) research can answer scientific questions that were previously difficult or impossible to address. VR/AR may also provide novel methods to assist those with visual deficits and treat visual disorders. After a brief introduction by the organizers (Bas Rokers & Karen Schloss), five speakers representing both academia and industry will each give a 20-minute talk, providing an overview of existing research and identifying promising new directions. The session will close with a 15-minute panel to deepen the dialog between industry and vision science. Topics include sensory integration, perception in naturalistic environments, and mixed reality. Symposium attendees may learn how to incorporate AR/VR into their research, identify current issues of interest to both academia and industry, and consider avenues of inquiry that may open with upcoming technological advances.

2017 Meet the Professors

Monday, May 22, 2017, 4:45 – 6:00 pm, Breck Deck North

Online registration for Meet the Professors is closed. There are still a few spaces available. Please meet at Breck Deck North at 4:30 pm if you are interested in attending.

Students and postdocs are invited to the second annual “Meet the Professors” event, Monday afternoon from 4:45 to 6:00 pm, immediately preceding the VSS Dinner and Demo Night. This is an opportunity for a free-wheeling, open-ended discussion with members of the VSS Board and other professors. You might chat about science, the annual meeting, building a career, or whatever comes up.

This year, the event will consist of two 30-minute sessions separated by a 15-minute snack break. Please select a different professor for each session. Space is limited and is assigned on a first-come, first-served basis.

Professors and VSS Board Members

Members of the VSS Board are indicated with an asterisk*, in case you have a specific interest in talking to a member of the board.

David Brainard* (University of Pennsylvania) studies human color vision, with particular interests in the consequences of spatial and spectral sampling by the photoreceptors and in the mechanisms mediating color constancy.

Eli Brenner* (Free University, Amsterdam) studies how visual information is used to guide our actions.

Marisa Carrasco (NYU) uses human psychophysics, neuroimaging, and computational modeling to investigate the relation between the psychological and neural mechanisms involved in visual perception and attention.

Isabel Gauthier (Vanderbilt University) uses behavioral and brain imaging methods to study perceptual expertise, object and face recognition, and individual differences in vision.

Julie Harris (St. Andrews) studies our perception of the 3D world, including binocular vision and 3D motion.  She also has an interest in animal camouflage.

Sheng He (University of Minnesota & Institute of Biophysics, CAS) uses psychophysical and neuroimaging (fMRI, EEG, MEG) methods to study spatiotemporal properties of vision, binocular interaction, visual attention, visual object recognition, and visual awareness.

Michael Herzog (EPFL – Switzerland) studies spatial and temporal vision in healthy and clinical populations.

Todd Horowitz (National Cancer Institute) is broadly interested in how vision science can be leveraged to reduce the burden of cancer, from  improving detection and diagnosis to understanding the cognitive complaints of cancer survivors.

Lynne Kiorpes* (NYU) uses behavioral and neurophysiological approaches to study visual development and visual disability. The goal is to understand the neural limitations on development and the effects of abnormal visual experience.

Dennis Levi (UC Berkeley) studies plasticity both in normal vision, and in humans deprived of normal binocular visual experience, using psychophysics and neuroimaging.

Ennio Mingolla (Northeastern) develops and tests neural network models of visual perception, notably the segmentation, grouping, and contour formation processes of early and middle vision in primates, and works on the transition of these models to technological applications.

Concetta Morrone (University of Pisa) studies the visual system in adults and infants using psychophysical, electrophysiological, brain imaging, and computational techniques. Her more recent research interests include vision during eye movements, perception of time, and plasticity of the adult visual brain.

Tony Norcia* (Stanford University) studies the intricacies of visual development, partly to better understand visual functioning in the adult and abnormal visual processing.

Aude Oliva (MIT) studies human vision and memory, using methods from human perception and cognition, computer science, and human neuroscience (fMRI, MEG).

Mary Peterson (University of Arizona) uses behavioral methods, neuropsychology, ERPs, and fMRI to investigate the competitive processes producing object perception and the interactions between perception and memory.

Jeff Schall* (Vanderbilt University) studies the neural and computational mechanisms that guide, control and monitor visually-guided gaze behavior.

James Tanaka (University of Victoria) studies the cognitive and neural processes of face recognition and object expertise.  He is interested in the perceptual strategies of real world experts, individuals on the autism spectrum and how a perceptual novice becomes an expert.

Preeti Verghese* (Smith-Kettlewell Eye Research Institute) studies spatial vision, visual search and attention, as well as eye and hand movements in normal vision and in individuals with central field loss.

Andrew Watson* (Apple) studies human spatial, temporal and motion processing, computational modeling of vision, and applications of vision science to imaging technology.

Jeremy Wolfe* (Harvard Med & Brigham and Women’s Hospital) studies visual attention and visual search with a special interest in socially important tasks like cancer screening in radiology.

2017 Satellite Events

Wednesday, May 17

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 17 – Friday, May 19, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
9:00 am – 12:00 pm Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zyg Pizlo, Purdue University; Anne Sereno, U. Texas Health Science Center at Houston; Qasim Zaidi, SUNY College of Optometry

The 6th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the VSS conference venue (the Tradewinds Island Resorts in St. Pete Beach, FL) May 17 – May 19. A keynote address will be given by Aude Oliva (MIT).

The early registration fee is $80 for regular participants, $40 for students. More information can be found on the workshop’s website: http://www.conf.purdue.edu/modvis/

Thursday, May 18

Implicit Guidance of Attention: Developing theoretical models

Thursday, May 18, 9:00 am – 6:00 pm, Jasmine/Palm

Organizers: Rebecca Todd, University of British Columbia and Leonardo Chelazzi, University of Verona

Speakers: Leo Chelazzi, Jane Raymond, Rebecca Todd, Andreas Keil, Clayton Hickey, Sarah Shomstein, Ayelet Landau, Brian Anderson, Jan Theeuwes

Visual selective attention is the process by which we tune ourselves to the world so that, of the millions of bits per second transmitted by the retina, the information that is most important to us reaches awareness and guides action. Recently, new areas of attention research have emerged, making sharp divisions between top-down volitional attention and bottom-up automatic capture by visual features much less clear than previously believed. Challenges to this intuitively appealing dichotomy have arisen as researchers have identified factors that guide attention non-strategically and often implicitly (a quality of bottom-up processes) but also rely on prior knowledge or experience (a quality of top-down systems). As a result, a number of researchers have been developing new theoretical frameworks that move beyond the classic attentional dichotomy. This roundtable discussion will bring together researchers from often-siloed investigative tracks who have been investigating effects of reward, emotion, semantic associations, and statistical learning on attentional guidance, as well as underlying neurocognitive mechanisms. The goal of this roundtable is to discuss these emerging frameworks and outstanding questions that arise from considering a broader range of research findings.

Friday, May 19

In the Fondest Memory of Bosco Tjan (Memorial Symposium)

Friday, May 19, 9:00 – 11:30 am, Talk Room 1-2

Organizers: Zhong-lin Lu, The Ohio State University and Susana Chung, University of California, Berkeley

Speakers: Zhong-lin Lu, Gordon Legge, Irving Biederman, Anirvan Nandy, Rachel Millin, Zili Liu, and Susana Chung

Professor Bosco S. Tjan was murdered at the pinnacle of a flourishing academic career on December 2, 2016. The vision science and cognitive neuroscience community lost a brilliant scientist and incisive commentator. I will briefly introduce Bosco’s life and career, and his contributions to vision science and cognitive neuroscience.


Bruce Bridgeman Memorial Symposium

Friday, May 19, 9:00 – 11:30 am, Pavilion

Organizer: Susana Martinez-Conde, State University of New York

Speakers: Stephen L. Macknik, Stanley A. Klein, Susana Martinez-Conde, Paul Dassonville, Cathy Reed, and Laura Thomas

Professor Emeritus of Psychology Bruce Bridgeman was tragically killed on July 10, 2016, after being struck by a bus in Taipei, Taiwan. Those who knew Bruce will remember him for his sharp intellect, genuine sense of humor, intellectual curiosity, thoughtful mentorship, gentle personality, musical talent, and committed peace, social justice, and environmental activism. This symposium will highlight some of Bruce’s many important contributions to perception and cognition, which included spatial vision, perception/action interactions, and the functions and neural basis of consciousness.


Saturday, May 20

How Immersive Eye Tracking Tools and VR Analytics Will Impact Vision Science Research

Saturday, May 20, 12:30 – 2:00 pm, Jasmine/Palm

Organizers: Courtney Gray, SensoMotoric Instruments, Inc. and Annett Schilling, SensoMotoric Instruments GmbH

Speakers: Stephen Macknik, SUNY Downstate Medical Center; Gabriel Diaz, Rochester Institute of Tech; Mary Hayhoe, University of Texas

This event covers the implications of new immersive HMD technologies and dedicated VR analysis solutions for vision science research. Researchers share their experiences and discuss how they believe VR eye tracking headsets and the ability to analyze data from immersive scenarios will positively impact visual cognition and scene perception research.

FoVea (Females of Vision et al) Workshop and Lunch

Saturday, May 20, 12:30 – 2:30 pm, Horizons

Organizers: Diane Beck, University of Illinois; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, McMaster University

Panelists: Marisa Carrasco, New York University and Allison Sekuler, McMaster University

FoVea is a group founded to advance the visibility, impact, and success of women in vision science. To that end, we plan to host a series of professional issues workshops during lunchtime at VSS. We encourage vision scientists of all genders to participate in the workshops.

The topic of the 2017 workshop is Negotiation: When To Do It and How To Do It Successfully. Two panelists will each give a presentation, and then will take questions and comments from the audience. The remainder of the workshop time will be spent networking with other attendees. The panelists are:

  • Marisa Carrasco, Professor of Psychology and Neural Science at New York University who served as the Chair of the Psychology Department for 6 years.
  • Allison Sekuler, Professor of Psychology, Neuroscience & Behaviour and Strategic Advisor to the President and VPs on Academic Issues, McMaster University; past Canada Research Chair in Cognitive Neuroscience (2001-2011), Associate VP & Dean, School of Graduate Studies (2008-2016), and interim VP Research (2015-2016).

A buffet lunch will be available. Registration is required so the appropriate amount of food can be on hand.

Sunday, May 21

Social Hour for Faculty at Primarily Undergraduate Institutions (PUIs)

Sunday, May 21, 12:30 – 2:00 pm, Royal Tern

Organizers: Eriko Self, California State University, Fullerton; Cathy Reed, Claremont McKenna College; and Nestor Matthews, Denison University

Do you work at a primarily undergraduate institution (PUI)? Do you have to find precious time for research and mentoring students amid a heavy teaching load? If so, bring your lunch or just bring yourself to the PUI social and get to know other faculty at PUIs! It will be a great opportunity to share your ideas and concerns.

Vanderbilt-Rochester Vision Centers Party

Sunday, May 21, 7:30 – 10:00 pm, Beachside Sun Decks

Organizers: Geoffrey Woodman, Vanderbilt University and Duje Tadin, University of Rochester

This event brings back the Vanderbilt-Rochester Party that began at the first VSS meetings. This social event will feature free drinks and snacks for all VSS attendees. It will provide attendees with the opportunity to socialize with members of the Rochester Center for Vision Science and the Vanderbilt Vision Research Center in attendance at VSS. This is a good opportunity to talk to potential mentors for graduate or postdoctoral training in vision science.

Monday, May 22

Applicational needs reinvent scientific views

Monday, May 22, 2:00 – 3:00 pm, Jasmine/Palm

Organizers: Katharina Rifai, Iliya V. Ivanov, and Siegfried Wahl, Institute of Ophthalmic Research, University of Tuebingen

Speakers: Eli Peli, Schepens Eye Research Institute; Peter Bex, Northeastern University; Susana Chung, UC Berkeley; Markus Lappe, University of Münster; Michele Rucci, Boston University; Jeff Mulligan, NASA Ames Research Center; Arijit Chakraborty, School of Optometry and Vision Science, University of Waterloo; Ian Erkelens, School of Optometry and Vision Science, University of Waterloo; Kevin MacKenzie, York University and Oculus VR, LCC

Applicational needs have often reinvented views on scientific problems and thus triggered breakthroughs in models and methods. A recent example is augmented/virtual reality, which challenges the visual system with reduced or enriched content and thus triggers scientific questions about the visual system’s robustness.

Nonetheless, the driving character of applications within VSS research has not received focused attention until now. We therefore intend to bring together bright minds in a satellite event at VSS 2017 promoting the scientific drive created by applicational needs.

Tutorial in Bayesian modeling

Monday, May 22, 2:00 – 4:30 pm, Sabal/Sawgrass

Organizer: Wei Ji Ma, New York University

Bayesian models are widespread in vision science. However, their inner workings are often obscure or intimidating to those without a background in modeling. This tutorial, which does not assume any background knowledge, will start by motivating Bayesian models through visual illusions. Then, you as participants will collectively choose a concrete experimental design to build a model for. We will develop the math of the Bayesian model of that task, and implement it in Matlab. You will take home complete code for a Bayesian model. Please bring pen, paper, and if possible, a laptop with Matlab.
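For readers unsure what such a model involves, here is a minimal sketch (in Python rather than the tutorial’s Matlab, with arbitrary example numbers) of the prior-likelihood combination at the core of many Bayesian models of perception:

```python
# Generic illustration of a Bayesian perceptual estimate: combining a Gaussian
# prior with a Gaussian likelihood. Written in Python for illustration only;
# the tutorial itself uses Matlab. All numbers are arbitrary examples.
import math

prior_mean, prior_sd = 0.0, 2.0   # prior belief about a stimulus value
meas, meas_sd = 1.5, 1.0          # noisy sensory measurement and its noise level

# For a Gaussian prior times a Gaussian likelihood, the posterior is Gaussian
# with a precision-weighted mean and summed precisions.
prior_prec = 1.0 / prior_sd**2
meas_prec = 1.0 / meas_sd**2
post_mean = (prior_prec * prior_mean + meas_prec * meas) / (prior_prec + meas_prec)
post_sd = math.sqrt(1.0 / (prior_prec + meas_prec))

print(f"Posterior estimate: {post_mean:.2f} (sd {post_sd:.2f})")
```

Note how the posterior estimate (1.2 in this example) is pulled away from the measurement (1.5) toward the prior mean; biases of this kind are how Bayesian accounts motivate many visual illusions.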

Tutorial is limited to the first 50 people (first come, first-served).

The Experiential Learning Laboratory

Monday, May 22, 2:15 – 3:15 pm, Citrus/Glades

Organizers: Ken Nakayama, Na Li, and Jeremy Wilmer; Harvard University and Wellesley College

Psychology is one of the most popular undergraduate subjects, with some of the highest enrollments. Psychology is also a science. Yet the exposure of the undergraduate population to the actual “hands-on” practice of doing such science is limited. It is rare in an undergraduate curriculum to see the kind of undergraduate laboratories that have been a longstanding tradition in the natural sciences and engineering. It is our premise that well-conceived laboratory experiences for Psychology students have the potential to bring important STEM practices and values to Psychology. This could increase the number of students who have the sophistication to understand science at a deeper level, who can create new knowledge through empirical investigation, and who develop the critical skills to evaluate scientific studies and claims. Critically important here is to supply conditions that engage students more fully by encouraging student-initiated projects and by using this opportunity for them to gain mastery. TELLab’s ease of use and its ability to let students create their own experiments distinguish it from other currently available systems. We invite teachers to try our system for their classes.

Tuesday, May 23

WorldViz VR Workshop

Tuesday, May 23, 1:00 – 2:30 pm, Sabal/Sawgrass

Organizer: Matthias Pusch, WorldViz

Virtual Reality is getting a lot of attention and press lately, but ‘hands on’ experiences with real use cases for this new technology are rare. This session will show what WorldViz has found to work for collaborative VR, and we will set up and try out an interactive VR experience together with the audience.

Wednesday, May 24

Honoring Al Ahumada – Al-apalooza! Talks

Wednesday, May 24, 3:00 – 5:00 pm, Horizons

Organizers: Jeff Mulligan, NASA Ames Research Center and Beau Watson, Apple

A celebration of the life, work, and play of Albert Jil Ahumada, Jr., a whimsical exploration of network learning for spatial and color vision, noise methods, models of photoreceptor positioning, etc. An afternoon session of informal talks will be open to all free of charge, followed by an evening banquet (payment required).

Full details will be posted as they are available at http://visionscience.com/alapalooza/.

Honoring Al Ahumada – Al-apalooza! Dinner

Wednesday, May 24, 7:00 – 10:00 pm, Beachside Sun Decks

Organizers: Jeff Mulligan, NASA Ames Research Center and Beau Watson, Apple

Full details will be posted as they are available at http://visionscience.com/alapalooza/.
