In the Fondest Memory of Bosco Tjan (Memorial Symposium)

Friday, May 19, 2017, 9:00 – 11:30 am, Talk Room 1-2

Organizers: Zhong-lin Lu, The Ohio State University and Susana Chung, University of California, Berkeley

Speakers: Zhong-lin Lu, Gordon Legge, Irving Biederman, Anirvan Nandy, Rachel Millin, Zili Liu, and Susana Chung

Bosco Tjan: An ideal scientific role model

Zhong-Lin Lu, The Ohio State University

Professor Bosco S. Tjan was murdered at the pinnacle of a flourishing academic career on December 2, 2016. The vision science and cognitive neuroscience community lost a brilliant scientist and incisive commentator. I will briefly introduce Bosco’s life and career, and his contributions to vision science and cognitive neuroscience.

Bosco Tjan: A Mentor’s Perspective on Ideal Observers and an Ideal Student

Gordon Legge, University of Minnesota

I will share my perspective on Bosco’s early history in vision science, focusing on his interest in the theoretical framework of ideal observers. I will discuss examples from his work on 3D object recognition, letter recognition and reading.

Bosco Tjan: The Contributions to Our Understanding of Higher Level Vision Made by an Engineer in Psychologist’s Clothing

Irving Biederman, University of Southern California

Bosco maintained a long-standing interest in shape recognition. In an extensive series of collaborations, he provided invaluable input and guidance to research: a) assessing the nature of the representation of faces, b) applying ideal observer and reverse correlation methodologies to understanding face recognition, c) exploring what the defining operations for the localization of LOC, the region critical for shape recognition, were actually reflecting, and d) making key contributions to the design and functioning of USC’s Dornsife Imaging Center for Cognitive Neuroscience.

Bosco Tjan: A Beautiful Mind

Anirvan Nandy, Salk Institute for Biological Studies

Bosco was fascinated with the phenomenon of visual crowding – our striking inability to recognize objects in clutter, especially in the peripheral visual fields. Bosco realized that the study of crowding provides a unique window into object recognition, since crowding represents a “natural breakdown” of the object recognition system that we otherwise take for granted. I will talk about a parsimonious theory that Bosco and I proposed, which aims to unify several disparate aspects of crowding within a common framework.

Bosco’s insightful approach to fMRI

Rachel Millin, University of Washington

Bosco was both a brilliant vision scientist and a creative methodologist. Through his work using fMRI to study visual processing, he became interested in how we could apply our limited understanding of the fMRI signal to better understand our experimental results. I will discuss a model that Bosco and I developed to simulate fMRI in V1, which aims to distinguish neural from non-neural contributions to fMRI results in studies of visual perception.
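
The specific V1 model is not reproduced here, but the standard forward-modeling logic it builds on is easy to illustrate: hypothesize a neural time course, convolve it with a hemodynamic response function (HRF), and compare the predicted BOLD signal against data. Below is a minimal Python sketch of that generic textbook approach, not the Tjan-Millin model itself; the double-gamma HRF parameters and the 2-second neural burst are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t):
    """SPM-style double-gamma hemodynamic response function:
    a response peaking near 5 s minus a small undershoot near 15 s."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

dt = 0.1                                   # time step in seconds
t = np.arange(0.0, 30.0, dt)

# Hypothetical neural time course: a 2-s burst of activity starting at t = 5 s.
neural = np.zeros_like(t)
neural[(t >= 5.0) & (t < 7.0)] = 1.0

# Linear forward model: predicted BOLD = neural activity convolved with the HRF.
bold = np.convolve(neural, canonical_hrf(t))[: len(t)] * dt

print(f"neural burst onset: 5.0 s; predicted BOLD peak: {t[np.argmax(bold)]:.1f} s")
```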

BOLD-o-metric Function in Motion Discrimination

Zili Liu, UCLA

We investigated fMRI BOLD responses in random-dot motion direction discrimination, in both event-related and blocked designs. Behaviorally, we obtained the expected psychometric functions as the angular difference between the motion direction and reference direction was systematically varied. Surprisingly, however, we found little BOLD modulation in the visual cortex as the task demand varied. (In collaboration with Bosco Tjan, Ren Na, Taiyong Bi, and Fang Fang)

Bosco Tjan: The Translator

Susana Chung, University of California, Berkeley

Bosco was not a clinician, yet he had a strong interest in translating his knowledge and skills in basic science to issues that relate to people with impaired vision. I will present some of my collaborative work with Bosco that shed light on how the brain adapts to vision loss in patients with macular disease.

FoVea Travel and Networking Award

Females of Vision et al. (FoVea) is excited to announce its inaugural round of the FoVea Travel and Networking Award, funded by the National Science Foundation. Submissions are due on February 20, 2017.

The FoVea Travel and Networking Award is open to female members of the Vision Science Society (VSS) in pre-doctoral, post-doctoral, and pre-tenure faculty or research scientist positions. Up to 5 female vision scientists will be awarded $1,600 to cover costs involved in attending the 2017 VSS meeting, including membership fees, conference registration fees, and travel expenses.

FoVea created this award as part of its mission to advance the visibility, impact, and success of women in vision science. A recent report from Cooper and Radonjić (2016) indicated that in 2015, the ratio of women to men in VSS was near equal at the pre-doctoral level (1:1.13), but decreased as career stage increased. The decline is symptomatic of forces that impede the professional development of female vision scientists. A key aspect of professional development is building a professional network to support scientific pursuits and to provide mentorship at critical junctures in one’s academic career. The FoVea Travel and Networking Award will help female vision scientists build their professional network by encouraging them to meet with at least one Networking Target at the VSS meeting to discuss their research and consider potential for collaboration. The Networking Target(s) can be of any gender.

The goals of the FoVea Travel and Networking award are to:

  1. Increase the visibility of women by giving them the opportunity to meet and have a one-on-one discussion with one or more senior scientists at the meeting.
  2. Increase the productivity of women by potentially stimulating collaborative research with the Networking Target(s).
  3. Increase the networking skills of women, both those who apply for and win the awards and those who read the written reports of awardees on the FoVea website.
  4. Allow excellent female vision scientists who might not otherwise be able to attend the conference to afford it.
  5. Give awards that can be listed on female vision scientists’ CVs, thereby enhancing their professional profiles.

Application Instructions

Applicants are asked to email the following materials to Karen Schloss by February 20, 2017. All application-related emails should include “FoVea Award 2017” and the applicant’s name in the subject line. The CV, proposal, and letter of agreement from the Networking Target must be combined into a single PDF. The letter of recommendation should be sent in a separate email.

Application materials

  1. CV
  2. A proposal describing the applicant’s plan to network with at least one senior scientist during the VSS 2017 meeting (750-word limit). The plan should include an explanation of why the applicant chose the particular Networking Target(s), a plan for what topics she will discuss with her Networking Target(s) during the meeting, and a statement of how she hopes forging a relationship with the Networking Target(s) will help advance her research/career agenda.
  3. A letter of agreement from the senior scientist(s) named as the Networking Target(s). Networking Targets can be of any gender.
  4. A letter of recommendation from the applicant’s advisor, research supervisor, or department head. Please include the applicant’s name in the subject line of the submission email.

Awardees will agree to write a report on their networking methods and outcomes after the conference, by July 1, 2017. FoVea will post these reports on its website within 9 months of the conference.

Eligibility

Applicants must be female vision scientists who are graduate students, postdoctoral fellows, research scientists (non-tenure track), or junior faculty members (pre-tenure).

Review Process

Applications will be reviewed by a committee consisting of three members of the VSS community, with Karen Schloss as Chair. Awards will be announced in mid-March.

FoVea Committee: Diane Beck, Mary Peterson, Karen Schloss, and Allison Sekuler

2017 Public Lecture – Nancy Kanwisher

Nancy Kanwisher

MIT

Nancy Kanwisher received her B.S. and Ph.D. from MIT working with Molly Potter. After a postdoc as a MacArthur Fellow in Peace and International Security, and a second postdoc in the lab of Anne Treisman at UC Berkeley, she held faculty positions at UCLA and then Harvard, before returning to MIT in 1997, where she is now an Investigator at the McGovern Institute for Brain Research, a faculty member in the Department of Brain & Cognitive Sciences, and a member of the Center for Minds, Brains, and Machines. Kanwisher’s work uses brain imaging to discover the functional organization of the human brain as a window into the architecture of the mind. Kanwisher has received the Troland Award, the Golden Brain Award, and MIT’s MacVicar Faculty Fellow Award for teaching, and she is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. You can view her short lectures about human cognitive neuroscience for lay audiences here: http://nancysbraintalks.mit.edu

Functional Imaging of the Human Brain as a Window into the Mind

Saturday, May 20, 11:00 am – 12:00 pm, Museum of Fine Arts, St. Petersburg, Florida

Twenty-five years ago, with the invention of fMRI, it became possible to image neural activity in the normal human brain. This remarkable tool has given us a striking new picture of the human brain, in which many regions have been shown to carry out highly specific mental functions, like the perception of faces, speech sounds, and music, and even very abstract mental functions like understanding a sentence or thinking about another person’s thoughts. These discoveries show that human minds and brains are not single general-purpose devices, but are instead made up of numerous distinct processors, each carrying out different functions. I’ll discuss some of the evidence for highly specialized brain regions, and what we know about each. I’ll also consider the tantalizing unanswered questions we are trying to tackle now: What other specialized brain regions do we have? What are the connections between each of these specialized regions and the rest of the brain? How do these regions develop over infancy and childhood? How do these regions work together to produce uniquely human intelligence?

Attending the Public Lecture

The lecture is free to the public with admission to the museum. Museum members are free; Adults $17; Seniors 65 and older $15; Military with ID $15; College Students $10; Students 7-18 $10; Children 6 and under are free. VSS attendees will receive free admission to the Museum by showing their meeting badge.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

2017 Keynote – Katherine J. Kuchenbecker

Katherine J. Kuchenbecker

Director of the new Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany

Associate Professor (on leave), Mechanical Engineering and Applied Mechanics Department, University of Pennsylvania, Philadelphia, USA

Haptography: Capturing and Displaying Touch

Saturday, May 20, 2017, 7:15 pm, Talk Room 1-2

When you touch objects in your surroundings, you can discern each item’s physical properties from the rich array of haptic cues that you feel, including both the tactile sensations in your skin and the kinesthetic cues from your muscles and joints. Although physical interaction with the world is at the core of human experience, very few robotic and computer interfaces provide the user with high-fidelity touch feedback, limiting their intuitiveness. By way of two detailed examples, this talk will describe the approach of haptography, which uses biomimetic sensors and signal processing to capture tactile sensations, plus novel algorithms and actuation systems to display realistic touch cues to the user. First, we invented a novel way to map deformations and vibrations sensed by a robotic fingertip to the actuation of a fingertip tactile display in real time. We then demonstrated the striking utility of such cues in a simulated tissue palpation task through integration with a da Vinci surgical robot. Second, we created the world’s most realistic haptic virtual surfaces by recording and modeling what a user feels when touching real objects with an instrumented stylus. The perceptual effects of displaying the resulting data-driven friction forces, tapping transients, and texture vibrations were quantified by having users compare the original surfaces to their virtual versions. While much work remains to be done, we are starting to see the tantalizing potential of systems that leverage tactile cues to allow a user to interact with distant or virtual environments as though they were real and within reach.

Biography

Katherine J. Kuchenbecker is Director of the new Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She is currently on leave from her appointment as Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, where she held the Class of 1940 Bicentennial Endowed Term Chair and a secondary appointment in Computer and Information Science. Kuchenbecker earned a PhD (2006) in Mechanical Engineering at Stanford University and was a postdoctoral fellow at the Johns Hopkins University before joining the faculty at Penn in 2007. Her research centers on haptic interfaces, which enable a user to touch virtual and distant objects as though they were real and within reach, as well as haptic sensing systems, which allow robots to physically interact with and feel real objects. She delivered a widely viewed TEDYouth talk on haptics in 2012, and she has received several honors including a 2009 NSF CAREER Award, the 2012 IEEE Robotics and Automation Society Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, and many best paper and best demonstration awards.

How can you be so sure? Behavioral, computational, and neuroscientific perspectives on metacognition in perceptual decision-making

Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Megan Peters, University of California Los Angeles
Presenters: Megan Peters, Ariel Zylberberg, Michele Basso, Wei Ji Ma, Pascal Mamassian

Metacognition, or our ability to monitor the uncertainty of our thoughts, decisions, and perceptions, is of critical importance across many domains. Here we focus on metacognition in perceptual decisions — the continuous inferences that we make about the most likely state of the world based on incoming sensory information. How does a police officer evaluate the fidelity of his perception that a perpetrator has drawn a weapon? How does a driver compute her certainty in whether a fleeting visual percept is a child or a soccer ball, impacting her decision to swerve? These kinds of questions are central to daily life, yet how such ‘confidence’ is computed in the brain remains unknown. In recent years, increasingly keen interest has been directed towards exploring such metacognitive mechanisms from computational (e.g., Rahnev et al., 2011, Nat Neuro; Peters & Lau, 2015, eLife), neuroimaging (e.g., Fleming et al., 2010, Science), brain stimulation (e.g., Fetsch et al., 2014, Neuron), and neuronal electrophysiology (e.g., Kiani & Shadlen, 2009, Science; Zylberberg et al., 2016, eLife) perspectives. Importantly, the computation of confidence is also of increasing interest to the broader community of researchers studying the computations underlying perceptual decision-making in general. Our central focus is on how confidence is computed in neuronal populations, with attention to (a) whether perceptual decisions and metacognitive judgments depend on the same or different computations, and (b) why confidence judgments sometimes fail to optimally track the accuracy of perceptual decisions. Key themes for this symposium will include neural correlates of confidence, behavioral consequences of evidence manipulation on confidence judgments, and computational characterizations of the relationship between perceptual decisions and our confidence in them. Our principal goal is to attract scientists studying or interested in confidence/uncertainty, sensory metacognition, and perceptual decision-making from both human and animal perspectives, spanning from the computational to the neurobiological level. We bring together speakers from across these disciplines, from animal electrophysiology and behavior through computational models of human uncertainty, to communicate their most recent and exciting findings. Given the recency of many of the findings discussed, our symposium will cover terrain largely untouched by the main program. We hope that the breadth of research programs represented in this symposium will encourage a diverse group of scientists to attend and actively participate in the discussion.

Transcranial magnetic stimulation to visual cortex induces suboptimal introspection

Speaker: Megan Peters, University of California Los Angeles
Additional Authors: Megan Peters, University of California Los Angeles; Jeremy Fesi, The Graduate Center of the City University of New York; Namema Amendi, The Graduate Center of the City University of New York; Jeffrey D. Knotts, University of California Los Angeles; Hakwan Lau, UCLA

In neurological cases of blindsight, patients with damage to primary visual cortex can discriminate objects but report no visual experience of them. This form of ‘unconscious perception’ provides a powerful opportunity to study perceptual awareness, but because the disorder is rare, many researchers have sought to induce the effect in neurologically intact observers. One promising approach is to apply transcranial magnetic stimulation (TMS) to visual cortex to induce blindsight (Boyer et al., 2005), but this method has been criticized for being susceptible to criterion bias confounds: perhaps TMS merely reduces internal visual signal strength, and observers are unwilling to report that they faintly saw a stimulus even if they can still discriminate it (Lloyd et al., 2013). Here we applied a rigorous, response-bias-free 2-interval forced-choice method for rating subjective experience in studies of unconscious perception (Peters and Lau, 2015) to address this concern. We used Bayesian ideal observer analysis to demonstrate that observers’ introspective judgments about stimulus visibility are suboptimal even when the task does not require that they maintain a response criterion — unlike in visual masking. Specifically, observers appear metacognitively blind to the noise introduced by TMS, in a way that is akin to neurological cases of blindsight. These findings are consistent with the hypothesis that metacognitive judgments require observers to develop an internal model of the statistical properties of their own signal processing architecture, and that introspective suboptimality arises when that internal model abruptly becomes invalid due to external manipulations.

The influence of evidence volatility on choice, reaction time and confidence in a perceptual decision

Speaker: Ariel Zylberberg, Columbia University
Additional Authors: Ariel Zylberberg, Columbia University; Christopher R. Fetsch, Columbia University; Michael N. Shadlen, Columbia University

Many decisions are thought to arise via the accumulation of noisy evidence to a threshold or bound. In perceptual decision-making, the bounded evidence accumulation framework explains the effect of stimulus strength, characterized by signal-to-noise ratio, on decision speed, accuracy and confidence. This framework also makes intriguing predictions about the behavioral influence of the noise itself. An increase in noise should lead to faster decisions, reduced accuracy and, paradoxically, higher confidence. To test these predictions, we introduce a novel sensory manipulation that mimics the addition of unbiased noise to motion-selective regions of visual cortex. We verified the effect of this manipulation with neuronal recordings from macaque areas MT/MST. For both humans and monkeys, increasing the noise induced faster decisions and greater confidence over a range of stimuli for which accuracy was minimally impaired. The magnitude of the effects was in agreement with predictions of a bounded evidence accumulation model.
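
These qualitative predictions are easy to reproduce in simulation. The sketch below is a generic drift-diffusion race to symmetric bounds, not the authors' fitted model; the drift rate, bound, noise levels, and the confidence read-out (speed of bound crossing) are illustrative assumptions.

```python
import numpy as np

def simulate_race(drift, noise_sd, bound=1.0, dt=0.001, max_t=5.0,
                  n_trials=2000, seed=0):
    """Bounded drift-diffusion: evidence accumulates at rate `drift` with
    Gaussian noise until it hits +bound or -bound. Returns mean RT, accuracy,
    and a crude confidence proxy (bound / RT: faster crossings, higher confidence)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)                   # accumulated evidence
    rt = np.full(n_trials, max_t)
    correct = np.zeros(n_trials, dtype=bool)
    done = np.zeros(n_trials, dtype=bool)
    t = 0.0
    while t < max_t and not done.all():
        t += dt
        live = ~done
        x[live] += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(live.sum())
        hit = live & (np.abs(x) >= bound)
        rt[hit] = t
        correct[hit] = x[hit] > 0            # positive drift => +bound is correct
        done |= hit
    return rt.mean(), correct.mean(), (bound / rt).mean()

# Increasing the noise speeds decisions, lowers accuracy, and raises the
# confidence proxy: the qualitative pattern described in the abstract.
for noise in (0.5, 1.0, 1.5):                # illustrative noise levels
    rt, acc, conf = simulate_race(drift=0.5, noise_sd=noise)
    print(f"noise={noise:.1f}: mean RT={rt:.2f} s, accuracy={acc:.2f}, confidence~{conf:.2f}")
```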

A role for the superior colliculus in decision-making and confidence

Speaker: Michele Basso, University of California Los Angeles
Additional Authors: Michele Basso, University of California Los Angeles; Piercesare Grimaldi, University of California Los Angeles; Trinity Crapse, University of California Los Angeles

Evidence implicates the superior colliculus (SC) in attention and perceptual decision-making. In a simple target-selection task, we previously showed that discriminability between target and distractor neuronal activity in the SC correlated with decision accuracy, consistent with the hypothesis that SC encodes a decision variable. Here we extend these results to determine whether SC also correlates with decision criterion and confidence. Trained monkeys performed a simple perceptual decision task in two conditions to induce behavioral response bias (criterion shift): (1) the probability of two perceptual stimuli was equal, and (2) the probability of one perceptual stimulus was higher than the other. We observed consistent changes in behavioral response bias (shifts in decision criterion) that were directly correlated with SC neuronal activity. Furthermore, electrical stimulation of SC mimicked the effect of stimulus probability manipulations, demonstrating that SC correlates with and is causally involved in setting decision criteria. To assess confidence, monkeys were offered a ‘safe bet’ option on 50% of trials in a similar task. The ‘safe bet’ always yielded a small reward, encouraging monkeys to select the ‘safe bet’ when they were less confident rather than risk no reward for a wrong decision. Both monkeys showed metacognitive sensitivity: they chose the ‘safe bet’ more on more difficult trials. Single- and multi-neuron recordings from SC revealed two distinct neuronal populations: one that discharged more robustly for more confident trials, and one that did so for less confident trials. Together these findings show how SC encodes information about decisions and decisional confidence.

Testing the Bayesian confidence hypothesis

Speaker: Wei Ji Ma, New York University
Additional Authors: Wei Ji Ma, New York University; Will Adler, New York University; Ronald van den Berg, University of Uppsala

Asking subjects to rate their confidence is one of the oldest procedures in psychophysics. Remarkably, quantitative models of confidence ratings have been scarce. What could be called the “Bayesian confidence hypothesis” states that an observer’s confidence rating distribution is completely determined by posterior probability. This hypothesis predicts specific quantitative relationships between performance and confidence. It also predicts that stimulus combinations that produce the same posterior will also produce the same confidence distribution. We tested these predictions in three contexts: a) perceptual categorization; b) visual working memory; c) the interpretation of scientific data.
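
For a two-category Gaussian discrimination, the hypothesis has a simple closed form: the posterior is a logistic function of the internal measurement, and confidence is the posterior probability of the chosen category. The minimal Python sketch below (category means, noise levels, and measurements are all illustrative) also demonstrates the signature prediction that stimulus/noise combinations producing the same posterior produce the same confidence.

```python
import numpy as np

def posterior_right(x, mu=1.0, sigma=1.0):
    """Posterior probability that measurement x (Gaussian noise, s.d. sigma)
    came from the category with mean +mu rather than -mu, under equal priors.
    Bayes' rule reduces to a logistic function of x."""
    return 1.0 / (1.0 + np.exp(-2.0 * mu * x / sigma**2))

def bayesian_confidence(x, mu=1.0, sigma=1.0):
    """Bayesian confidence: posterior probability of the chosen category."""
    p = posterior_right(x, mu, sigma)
    return max(p, 1.0 - p)

# Same posterior => same confidence, even though stimulus and noise differ:
print(bayesian_confidence(x=1.0, mu=1.0, sigma=1.0))           # ~0.881
print(bayesian_confidence(x=2.0, mu=1.0, sigma=np.sqrt(2.0)))  # ~0.881 as well
```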

Integration of visual confidence over time and across stimulus dimensions

Speaker: Pascal Mamassian, Ecole Normale Supérieure
Additional Authors: Pascal Mamassian, Ecole Normale Supérieure; Vincent de Gardelle, Université Paris 1; Alan Lee, Lingnan University

Visual confidence refers to our ability to estimate our own performance in a visual decision task. Several studies have highlighted the relatively high efficiency of this meta-perceptual ability, at least for simple visual discrimination tasks. Are observers equally good when visual confidence spans more than one stimulus dimension or more than a single decision? To address these issues, we used the method of confidence forced-choice judgments, where participants are prompted to choose, between two alternatives, the stimulus for which they expect their performance to be better (Barthelmé & Mamassian, 2009, PLoS CB). In one experiment, we asked observers to make confidence choice judgments between two different tasks (an orientation-discrimination task and a spatial-frequency-discrimination task). We found that participants were equally good at making these across-dimensions confidence judgments as when choices were restricted to a single dimension, suggesting that visual confidence judgments share a common currency. In another experiment, we asked observers to make confidence-choice judgments between two ensembles of 2, 4, or 8 stimuli. We found that participants were increasingly good at making ensemble confidence judgments, suggesting that visual confidence judgments can accumulate information across several trials. Overall, these results help us better understand how visual confidence is computed and used over time and across stimulus dimensions.
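
The logic of the confidence forced-choice method can be captured in a toy signal-detection simulation: an observer bets on the interval whose internal evidence is stronger, and accuracy on chosen trials then exceeds accuracy on unchosen trials, with no report criterion involved. The sensitivities and the |evidence| confidence rule below are illustrative assumptions, not parameters from the studies described.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two independent discrimination judgments ("intervals"), one per task,
# with illustrative sensitivities d'1 and d'2; signal is always present.
d1, d2 = 1.0, 2.0
x1 = d1 / 2 + rng.standard_normal(n)    # internal evidence, interval 1
x2 = d2 / 2 + rng.standard_normal(n)    # internal evidence, interval 2
correct1, correct2 = x1 > 0, x2 > 0     # unbiased criterion at 0

# Confidence forced-choice: bet on the interval with the stronger evidence.
bet_on_2 = np.abs(x2) > np.abs(x1)
chosen = np.where(bet_on_2, correct2, correct1)
unchosen = np.where(bet_on_2, correct1, correct2)

print(f"accuracy on chosen trials:   {chosen.mean():.3f}")   # higher
print(f"accuracy on unchosen trials: {unchosen.mean():.3f}")  # lower
```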

Cutting across the top-down-bottom-up dichotomy in attentional capture research

Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): J. Eric T. Taylor, Brain and Mind Institute at Western University
Presenters: Nicholas Gaspelin, Matthew Hilchey, Dominique Lamy, Stefanie Becker, Andrew B. Leber

Research on attentional selection describes the various factors that determine what information is ignored and what information is processed. These factors are commonly described as either bottom-up or top-down, indicating whether stimulus properties or an observer’s goals determine the outcome of selection. Research on selection typically adheres strongly to one of these two perspectives; the field is divided. The aim of this symposium is to generate discussions and highlight new developments in the study of attentional selection that do not conform to the bifurcated approach that has characterized the field for some time (or trifurcated, with respect to recent models emphasizing the role of selection history). The research presented in this symposium does not presuppose that selection can be easily or meaningfully dichotomized. As such, the theme of the symposium is cutting across the top-down-bottom-up dichotomy in attentional selection research. To achieve this, presenters in this session either share data that cannot be easily explained within the top-down or bottom-up framework, or they propose alternative models of existing descriptions of sources of attentional control. Theoretically, the symposium will begin with presentations that attempt to resolve the dichotomy with a new role for suppression (Gaspelin & Luck) or further bemuse the dichotomy with typically bottom-up patterns of behaviour in response to intransient stimuli (Hilchey, Taylor, & Pratt). The discussion then turns to demonstrations that the bottom-up, top-down, and selection history sources of control variously operate on different perceptual and attentional processes (Lamy & Zivony; Becker & Martin), complicating our categorization of sources of control. Finally, the session will conclude with an argument for more thorough descriptions of sources of control (Leber & Irons). In summary, these researchers will present cutting-edge developments using converging methodologies (chronometry, EEG, and eye-tracking measures) that further our understanding of attentional selection and advance attentional capture research beyond its current dichotomy. Given the heated history of this debate and the importance of the theoretical question, we expect that this symposium should be of interest to a wide audience of researchers at VSS, especially those interested in visual attention and cognitive control.

Mechanisms Underlying Suppression of Attentional Capture by Salient Stimuli

Speaker: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis
Additional Authors: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis; Carly J. Leonard, Center for Mind and Brain at the University of California, Davis; Steven J. Luck, Center for Mind and Brain at the University of California, Davis

Researchers have long debated the nature of cognitive control in vision, with the field being dominated by two theoretical camps. Stimulus-driven theories claim that visual attention is automatically captured by salient stimuli, whereas goal-driven theories argue that capture depends critically on the goals of a viewer. To resolve this debate, we have previously provided key evidence for a new hybrid model called the signal suppression hypothesis. According to this account, all salient stimuli generate an active salience signal that automatically attempts to guide visual attention. However, this signal can be actively suppressed. In the current talk, we review the converging evidence for this active suppression of salient items, using behavioral, eye tracking and electrophysiological methods. We will also discuss the cognitive mechanisms underlying suppression effects and directions for future research.

Beyond the new-event paradigm in visual attention research: Can completely static stimuli capture attention?

Speaker: Matthew Hilchey, University of Toronto
Additional Authors: Matthew D. Hilchey, University of Toronto, J. Eric T. Taylor, Brain and Mind Institute at Western University; Jay Pratt, University of Toronto

The last several decades of attention research have focused almost exclusively on paradigms that introduce new perceptual objects or salient sensory changes to the visual environment in order to determine how attention is captured to those locations. There are a handful of exceptions, and in the spirit of those studies, we asked whether or not a completely unchanging stimulus can attract attention, using variations of classic additional singleton and cueing paradigms. In the additional singleton tasks, we presented a preview array of six uniform circles. After a short delay, one circle changed in form and luminance – the target location – and all but one location changed luminance, leaving the sixth location physically unchanged. The results indicated that attention was attracted toward the vicinity of the only unchanging stimulus, regardless of whether all circles around it increased or decreased luminance. In the cueing tasks, cueing was achieved by changing the luminance of 5 circles in the object preview array either 150 or 1000 ms before the onset of a target. Under certain conditions, we observed canonical patterns of facilitation and inhibition emerging from the location containing the physically unchanging cue stimuli. Taken together, the findings suggest that a completely unchanging stimulus, which bears no obvious resemblance to the target, can attract attention in certain situations.

Stimulus salience, current goals and selection history do not affect the same perceptual processes

Speaker: Dominique Lamy, Tel Aviv University
Additional Authors: Dominique Lamy, Tel Aviv University; Alon Zivony, Tel Aviv University

When exposed to a visual scene, our perceptual system performs several successive processes. During the preattentive stage, the attentional priority accruing to each location is computed. Then, attention is shifted towards the highest-priority location. Finally, the visual properties at that location are processed. Although most attention models posit that stimulus-driven and goal-directed processes combine to determine attentional priority, demonstrations of purely stimulus-driven capture are surprisingly rare. In addition, the consequences of stimulus-driven and goal-directed capture on perceptual processing have not been fully described. Specifically, whether attention can be disengaged from a distractor before its properties have been processed is unclear. Finally, the strict dichotomy between bottom-up and top-down attentional control has been challenged based on the claim that selection history also biases attentional weights on the priority map. Our objective was to clarify what perceptual processes stimulus salience, current goals and selection history affect. We used a feature-search spatial-cueing paradigm. We showed that (a) unlike stimulus salience and current goals, selection history does not modulate attentional priority, but only perceptual processes following attentional selection; (b) a salient distractor not matching search goals may capture attention but attention can be disengaged from this distractor’s location before its properties are fully processed; and (c) attentional capture by a distractor sharing the target feature entails that this distractor’s properties are mandatorily processed.

Which features guide visual attention, and how do they do it?

Speaker: Stefanie Becker, The University of Queensland
Additional Authors: Stefanie Becker, The University of Queensland; Aimee Martin, The University of Queensland

Previous studies purport to show that salient irrelevant items can attract attention involuntarily, against the intentions and goals of an observer. However, corresponding evidence originates predominantly from RT and eye movement studies, whereas EEG studies largely failed to support saliency capture. In the present study, we examined effects of salient colour distractors on search for a known colour target when the distractor was similar vs. dissimilar to the target. We used both eye tracking and EEG (in separate experiments), and also investigated participants’ awareness of the features of irrelevant distractors. The results showed that capture by irrelevant distractors was strongly top-down modulated, with target-similar distractors attracting attention much more strongly, and being remembered better, than salient distractors. Awareness of the distractor correlated more strongly with initial capture than with attentional dwelling on the distractor after it was selected. The salient distractor enjoyed no noticeable advantage over non-salient control distractors with regard to implicit measures, but was overall reported with higher accuracy than non-salient distractors. This raises the interesting possibility that salient items may primarily boost visual processes directly, by requiring less attention for accurate perception, not by summoning spatial attention.

Toward a profile of goal-directed attentional control

Speaker: Andrew B. Leber, The Ohio State University
Additional Authors: Andrew B. Leber, The Ohio State University; Jessica L. Irons, The Ohio State University

Recent criticism of the classic bottom-up/top-down dichotomy of attention has deservedly focused on the existence of experience-driven factors outside this dichotomy. However, as researchers seek a better framework characterizing all control sources, a thorough re-evaluation of the top-down, or goal-directed, component is imperative. Studies of this component have richly documented the ways in which goals strategically modulate attentional control, but surprisingly little is known about how individuals arrive at their chosen strategies. Consider that manipulating goal-directed control commonly relies on experimenter instruction, which lacks ecological validity and may not always be complied with. To better characterize the factors governing goal-directed control, we recently created the adaptive choice visual search paradigm. Here, observers can freely choose between two targets on each trial, while we cyclically vary the relative efficacy of searching for each target. That is, on some trials it is faster to search for a red target than a blue target, while on other trials the opposite is true. Results using this paradigm have shown that choice behavior is far from optimal, and appears largely determined by competing drives to maximize performance and minimize effort. Further, individual differences in performance are stable across sessions while also being malleable to experimental manipulations emphasizing one competing drive (e.g., reward, which motivates individuals to maximize performance). This research represents an initial step toward characterizing an individual profile of goal-directed control that extends beyond the classic understanding of “top-down” attention and promises to contribute to a more accurate framework of attentional control.

2017 Symposia

S1 – A scene is more than the sum of its objects: The mechanisms of object-object and object-scene integration

Organizer(s): Liad Mudrik, Tel Aviv University and Melissa Võ, Goethe University Frankfurt
Time/Room: Friday, May 19, 2017, 12:00 – 2:00 pm, Talk Room 1

Our visual world is much more complex than most laboratory experiments make us believe. Nevertheless, this complexity turns out not to be a drawback, but actually a feature, because complex real-world scenes have defined spatial and semantic properties which allow us to efficiently perceive and interact with our environment. In this symposium we will present recent advances in assessing how scene-object and object-object relations influence processing, while discussing the necessary conditions for deciphering such relations. By considering the complexity of real-world scenes as information that can be exploited, we can develop new approaches for examining real-world scene perception.

S2 – The Brain Correlates of Perception and Action: from Neural Activity to Behavior

Organizer(s): Simona Monaco, Center for Mind/Brain Sciences, University of Trento & Annalisa Bosco, Dept of Pharmacy and Biotech, University of Bologna
Time/Room: Friday, May 19, 2017, 12:00 – 2:00 pm, Pavilion

This symposium offers a comprehensive view of the cortical and subcortical structures involved in perceptual-motor integration for eye and hand movements in contexts that resemble real life situations. By gathering scientists from neurophysiology to neuroimaging and psychophysics we provide an understanding of how vision is used to guide action from the neuronal level to behavior. This knowledge pushes our understanding of visually-guided motor control outside the constraints of the laboratory and into contexts that we daily encounter in the real world.

S3 – How can you be so sure? Behavioral, computational, and neuroscientific perspectives on metacognition in perceptual decision-making

Organizer(s): Megan Peters, University of California Los Angeles
Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Talk Room 1

Evaluating our certainty in a memory, thought, or perception seems as easy as answering the question, “Are you sure?” But how our brains make these determinations remains unknown. Specifically, does the brain use the same information to answer the questions, “What do you see?” and, “Are you sure?” What brain areas are responsible for doing these calculations, and what rules are used in the process? Why are we sometimes bad at judging the quality of our memories, thoughts, or perceptions? These are the questions we will try to answer in this symposium.

S4 – The Role of Ensemble Statistics in the Visual Periphery

Organizer(s): Brian Odegaard, University of California-Los Angeles
Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Pavilion

The past decades have seen the growth of a tremendous amount of research into the human visual system’s capacity to encode “summary statistics” of items in the world. One recent proposal in the literature has focused on the promise of ensemble statistics to provide an explanatory account of subjective experience in the visual periphery (Cohen, Dennett, & Kanwisher, Trends in Cognitive Sciences, 2016). This symposium will address how ensemble statistics are encoded outside the fovea, and to what extent this capacity explains our experience of the majority of our visual field.

S5 – Cutting across the top-down-bottom-up dichotomy in attentional capture research

Organizer(s): J. Eric T. Taylor, Brain and Mind Institute at Western University
Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Talk Room 1

Research on attentional selection describes the various factors that determine what information is ignored and what information is processed. Broadly speaking, researchers have adopted two explanations for how this occurs, which emphasize either automatic or controlled processing, often presenting evidence that is mutually contradictory. This symposium presents new evidence from five speakers that address this controversy from non-dichotomous perspectives.

S6 – Virtual Reality and Vision Science

Organizer(s): Bas Rokers, University of Wisconsin – Madison & Karen B. Schloss, University of Wisconsin – Madison
Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Pavilion

Virtual and augmented reality (VR/AR) research can answer scientific questions that were previously difficult or impossible to address. VR/AR may also provide novel methods to assist those with visual deficits and treat visual disorders. After a brief introduction by the organizers (Bas Rokers & Karen Schloss), 5 speakers representing both academia and industry will each give a 20-minute talk, providing an overview of existing research and identifying promising new directions. The session will close with a 15-minute panel to deepen the dialog between industry and vision science. Topics include sensory integration, perception in naturalistic environments, and mixed reality. Symposium attendees may learn how to incorporate AR/VR into their research, identify current issues of interest to both academia and industry, and consider avenues of inquiry that may open with upcoming technological advances.

2017 Meet the Professors

Monday, May 22, 2017, 4:45 – 6:00 pm, Breck Deck North

Online registration for Meet the Professors is closed. There are still a few spaces available. Please meet at Breck Deck North at 4:30 pm if you are interested in attending.

Students and postdocs are invited to the second annual “Meet the Professors” event, Monday afternoon from 4:45 to 6:00 pm, immediately preceding the VSS Dinner and Demo Night. This is an opportunity for a free-wheeling, open-ended discussion with members of the VSS Board and other professors. You might chat about science, the annual meeting, building a career, or whatever comes up.

This year, the event will consist of two 30-minute sessions separated by a 15-minute snack break. Please select a different professor for each session. Space is limited and is assigned on a first-come, first-served basis.

Professors and VSS Board Members

Members of the VSS Board are indicated with an asterisk*, in case you have a specific interest in talking to a member of the board.

David Brainard* (University of Pennsylvania) studies human color vision, with particular interests in the consequences of spatial and spectral sampling by the photoreceptors and in the mechanisms mediating color constancy.

Eli Brenner* (Free University, Amsterdam) studies how visual information is used to guide our actions.

Marisa Carrasco (NYU) uses human psychophysics, neuroimaging, and computational modeling to investigate the relation between the psychological and neural mechanisms involved in visual perception and attention.

Isabel Gauthier (Vanderbilt University) uses behavioral and brain imaging methods to study perceptual expertise, object and face recognition, and individual differences in vision.

Julie Harris (St. Andrews) studies our perception of the 3D world, including binocular vision and 3D motion.  She also has an interest in animal camouflage.

Sheng He (University of Minnesota & Institute of Biophysics, CAS) uses psychophysical and neuroimaging (fMRI, EEG, MEG) methods to study spatiotemporal properties of vision, binocular interaction, visual attention, visual object recognition, and visual awareness.

Michael Herzog (EPFL – Switzerland) studies spatial and temporal vision in healthy and clinical populations.

Todd Horowitz (National Cancer Institute) is broadly interested in how vision science can be leveraged to reduce the burden of cancer, from  improving detection and diagnosis to understanding the cognitive complaints of cancer survivors.

Lynne Kiorpes* (NYU) uses behavioral and neurophysiological approaches to study visual development and visual disability. The goal is to understand the neural limitations on development and the effects of abnormal visual experience.

Dennis Levi (UC Berkeley) studies plasticity both in normal vision, and in humans deprived of normal binocular visual experience, using psychophysics and neuroimaging.

Ennio Mingolla (Northeastern) develops and tests neural network models of visual perception, notably the segmentation, grouping, and contour formation processes of early and middle vision in primates, and works on the transition of these models to technological applications.

Concetta Morrone (University of Pisa) studies the visual system in man and infants using psychophysical, electrophysiological, brain imaging and computational techniques. More recent research interests include vision during eye movements, the perception of time, and plasticity of the adult visual brain.

Tony Norcia* (Stanford University) studies the intricacies of visual development, partly to better understand visual functioning in the adult and abnormal visual processing.

Aude Oliva (MIT) studies human vision and memory, using methods from human perception and cognition, computer science, and human neuroscience (fMRI, MEG).

Mary Peterson (University of Arizona) uses behavioral methods, neuropsychology, ERPs, and fMRI to investigate the competitive processes producing object perception and the interactions between perception and memory.

Jeff Schall* (Vanderbilt University) studies the neural and computational mechanisms that guide, control and monitor visually-guided gaze behavior.

James Tanaka (University of Victoria) studies the cognitive and neural processes of face recognition and object expertise.  He is interested in the perceptual strategies of real world experts, individuals on the autism spectrum and how a perceptual novice becomes an expert.

Preeti Verghese* (Smith-Kettlewell Eye Research Institute) studies spatial vision, visual search and attention, as well as eye and hand movements in normal vision and in individuals with central field loss.

Andrew Watson* (Apple) studies human spatial, temporal and motion processing, computational modeling of vision, and applications of vision science to imaging technology.

Jeremy Wolfe* (Harvard Med & Brigham and Women’s Hospital) studies visual attention and visual search with a special interest in socially important tasks like cancer screening in radiology.

2017 Satellite Events

Wednesday, May 17

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 17 – Friday, May 19, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
9:00 am – 12:00 pm, Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zyg Pizlo, Purdue University; Anne Sereno, U. Texas Health Science Center at Houston; Qasim Zaidi, SUNY College of Optometry

The 6th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the VSS conference venue (the Tradewinds Island Resorts in St. Pete Beach, FL) May 17 – May 19. A keynote address will be given by Aude Oliva (MIT).

The early registration fee is $80 for regular participants, $40 for students. More information can be found on the workshop’s website: http://www.conf.purdue.edu/modvis/

Thursday, May 18

Implicit Guidance of Attention: Developing theoretical models

Thursday, May 18, 9:00 am – 6:00 pm, Jasmine/Palm

Organizers: Rebecca Todd, University of British Columbia and Leonardo Chelazzi, University of Verona

Speakers: Leo Chelazzi, Jane Raymond, Rebecca Todd, Andreas Keil, Clayton Hickey, Sarah Shomstein, Ayelet Landau, Brian Anderson, Jan Theeuwes

Visual selective attention is the process by which we tune ourselves to the world so that, of the millions of bits per second transmitted by the retina, the information that is most important to us reaches awareness and guides action. Recently, new areas of attention research have emerged, making sharp divisions between top-down volitional attention and bottom-up automatic capture by visual features much less clear than previously believed. Challenges to this intuitively appealing dichotomy have arisen as researchers have identified factors that guide attention non-strategically and often implicitly (a quality of bottom-up processes) but also rely on prior knowledge or experience (a quality of top-down systems). As a result, a number of researchers have been developing new theoretical frameworks that move beyond the classic attentional dichotomy. This roundtable discussion will bring together researchers from often-siloed investigative tracks who have been investigating effects of reward, emotion, semantic associations, and statistical learning on attentional guidance, as well as underlying neurocognitive mechanisms. The goal of this roundtable is to discuss these emerging frameworks and outstanding questions that arise from considering a broader range of research findings.

Friday, May 19

In the Fondest Memory of Bosco Tjan (Memorial Symposium)

Friday, May 19, 9:00 – 11:30 am, Talk Room 1-2

Organizers: Zhong-lin Lu, The Ohio State University and Susana Chung, University of California, Berkeley

Speakers: Zhong-lin Lu, Gordon Legge, Irving Biederman, Anirvan Nandy, Rachel Millin, Zili Liu, and Susana Chung

Professor Bosco S. Tjan was murdered at the pinnacle of a flourishing academic career on December 2, 2016. The vision science and cognitive neuroscience community lost a brilliant scientist and incisive commentator. This symposium will honor Bosco’s life and career, and his contributions to vision science and cognitive neuroscience.

View Symposium Talks

Bruce Bridgeman Memorial Symposium

Friday, May 19, 9:00 – 11:30 am, Pavilion

Organizer: Susana Martinez-Conde, State University of New York

Speakers: Stephen L. Macknik, Stanley A. Klein, Susana Martinez-Conde, Paul Dassonville, Cathy Reed, and Laura Thomas

Professor Emeritus of Psychology Bruce Bridgeman was tragically killed on July 10, 2016, after being struck by a bus in Taipei, Taiwan. Those who knew Bruce will remember him for his sharp intellect, genuine sense of humor, intellectual curiosity, thoughtful mentorship, gentle personality, musical talent, and committed peace, social justice, and environmental activism. This symposium will highlight some of Bruce’s many important contributions to perception and cognition, which included spatial vision, perception/action interactions, and the functions and neural basis of consciousness.

View Symposium Talks

Saturday, May 20

How Immersive Eye Tracking Tools and VR Analytics Will Impact Vision Science Research

Saturday, May 20, 12:30 – 2:00 pm, Jasmine/Palm

Organizers: Courtney Gray, SensoMotoric Instruments, Inc. and Annett Schilling, SensoMotoric Instruments GmbH

Speakers: Stephen Macknik, SUNY Downstate Medical Center; Gabriel Diaz, Rochester Institute of Tech; Mary Hayhoe, University of Texas

This event covers the implications of new immersive HMD technologies and dedicated VR analysis solutions for vision science research. Researchers share their experiences and discuss how they believe VR eye tracking headsets and the ability to analyze data from immersive scenarios will positively impact visual cognition and scene perception research.

FoVea (Females of Vision et al) Workshop and Lunch

Saturday, May 20, 12:30 – 2:30 pm, Horizons

Organizers: Diane Beck, University of Illinois; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, McMaster University

Panelists: Marisa Carrasco, New York University and Allison Sekuler, McMaster University

FoVea is a group founded to advance the visibility, impact, and success of women in vision science. To that end, we plan to host a series of professional issues workshops during lunchtime at VSS. We encourage vision scientists of all genders to participate in the workshops.

The topic of the 2017 workshop is Negotiation: When To Do It and How To Do It Successfully. Two panelists will each give a presentation, and then will take questions and comments from the audience. The remainder of the workshop time will be spent networking with other attendees. The panelists are:

  • Marisa Carrasco, Professor of Psychology and Neural Science at New York University who served as the Chair of the Psychology Department for 6 years.
  • Allison Sekuler, Professor of Psychology, Neuroscience & Behaviour and Strategic Advisor to the President and VPs on Academic Issues, McMaster University; past Canada Research Chair in Cognitive Neuroscience (2001-2011), Associate VP & Dean, School of Graduate Studies (2008-2016), and interim VP Research (2015-2016).

A buffet lunch will be available. Registration is required so the appropriate amount of food can be on hand.

Sunday, May 21

Social Hour for Faculty at Primarily Undergraduate Institutions (PUIs)

Sunday, May 21, 12:30 – 2:00 pm, Royal Tern

Organizers: Eriko Self, California State University, Fullerton; Cathy Reed, Claremont McKenna College; and Nestor Matthews, Denison University

Do you work at a primarily undergraduate institution (PUI)? Do you have to carve out precious time for research and student mentoring amid a heavy teaching load? If so, bring your lunch, or just bring yourself, to the PUI social and get to know other faculty at PUIs! It will be a great opportunity to share your ideas and concerns.

Vanderbilt-Rochester Vision Centers Party

Sunday, May 21, 7:30 – 10:00 pm, Beachside Sun Decks

Organizers: Geoffrey Woodman, Vanderbilt University and Duje Tadin, University of Rochester

This event brings back the Vanderbilt-Rochester Party that began at the first VSS meetings. This social event will feature free drinks and snacks for all VSS attendees. It will provide attendees with the opportunity to socialize with members of the University of Rochester’s Center for Visual Science and the Vanderbilt Vision Research Center in attendance at VSS. This is a good opportunity to talk to potential mentors for graduate or postdoctoral training in vision science.

Monday, May 22

Applicational needs reinvent scientific views

Monday, May 22, 2:00 – 3:00 pm, Jasmine/Palm

Organizers: Katharina Rifai, Iliya V. Ivanov, and Siegfried Wahl, Institute of Ophthalmic Research, University of Tuebingen

Speakers: Eli Peli, Schepens Eye Research Institute; Peter Bex, Northeastern University; Susana Chung, UC Berkeley; Markus Lappe, University of Münster; Michele Rucci, Boston University; Jeff Mulligan, NASA Ames Research Center; Arijit Chakraborty, School of Optometry and Vision Science, University of Waterloo; Ian Erkelens, School of Optometry and Vision Science, University of Waterloo; Kevin MacKenzie, York University and Oculus VR, LCC

Applicational needs have often reinvented views on scientific problems and thus triggered breakthroughs in models and methods. A recent example is augmented/virtual reality, which challenges the visual system with reduced or enriched content and thus triggers scientific questions about the visual system’s robustness.

Nonetheless, the driving character of applications has not received focused attention within VSS research until now. We therefore intend to bring together bright minds in a satellite event at VSS 2017 that promotes the scientific progress created by applicational needs.

Tutorial in Bayesian modeling

Monday, May 22, 2:00 – 4:30 pm, Sabal/Sawgrass

Organizer: Wei Ji Ma, New York University

Bayesian models are widespread in vision science. However, their inner workings are often obscure or intimidating to those without a background in modeling. This tutorial, which does not assume any background knowledge, will start by motivating Bayesian models through visual illusions. Then, you as participants will collectively choose a concrete experimental design to build a model for. We will develop the math of the Bayesian model of that task, and implement it in Matlab. You will take home complete code for a Bayesian model. Please bring pen, paper, and if possible, a laptop with Matlab.
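
For a taste of what such a model looks like (the tutorial itself builds one in Matlab), here is a minimal Python sketch of the Gaussian prior × Gaussian likelihood computation behind many Bayesian accounts of visual illusions, such as the apparent slowing of low-contrast motion under a slow-speed prior; all numbers are illustrative.

```python
import numpy as np

def gaussian_posterior(prior_mean, prior_sd, meas, meas_sd):
    """Conjugate update: Gaussian prior x Gaussian likelihood -> Gaussian
    posterior, whose mean is a reliability-weighted average of the two."""
    w = prior_sd**2 / (prior_sd**2 + meas_sd**2)      # weight on the measurement
    post_mean = w * meas + (1.0 - w) * prior_mean
    post_sd = np.sqrt(prior_sd**2 * meas_sd**2 / (prior_sd**2 + meas_sd**2))
    return post_mean, post_sd

# Slow-speed prior centered at 0: the noisier the measurement (e.g., at low
# contrast), the more the percept is pulled toward the prior -- an "illusion"
# that falls straight out of Bayes' rule.
for meas_sd in (0.5, 2.0):
    mean, sd = gaussian_posterior(prior_mean=0.0, prior_sd=1.0, meas=3.0, meas_sd=meas_sd)
    print(f"measurement noise {meas_sd}: perceived speed ~ {mean:.2f} (posterior sd {sd:.2f})")
```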

The tutorial is limited to the first 50 people (first-come, first-served).

The Experiential Learning Laboratory

Monday, May 22, 2:15 – 3:15 pm, Citrus/Glades

Organizers: Ken Nakayama, Na Li, and Jeremy Wilmer; Harvard University and Wellesley College

Psychology is one of the most popular subjects at the undergraduate level, with some of the highest enrollments. Psychology is also a science. Yet the undergraduate population’s exposure to the actual “hands-on” practice of doing such science is limited. It is rare in an undergraduate curriculum to see the kind of undergraduate laboratories that have been a longstanding tradition in the natural sciences and engineering. It is our premise that well-conceived laboratory experiences for Psychology students have the potential to bring important STEM practices and values to Psychology. This could increase the number of students who will have the sophistication to understand science at a deeper level, who will have the ability to create new knowledge through empirical investigation, and who will develop the critical skills to evaluate scientific studies and claims. Critically important here is to engage students more fully by encouraging student-initiated projects and to use this opportunity for them to gain mastery. TELLab’s ease of use and its ability to let students create their own experiments distinguish it from other currently available systems. We invite teachers to try our system for their classes.

Tuesday, May 23

WorldViz VR Workshop

Tuesday, May 23, 1:00 – 2:30 pm, Sabal/Sawgrass

Organizer: Matthias Pusch, WorldViz

Virtual Reality is getting a lot of attention and press lately, but ‘hands-on’ experiences with real use cases for this new technology are rare. This session will show what WorldViz has found to work for collaborative VR, and we will set up and try out an interactive VR experience together with the audience.

Wednesday, May 24

Honoring Al Ahumada – Al-apalooza! Talks

Wednesday, May 24, 3:00 – 5:00 pm, Horizons

Organizers: Jeff Mulligan, NASA Ames Research Center and Beau Watson, Apple

A celebration of the life, work, and play of Albert Jil Ahumada, Jr.: a whimsical exploration of network learning for spatial and color vision, noise methods, models of photoreceptor positioning, and more. An afternoon session of informal talks will be open to all free of charge, followed by an evening banquet (payment required).

Full details will be posted as they are available at http://visionscience.com/alapalooza/.

Honoring Al Ahumada – Al-apalooza! Dinner

Wednesday, May 24, 7:00 – 10:00 pm, Beachside Sun Decks

Organizers: Jeff Mulligan, NASA Ames Research Center and Beau Watson, Apple

Full details will be posted as they are available at http://visionscience.com/alapalooza/.
