FoVea Travel and Networking Award


Females of Vision et al. (FoVea) is excited to announce its inaugural round of the FoVea Travel and Networking Award, funded by the National Science Foundation. Submissions are due on February 20, 2017.

The FoVea Travel and Networking Award is open to female members of the Vision Science Society (VSS) in pre-doctoral, post-doctoral, and pre-tenure faculty or research scientist positions. Up to 5 female vision scientists will be awarded $1,600 to cover costs involved in attending the 2017 VSS meeting, including membership fees, conference registration fees, and travel expenses.

FoVea created this award as part of its mission to advance the visibility, impact, and success of women in vision science. A recent report from Cooper and Radonjić (2016) indicated that in 2015, the ratio of women to men in VSS was near equal at the pre-doctoral level (1:1.13), but decreased as career stage increased. The decline is symptomatic of forces that impede the professional development of female vision scientists. A key aspect of professional development is building a professional network to support scientific pursuits and to provide mentorship at critical junctures in one’s academic career. The FoVea Travel and Networking Award will help female vision scientists build their professional networks by encouraging them to meet with at least one Networking Target at the VSS meeting to discuss their research and consider potential for collaboration. The Networking Target(s) can be of any gender.

The goals of the FoVea Travel and Networking award are to:

  1. Increase the visibility of women by giving them the opportunity to meet and have one-on-one discussions with senior scientists at the meeting.
  2. Increase the productivity of women by potentially stimulating collaborative research with the Networking Target(s).
  3. Increase the networking skills of women, both those who apply for and win the awards and those who read the awardees’ reports on the FoVea website.
  4. Enable excellent female vision scientists who might not otherwise be able to afford the conference to attend.
  5. Provide awards that can be listed on female vision scientists’ CVs, thereby enhancing their professional profiles.

Application Instructions

Applicants are asked to email the following materials to Karen Schloss by February 20, 2017. All application-related emails should include “FoVea Award 2017” and the applicant’s name in the subject line. The CV, proposal, and letter of agreement from the Networking Target must be combined into a single PDF. The letter of recommendation should be sent in a separate email.

Application materials

  1. CV
  2. A proposal describing the applicant’s plan to network with at least one senior scientist during the VSS 2017 meeting (750 word limit). The plan should explain why the applicant chose the particular Networking Target(s), outline the topics she will discuss with her Networking Target(s) during the meeting, and state how she hopes forging a relationship with the Networking Target(s) will advance her research and career agenda.
  3. A letter of agreement from the senior scientist(s) named as the Networking Target(s). Networking Targets can be of any gender.
  4. A letter of recommendation from the applicant’s advisor, research supervisor, or department head. Please include the applicant’s name in the subject line of the submission email.

Awardees will agree to write a report on their networking methods and outcomes after the conference, due by July 1, 2017. FoVea will post these reports on its website within 9 months of the conference.


Eligibility

Applicants must be female vision scientists who are graduate students, postdoctoral fellows, research scientists (non-tenure track), or junior faculty members (pre-tenure).

Review Process

Applications will be reviewed by a committee consisting of three members of the VSS community, with Karen Schloss as Chair. Awards will be announced in mid-March.

FoVea Committee: Diane Beck, Mary Peterson, Karen Schloss, and Allison Sekuler


Functional Brain Imaging in Development and Disorder

Tuesday, May 9, 1:00 – 2:30 pm at ARVO 2017, Baltimore, Maryland
Presenters: Geoffrey K. Aguirre, Jan Atkinson, Tessa M. Dekker, Deborah Giaschi

This symposium will feature four talks that apply functional brain imaging to the study of both visual development and visual disorders. Functional brain imaging, primarily fMRI, enables non-invasive and quantitative assessment of neural function in the human brain. The four talks in the symposium will cover topics that include the reorganization of visual cortex in blindness, studies of cortical response in children with amblyopia, the normal development of population receptive fields in visual cortex, and the effect of early cortical damage on visual development.

Post-retinal structure and function in human blindness

Speaker: Geoffrey K. Aguirre, Department of Neurology, University of Pennsylvania

Neuroimaging the typical and atypical developing visual brain: dorsal vulnerability and cerebral visual impairment

Speaker: Professor Jan Atkinson, PhD, FMedSci, FBA, Member of Academia Europaea; Emeritus Professor of Psychology and Developmental Cognitive Neuroscience, University College London; Visiting Professor, University of Oxford

Development of retinotopic representations in visual cortex during childhood

Speaker: Tessa M. Dekker, Division of Psychology and Language Sciences & Institute of Ophthalmology, University College London

Neural correlates of motion perception deficits in amblyopia

Speaker: Deborah Giaschi, Department of Ophthalmology and Visual Science, University of British Columbia

2017 Public Lecture – Nancy Kanwisher



Nancy Kanwisher received her B.S. and Ph.D. from MIT, working with Molly Potter. After a postdoc as a MacArthur Fellow in Peace and International Security, and a second postdoc in the lab of Anne Treisman at UC Berkeley, she held faculty positions at UCLA and then Harvard before returning to MIT in 1997, where she is now an Investigator at the McGovern Institute for Brain Research, a faculty member in the Department of Brain & Cognitive Sciences, and a member of the Center for Minds, Brains, and Machines. Kanwisher’s work uses brain imaging to discover the functional organization of the human brain as a window into the architecture of the mind. Kanwisher has received the Troland Award, the Golden Brain Award, and a MacVicar Faculty Fellow award for teaching from MIT, and she is a member of the National Academy of Sciences and the American Academy of Arts and Sciences. She has also recorded a series of short lectures about human cognitive neuroscience for lay audiences.

Functional Imaging of the Human Brain as a Window into the Mind

Saturday, May 20, 11:00 am – 12:00 pm, Museum of Fine Arts, St. Petersburg, Florida

Twenty-five years ago, with the invention of fMRI, it became possible to image neural activity in the normal human brain. This remarkable tool has given us a striking new picture of the human brain, in which many regions have been shown to carry out highly specific mental functions, like the perception of faces, speech sounds, and music, and even very abstract mental functions like understanding a sentence or thinking about another person’s thoughts. These discoveries show that human minds and brains are not single general-purpose devices, but are instead made up of numerous distinct processors, each carrying out different functions. I’ll discuss some of the evidence for highly specialized brain regions, and what we know about each. I’ll also consider the tantalizing unanswered questions we are trying to tackle now: What other specialized brain regions do we have? What are the connections between each of these specialized regions and the rest of the brain? How do these regions develop over infancy and childhood? How do these regions work together to produce uniquely human intelligence?

Attending the Public Lecture

The lecture is free to the public with admission to the museum. Museum members are admitted free; Adults $17; Seniors 65 and older $15; Military with ID $15; College Students $10; Students 7–18 $10; Children 6 and under are free. VSS attendees receive free admission to the Museum by showing their meeting badge.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

2017 Keynote – Katherine J. Kuchenbecker

Katherine J. Kuchenbecker

Director of the new Haptic Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany

Associate Professor (on leave), Mechanical Engineering and Applied Mechanics Department, University of Pennsylvania, Philadelphia, USA

Haptography: Capturing and Displaying Touch

Saturday, May 20, 2017, 7:15 pm, Talk Room 1-2

When you touch objects in your surroundings, you can discern each item’s physical properties from the rich array of haptic cues that you feel, including both the tactile sensations in your skin and the kinesthetic cues from your muscles and joints. Although physical interaction with the world is at the core of human experience, very few robotic and computer interfaces provide the user with high-fidelity touch feedback, limiting their intuitiveness. By way of two detailed examples, this talk will describe the approach of haptography, which uses biomimetic sensors and signal processing to capture tactile sensations, plus novel algorithms and actuation systems to display realistic touch cues to the user. First, we invented a novel way to map deformations and vibrations sensed by a robotic fingertip to the actuation of a fingertip tactile display in real time. We then demonstrated the striking utility of such cues in a simulated tissue palpation task through integration with a da Vinci surgical robot. Second, we created the world’s most realistic haptic virtual surfaces by recording and modeling what a user feels when touching real objects with an instrumented stylus. The perceptual effects of displaying the resulting data-driven friction forces, tapping transients, and texture vibrations were quantified by having users compare the original surfaces to their virtual versions. While much work remains to be done, we are starting to see the tantalizing potential of systems that leverage tactile cues to allow a user to interact with distant or virtual environments as though they were real and within reach.


Katherine J. Kuchenbecker is Director of the new Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She is currently on leave from her appointment as Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, where she held the Class of 1940 Bicentennial Endowed Term Chair and a secondary appointment in Computer and Information Science. Kuchenbecker earned a PhD (2006) in Mechanical Engineering at Stanford University and was a postdoctoral fellow at the Johns Hopkins University before joining the faculty at Penn in 2007. Her research centers on haptic interfaces, which enable a user to touch virtual and distant objects as though they were real and within reach, as well as haptic sensing systems, which allow robots to physically interact with and feel real objects. She delivered a widely viewed TEDYouth talk on haptics in 2012, and she has received several honors including a 2009 NSF CAREER Award, the 2012 IEEE Robotics and Automation Society Academic Early Career Award, a 2014 Penn Lindback Award for Distinguished Teaching, and many best paper and best demonstration awards.

Virtual Reality and Vision Science

S6 – Virtual Reality and Vision Science

Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Pavilion
Organizer(s): Bas Rokers, University of Wisconsin – Madison & Karen B. Schloss, University of Wisconsin – Madison
Presenters: Jacqueline Fulvio, Robin Held, Emily Cooper, Stefano Baldassi, David Luebke


Virtual reality (VR) and augmented reality (AR) provide exciting new opportunities for vision research. In VR sensory cues are presented to simulate an observer’s presence in a virtual environment. In AR sensory cues are presented that embed virtual stimuli in the real world. This symposium will bring together speakers from academia and industry to present new scientific discoveries enabled by VR/AR technology, discuss recent and forthcoming advances in the technology, and identify exciting new avenues of inquiry. From a basic research perspective, VR and AR allow us to answer fundamental scientific questions that have been difficult or impossible to address in the past. VR/AR headsets provide a number of potential benefits over traditional psychophysical methods, such as incorporating a large field of view, high frame rate/low persistence, and low latency head tracking. These technological innovations facilitate experimental research in highly controlled, yet naturalistic three-dimensional environments. However, VR/AR also introduces its own set of unique challenges of which potential researchers should be aware. Speakers from academia will discuss ways they have used VR/AR as a tool to advance knowledge about 3D perception, multisensory integration, and navigation in naturalistic three-dimensional environments. Speakers will also present research on perceptual learning and neural plasticity, which may benefit from training in cue-rich environments that simulate real-world conditions. These talks will shed light on how VR/AR may ultimately be used to mitigate visual deficits and contribute to the treatment of visual disorders. Speakers from industry will highlight recent technological advances that can make VR such a powerful tool for research. Industry has made significant strides solving engineering problems involving latency, field of view, and presence. However, challenges remain, such as resolving cue conflicts and eliminating motion sickness. 
Although some of these issues may be solved through engineering, others are due to limitations of the visual system and require solutions informed by basic research within the vision science community. This symposium aims to provide a platform that deepens the dialogue between academia and industry. VR holds unprecedented potential for building assistive technologies that will aid people with sensory and cognitive disabilities. Hearing from speakers in industry will give vision scientists an overview of anticipated technological developments, which will help them evaluate how they might incorporate VR/AR into their future research. In turn, vision researchers may help identify science-based solutions to current engineering challenges. In sum, this symposium will bring together two communities for the mutually beneficial advancement of VR-based research. Who may want to attend: This symposium will be of interest to researchers who wish to consider incorporating AR/VR into their research, get an overview of existing challenges, and get a sense of future directions of mutual interest to industry and academia. The talks will be valuable to researchers at all stages of their careers. Hearing from representatives of both industry and academia may be useful for early-stage researchers seeking opportunities beyond the highly competitive academic marketplace and may help researchers at all stages identify funding sources in the highly competitive granting landscape.

Extra-retinal cues improve accuracy of 3D motion perception in virtual reality environments

Speaker: Jacqueline Fulvio, University of Wisconsin – Madison
Additional Authors: Jacqueline M. Fulvio & Bas Rokers, Department of Psychology, UW-Madison

Our senses provide imperfect information about the world that surrounds us, but we can improve the accuracy of our perception by combining sensory information from multiple sources. Unfortunately, much of the research in visual perception has utilized methods of stimulus presentation that eliminate potential sources of information. It is often the case, for example, that observers are asked to maintain a fixed head position while viewing stimuli generated on flat 2D displays. We will present recent work on the perception of 3D motion using the Oculus Rift, a virtual reality (VR) head-mounted display with head-tracking functionality. We describe the impact of uncertainty in visual cues presented in isolation, which has surprising consequences for the accuracy of 3D motion perception. We will then describe how extra-retinal cues, such as head motion, improve visual accuracy. We will conclude with a discussion of the potential and limitations of VR technology for understanding visual perception.

Perceptual considerations for the design of mixed-reality content

Speaker: Robin Held, Microsoft
Additional Authors: Robin Held, Microsoft

Virtual-reality head-mounted displays (VR HMDs) block out the real world while engulfing the user in a purely digital setting. Meanwhile, mixed-reality (MR) HMDs embed digital content within the real-world while maintaining the user’s perception of her or his surroundings. This ability to simultaneously perceive both rendered content and real objects presents unique challenges for the design of MR content. I will briefly review the technologies underlying current MR headsets, including display hardware, tracking systems, and spatial audio. I will also discuss how the existing implementations of those technologies impact the user’s perception of the content. Finally, I will show how to apply that knowledge to optimize MR content for comfort and aesthetics.

Designing and assessing near-eye displays to increase user inclusivity

Speaker: Emily Cooper, Dartmouth College
Additional Authors: Nitish Padmanaban, Robert Konrad, and Gordon Wetzstein, Department of Electrical Engineering, Stanford University

From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. But in each case, one thing stays the same: the primary interface between the computer and the user is a visual display. Recent years have seen impressive growth in near-eye display systems, which are the basis of most virtual and augmented reality experiences. There are, however, a unique set of challenges to designing a display that is literally strapped to the user’s face. With an estimated half of all adults in the United States requiring some level of visual correction, maximizing inclusivity for near-eye displays is essential. I will describe work that combines principles from optics, optometry, and visual perception to identify and address major limitations of near-eye displays both for users with normal vision and those that require common corrective lenses. I will also describe ongoing work assessing the potential for near-eye displays to assist people with less common visual impairments at performing day-to-day tasks.

See-through Wearable Augmented Reality: challenges and opportunities for vision science

Speaker: Stefano Baldassi, Meta Company
Additional Authors: Stefano Baldassi & Moqian Tian, Analytics & Neuroscience Department, Meta Company

We will present Meta’s Augmented Reality technology and the challenges faced in product development that may generate strong mutual connections between vision science and technology, as well as new areas of research for vision science and research methods using AR. The first line of challenges comes from the overlap between virtual content and the real world due to the non-opacity of the rendered pixels and the see-through optics. What are the optimal luminance, contrast, and color profiles to minimize interference? Will the solutions be qualitatively different in photopic and scotopic conditions? With SLAM, virtual objects can be locked onto the real scene. Does the real world provide the same environmental context to a virtual object as it does to a real object? Last, what are the implications of digital content in the periphery, given Meta’s industry-leading 90° FOV? The second line of challenges is in the domain of perception and action and multi-sensory integration. Meta supports manipulation of virtual objects. In the absence of haptic stimulation, when hands interact with a virtual object we currently rely on visual and proprioceptive cues to guide touch. How is the visuo-motor control of the hands affected by manipulation without haptics? To enable people to interact with virtual objects realistically and effectively, are cues like occlusion and haptic feedback necessary? Will time-locked sound introduce valuable cues?

Computational Display for Virtual and Augmented Reality

Speaker: David Luebke, NVIDIA
Additional Authors: David Luebke, VP Graphics Research, NVIDIA

Wearable displays for virtual & augmented reality face tremendous challenges, including: Near-Eye Display: how to put a display as close to the eye as a pair of eyeglasses, where we cannot bring it into focus? Field of view: how to fill the user’s entire vision with displayed content? Resolution: how to fill that wide field of view with enough pixels, and how to render all of those pixels? A “brute force” display would require 10,000×8,000 pixels per eye! Bulk: displays should be as unobtrusive as sunglasses, but optics dictate that most VR displays today are bigger than ski goggles. Focus cues: today’s VR displays provide binocular display but only a fixed optical depth, thus missing the monocular depth cues from defocus blur and introducing vergence-accommodation conflict. To overcome these challenges requires understanding and innovation in vision science, optics, display technology, and computer graphics. I will describe several “computational display” VR/AR prototypes in which we co-design the optics, display, and rendering algorithm with the human visual system to achieve new tradeoffs. These include light field displays, which sacrifice spatial resolution to provide thin near-eye display and focus cues; pinlight displays, which use a novel and very simple optical stack to produce wide field-of-view see-through display; and a new approach to foveated rendering, which uses eye tracking and renders the peripheral image with less detail than the foveal region. I’ll also talk about our current efforts to “operationalize” vision science research, which focuses on peripheral vision, crowding, and saccadic suppression artifacts.
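The “brute force” pixel count quoted above follows from simple arithmetic. As a hedged sketch, the acuity and field-of-view figures below are illustrative assumptions chosen to reproduce the talk’s ballpark number, not values given in the talk:

```python
# Back-of-the-envelope estimate of the "brute force" per-eye pixel count.
# Assumed (illustrative) numbers: peak human acuity of ~60 pixels per
# degree (one pixel per arcminute), matched everywhere across an assumed
# per-eye field of view of roughly 170 x 135 degrees.
ACUITY_PPD = 60   # pixels per degree, assumed peak foveal acuity
FOV_H_DEG = 170   # assumed horizontal field of view per eye
FOV_V_DEG = 135   # assumed vertical field of view per eye

pixels_h = ACUITY_PPD * FOV_H_DEG
pixels_v = ACUITY_PPD * FOV_V_DEG
print(f"{pixels_h} x {pixels_v} pixels per eye "
      f"({pixels_h * pixels_v / 1e6:.0f} megapixels)")
```

Matching foveal acuity across the whole field yields on the order of 10,000×8,000 pixels, which is exactly the waste that foveated rendering avoids by concentrating detail where the eye is looking.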


The Role of Ensemble Statistics in the Visual Periphery

S4 – The Role of Ensemble Statistics in the Visual Periphery

Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Pavilion
Organizer(s): Brian Odegaard, University of California-Los Angeles
Presenters: Michael Cohen, David Whitney, Ruth Rosenholtz, Tim Brady, Brian Odegaard


The past decades have seen the growth of a tremendous amount of research into the human visual system’s capacity to encode “summary statistics” of items in the world. Studies have shown that the visual system possesses a remarkable ability to compute properties such as average size, position, motion direction, gaze direction, emotional expression, and liveliness, as well as variability in color and facial expression, documenting the phenomena across various domains and stimuli. One recent proposal in the literature has focused on the promise of ensemble statistics to provide an explanatory account of subjective experience in the visual periphery (Cohen, Dennett, & Kanwisher, Trends in Cognitive Sciences, 2016). In addition to this idea, others have suggested that summary statistics underlie performance in visual tasks in a broad manner. These hypotheses open up intriguing questions: how are ensemble statistics encoded outside the fovea, and to what extent does this capacity explain our experience of the majority of our visual field? In this proposed symposium, we aim to discuss recent empirical findings, theories, and methodological considerations in pursuit of answers to many questions in this growing area of research, including the following: (1) How does the ability to process summary statistics in the periphery compare to this ability at the center of the visual field? (2) What role (if any) does attention play in the ability to compute summary statistics in the periphery? (3) Which computational modeling frameworks provide compelling, explanatory accounts of this phenomenon? (4) Which summary statistics (e.g., mean, variance) are encoded in the periphery, and are there limitations on the precision/capacity of these estimates? 
By addressing questions such as those listed above, we hope that participants emerge from this symposium with a more thorough understanding of the role of ensemble statistics in the visual periphery, and how this phenomenon may account for subjective experience across the visual field. Our proposed group of speakers is shown below, and we hope that faculty, post-docs, and graduate students alike would find this symposium to be particularly informative, innovative, and impactful.

Ensemble statistics and the richness of perceptual experience

Speaker: Michael Cohen, MIT

While our subjective impression is of a detailed visual world, a wide variety of empirical results suggest that perception is actually rather limited. Findings from change blindness and inattentional blindness highlight how much of the visual world regularly goes unnoticed. Furthermore, direct estimates of the capacity of visual attention and working memory reveal that surprisingly few items can be processed and maintained at once. Why do we think we see so much when these empirical results suggest we see so little? One possible answer to this question resides in the representational power of visual ensembles and summary statistics. Under this view, those items that cannot be represented as individual objects or with great precision are nevertheless represented as part of a broader statistical summary. By representing much of the world as an ensemble, observers have perceptual access to different aspects of the entire field of view, not just a few select items. Thus, ensemble statistics play a critical role in our ability to account for and characterize the apparent richness of perceptual experience.

Ensemble representations as a basis for rich perceptual experiences

Speaker: David Whitney, University of California-Berkeley

Much of our rich visual experience comes in the form of ensemble representations, the perception of summary statistical information in groups of objects—such as the average size of items, the average emotional expression of faces in a crowd, or the average heading direction of point-light walkers. These ensemble percepts occur over space and time, are robust to outliers, and can occur in the visual periphery. Ensemble representations can even convey unique and emergent social information like the gaze of an audience, the animacy of a scene, or the panic in a crowd, information that is not necessarily available at the level of the individual crowd members. The visual system can make these high-level interpretations of social and emotional content with exposures as brief as 50 ms, thus revealing an extraordinarily efficient process for compressing what would otherwise be an overwhelming amount of information. Much of what is believed to count as rich social, emotional, and cognitive experience actually comes in the form of basic, compulsory, visual summary statistical processes.
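As a concrete illustration of why an ensemble percept can be “robust to outliers,” here is a minimal sketch of an average-size computation that discards extreme items before averaging. The trimming fraction and the example sizes are arbitrary illustrative choices, not a model from the talk:

```python
# Minimal sketch of an outlier-robust ensemble "average size" estimate.
def trimmed_mean(values, trim_frac=0.2):
    """Mean after discarding the smallest and largest trim_frac of values."""
    ordered = sorted(values)
    k = int(len(ordered) * trim_frac)
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# Sizes of items in a display, with one extreme outlier.
sizes = [1.0, 1.1, 0.9, 1.2, 1.0, 9.0]
print(trimmed_mean(sizes))          # stays near 1.0 despite the outlier
print(sum(sizes) / len(sizes))      # plain mean is pulled up by the outlier
```

The contrast between the two printed values shows the point: a summary statistic computed over the bulk of the items can remain stable even when individual items are extreme or poorly represented.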

Summary statistic encoding plus limits on decision complexity underlie the richness of visual perception as well as its quirky failures

Speaker: Ruth Rosenholtz, MIT

Visual perception is full of puzzles. Human observers effortlessly perform many visual tasks, and have the sense of a rich percept of the visual world. Yet when probed for details they are at a loss. How does one explain this combination of marvelous successes and puzzling failures? Numerous researchers have explained the failures in terms of severe limits on resources of attention and memory. But if so, how can one explain the successes? My lab has argued that many experimental results pointing to apparent attentional limits instead derived at least in part from losses in peripheral vision. Furthermore, we demonstrated that those losses could arise from peripheral vision encoding its inputs in terms of a rich set of local image statistics. This scheme is theoretically distinct from encoding ensemble statistics of a set of similar items. I propose that many of the remaining attention/memory limits can be unified in terms of a limit on decision complexity. This decision complexity is difficult to reason about, because the complexity of a given task depends upon the underlying encoding. A complex, general-purpose encoding likely evolved to make certain tasks easy at the expense of others. Recent advances in understanding this encoding — including in peripheral vision — may help us finally make sense of the puzzling strengths and limitations of visual perception.

The role of spatial ensemble statistics in visual working memory and scene perception

Speaker: Tim Brady, University of California-San Diego

At any given moment, much of the relevant information about the visual world is in the periphery rather than the fovea. The periphery is particularly useful for providing information about scene structure and spatial layout, as well as informing us about the spatial distribution and features of the objects we are not explicitly attending and fixating. What is the nature of our representation of this information about scene structure and the spatial distribution of objects? In this talk, I’ll discuss evidence that representations of the spatial distribution of simple visual features (like orientation, spatial frequency, color), termed spatial ensemble statistics, are specifically related to our ability to quickly and accurately recognize visual scenes. I’ll also show that these spatial ensemble statistics are a critical part of the information we maintain in visual working memory – providing information about the entire set of objects, not just a select few, across eye movements, blinks, occlusions and other interruptions of the visual scene.

Summary Statistics in the Periphery: A Metacognitive Approach

Speaker: Brian Odegaard, University of California-Los Angeles

Recent evidence indicates that human observers often overestimate their capacity to make perceptual judgments in the visual periphery. How can we quantify the degree to which this overestimation occurs? We describe how applications of Signal Detection Theoretic frameworks provide one promising approach to measure both detection biases and task performance capacities for peripheral stimuli. By combining these techniques with new metacognitive measures of perceptual confidence (such as meta-d’; Maniscalco & Lau, 2012), one can obtain a clearer picture regarding (1) when subjects can simply perform perceptual tasks in the periphery, and (2) when they have true metacognitive awareness of the visual surround. In this talk, we describe results from recent experiments employing these quantitative techniques, comparing and contrasting the visual system’s capacity to encode summary statistics in both the center and periphery of the visual field.
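For readers unfamiliar with the Signal Detection Theoretic quantities mentioned above, a minimal sketch of sensitivity (d′) and response bias (criterion c), computed from hit and false-alarm rates, follows. The example rates are hypothetical; meta-d′ itself requires confidence-rating data and a model fit, which this sketch does not attempt:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Return (d', c) from a hit rate and a false-alarm rate.

    d' = z(H) - z(FA) measures sensitivity; c = -0.5 * (z(H) + z(FA))
    measures response bias (0 = unbiased, positive = conservative).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical peripheral detection data: 80% hits, 20% false alarms.
d, c = dprime_and_criterion(0.8, 0.2)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

Separating sensitivity from bias in this way is what lets one distinguish a subject who genuinely detects peripheral stimuli from one who merely says “present” liberally, which is the distinction the talk draws between task performance and metacognitive awareness.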


The Brain Correlates of Perception and Action: from Neural Activity to Behavior

S2 – The Brain Correlates of Perception and Action: from Neural Activity to Behavior

Time/Room: Friday, May 19, 2017, 12:00 – 2:00 pm, Pavilion
Organizer(s): Simona Monaco, Center for Mind/Brain Sciences, University of Trento & Annalisa Bosco, Department of Pharmacy and Biotech, University of Bologna
Presenters: J. Douglas Crawford, Patrizia Fattori, Simona Monaco, Annalisa Bosco, Jody C. Culham


Symposium Description

In recent years, neuroimaging and neurophysiology have enabled cognitive neuroscience to identify numerous brain areas involved in sensorimotor integration for action. This research has revealed cortical and subcortical brain structures that work in coordination to allow accurate hand and eye movements. Visual information about objects in the environment is integrated into the motor plan through a cascade of events known as visuo-motor integration. These mechanisms allow us not only to extract relevant visual information for action but also to continuously update this information throughout action planning and execution. As our brain evolved to act on real objects in the natural environment, studying hand and eye movements in experimental situations that resemble the real world is critical for our understanding of the action system. This aspect has been relatively neglected in the cognitive sciences, mostly because of the challenges associated with the experimental setups and technologies. This symposium provides a comprehensive view of the neural mechanisms underlying sensorimotor integration for the production of eye and hand movements in situations that are common in real life. The range of topics covered by the speakers encompasses the visual as well as the motor and cognitive neurosciences, and is therefore relevant to junior and senior scientists specialized in any of these areas. We bring together researchers ranging from macaque neurophysiology to human neuroimaging and behavior. The combination of work using these cutting-edge techniques offers unique insight into effects that are detected at the neuronal level, extend to neural populations, and translate into behavior. There will be five speakers. Doug Crawford will address the neuronal mechanisms underlying perceptual-motor integration during head-unrestrained gaze shifts in the frontal eye field and superior colliculus of macaques.
Patrizia Fattori will describe how the activity of neurons in the dorsomedial visual stream of macaques is modulated by gaze and hand movement direction as well as properties of real objects. Jody Culham will illustrate the neural representation for visually guided actions and real objects in the human brain revealed by functional magnetic resonance imaging (fMRI). Simona Monaco will describe the neural mechanisms in the human brain underlying the influence of intended action on sensory processing and the involvement of the early visual cortex in action planning and execution. Annalisa Bosco will detail the behavioral aspects of the influence exerted by action on perception in human participants.

Visual-motor transformations at the Neuronal Level in the Gaze System

Speaker: J. Douglas Crawford, Centre for Vision Research, York University
Additional Authors: AmirSaman Sajad, Center for Integrative & Cognitive Neuroscience, Vanderbilt University and Morteza Sadeh, Centre for Vision Research, York University

The fundamental question in perceptual-motor integration is how, and at what level, sensory signals become motor signals. Does this occur between brain areas, within brain areas, or even within individual neurons? Various training or cognitive paradigms have been combined with neurophysiology and/or neuroimaging to address this question, but the visuomotor transformations for ordinary gaze saccades remain elusive. To address these questions, we developed a method for fitting visual and motor response fields against various spatial models without any special training, based on trial-to-trial variations in behavior (DeSouza et al., 2011). More recently we used this to track visual-motor transformations through time. We find that superior colliculus and frontal eye field visual responses encode target direction, whereas their motor responses encode final gaze position relative to initial eye orientation (Sajad et al., 2015; Sadeh et al., 2016). This occurs between neuron populations but can also be observed within individual visuomotor cells. When a memory delay is imposed, a gradual transition of intermediate codes is observed (perhaps due to an imperfect memory loop), with a further ‘leap’ toward gaze motor coding in the final memory-motor transformation (Sajad et al., 2016). However, we found a similar spatiotemporal transition even within the brief burst of neural activity that accompanies a reactive, visually evoked saccade. These data suggest that visuomotor transformations are a network phenomenon that is simultaneously observable at the level of individual neurons and distributed across different neuronal populations and structures.

Neurons for eye and hand action in the monkey medial posterior parietal cortex

Speaker: Patrizia Fattori, University of Bologna
Additional Authors: Fattori Patrizia, Breveglieri Rossella, Galletti Claudio, Department of Pharmacy and Biotechnology, University of Bologna

In the last decades, several components of the visual control of eye and hand movements have been disentangled by studying single neurons in the brain of awake macaque monkeys. In this presentation, particular attention will be given to the influence of the direction of gaze upon the reaching activity of neurons of the dorsomedial visual stream. We recorded from the caudal part of the medial posterior parietal cortex, finding neurons sensitive to the direction and amplitude of arm reaching actions. The reaching activity of these neurons was influenced by the direction of gaze, some neurons preferring foveal reaching, others peripheral reaching. Manipulations of eye/target positions and of hand position showed that the reaching activity could be in eye-centered, head-centered, or a mixed frame of reference according to the considered neuron. We also found neurons modulated by the visual features of real objects and neurons modulated also by grasping movements, such as wrist orientation and grip formation. So it seems that the entire neural machinery for encoding eye and hand action is hosted in the dorsomedial visual stream. This machinery takes part in the sequence of visuomotor transformations required to encode many aspects of the reach-to-grasp actions.

The role of the early visual cortex in action

Speaker: Simona Monaco, Center for Mind/Brain Sciences, University of Trento
Additional Authors: Simona Monaco, Center for Mind/Brain Sciences, University of Trento; Doug Crawford, Centre for Vision Research, York University; Luca Turella, Center for Mind/Brain Sciences, University of Trento; Jody Culham, Brain and Mind Institution

Functional magnetic resonance imaging has recently shown that intended action modulates the sensory processing of object orientation in areas of the action network in the human brain. In particular, intended actions can be decoded in the early visual cortex using multivoxel pattern analyses before the movements are initiated, regardless of whether the target object is visible or not. In addition, the early visual cortex is re-recruited during actions in the dark towards stimuli that have been previously seen. These results suggest three main points. First, the action-driven modulation of sensory processing is shown at the neural level in a network of areas that includes the early visual cortex. Second, the role of the early visual cortex goes well beyond the processing of sensory information for perception and might be the target of reentrant feedback for sensory-motor integration. Third, the early visual cortex shows action-driven modulation during both action planning and execution, suggesting a continuous exchange of information with higher-order visual-motor areas for the production of a motor output.

The influence of action execution on object size perception

Speaker: Annalisa Bosco, Department of Pharmacy and Biotechnology, University of Bologna
Additional Authors: Annalisa Bosco, Department of Pharmacy and Biotechnology, University of Bologna; Patrizia Fattori, Department of Pharmacy and Biotechnology, University of Bologna

When performing an action, our perception is focused on the object's visual properties that enable us to execute the action successfully. The motor system is also able to influence perception, but only a few studies have reported evidence for hand-action-induced modifications of visual perception. Here, we aimed to test for a feature-specific perceptual modulation before and after a reaching and grasping action. Two groups of subjects were instructed to either grasp or reach to different-sized bars and, before and after the action, to perform a size perceptual task by manual and verbal report. Each group was tested in two experimental conditions: no prior knowledge of action type, where subjects did not know the upcoming type of movement, and prior knowledge of action type, where they were aware of the upcoming type of movement. In both manual and verbal perceptual size responses, we found that size perception was significantly modified after a grasping movement. Additionally, this modification was enhanced when the subjects knew in advance the type of movement to execute in the subsequent phase of the task. These data suggest that knowledge of the action type and execution of the action shape the perception of object properties.

Neuroimaging reveals the human neural representations for visually guided grasping of real objects and pictures

Speaker: Jody C. Culham, Brain and Mind Institute, University of Western Ontario
Additional Authors: Jody C. Culham, University of Western Ontario; Sara Fabbri, Radboud University Nijmegen; Jacqueline C. Snow, University of Nevada, Reno; Erez Freud, Carnegie-Mellon University

Neuroimaging, particularly functional magnetic resonance imaging (fMRI), has revealed many human brain areas that are involved in the processing of visual information for the planning and guidance of actions. One area of particular interest is the anterior intraparietal sulcus (aIPS), which is thought to play a key role in processing information about object shape for the visual control of grasping. However, much fMRI research has relied on artificial stimuli, such as two-dimensional photos, and artificial actions, such as pantomimed grasping. Recent fMRI studies from our lab have used representational similarity analysis on the patterns of fMRI activation from brain areas such as aIPS to infer neural coding in participants performing real actions upon real objects. This research has revealed the visual features of the object (particularly elongation) and the type of grasp (including the number of digits and precision required) that are coded in aIPS and other regions. Moreover, this work has suggested that these neural representations are affected by the realness of the object, particularly during grasping. Taken together, these results highlight the value of using more ecological paradigms to study sensorimotor control.


How can you be so sure? Behavioral, computational, and neuroscientific perspectives on metacognition in perceptual decision-making

S3 – How can you be so sure? Behavioral, computational, and neuroscientific perspectives on metacognition in perceptual decision-making

Time/Room: Friday, May 19, 2017, 2:30 – 4:30 pm, Talk Room 1
Organizer(s): Megan Peters, University of California Los Angeles
Presenters: Megan Peters, Ariel Zylberberg, Michele Basso, Wei Ji Ma, Pascal Mamassian


Metacognition, or our ability to monitor the uncertainty of our thoughts, decisions, and perceptions, is of critical importance across many domains. Here we focus on metacognition in perceptual decisions — the continuous inferences that we make about the most likely state of the world based on incoming sensory information. How does a police officer evaluate the fidelity of his perception that a perpetrator has drawn a weapon? How does a driver compute her certainty in whether a fleeting visual percept is a child or a soccer ball, impacting her decision to swerve? These kinds of questions are central to daily life, yet how such ‘confidence’ is computed in the brain remains unknown. In recent years, increasingly keen interest has been directed towards exploring such metacognitive mechanisms from computational (e.g., Rahnev et al., 2011, Nat Neuro; Peters & Lau, 2015, eLife), neuroimaging (e.g., Fleming et al., 2010, Science), brain stimulation (e.g., Fetsch et al., 2014, Neuron), and neuronal electrophysiology (e.g., Kiani & Shadlen, 2009, Science; Zylberberg et al., 2016, eLife) perspectives. Importantly, the computation of confidence is also of increasing interest to the broader range of researchers studying the computations underlying perceptual decision-making in general. Our central focus is on how confidence is computed in neuronal populations, with attention to (a) whether perceptual decisions and metacognitive judgments depend on the same or different computations, and (b) why confidence judgments sometimes fail to optimally track the accuracy of perceptual decisions. Key themes for this symposium will include neural correlates of confidence, behavioral consequences of evidence manipulation on confidence judgments, and computational characterizations of the relationship between perceptual decisions and our confidence in them.
Our principal goal is to attract scientists studying or interested in confidence/uncertainty, sensory metacognition, and perceptual decision-making from both human and animal perspectives, spanning from the computational to the neurobiological level. We bring together speakers from across these disciplines, from animal electrophysiology and behavior through computational models of human uncertainty, to communicate their most recent and exciting findings. Given the recency of many of the findings discussed, our symposium will cover terrain largely untouched by the main program. We hope that the breadth of research programs represented in this symposium will encourage a diverse group of scientists to attend and actively participate in the discussion.

Transcranial magnetic stimulation to visual cortex induces suboptimal introspection

Speaker: Megan Peters, University of California Los Angeles
Additional Authors: Megan Peters, University of California Los Angeles; Jeremy Fesi, The Graduate Center of the City University of New York; Namema Amendi, The Graduate Center of the City University of New York; Jeffrey D. Knotts, University of California Los Angeles; Hakwan Lau, UCLA

In neurological cases of blindsight, patients with damage to primary visual cortex can discriminate objects but report no visual experience of them. This form of ‘unconscious perception’ provides a powerful opportunity to study perceptual awareness, but because the disorder is rare, many researchers have sought to induce the effect in neurologically intact observers. One promising approach is to apply transcranial magnetic stimulation (TMS) to visual cortex to induce blindsight (Boyer et al., 2005), but this method has been criticized for being susceptible to criterion bias confounds: perhaps TMS merely reduces internal visual signal strength, and observers are unwilling to report that they faintly saw a stimulus even if they can still discriminate it (Lloyd et al., 2013). Here we applied a rigorous, response-bias-free two-interval forced-choice method for rating subjective experience in studies of unconscious perception (Peters & Lau, 2015) to address this concern. We used Bayesian ideal observer analysis to demonstrate that observers’ introspective judgments about stimulus visibility are suboptimal even when the task does not require that they maintain a response criterion — unlike in visual masking. Specifically, observers appear metacognitively blind to the noise introduced by TMS, in a way that is akin to neurological cases of blindsight. These findings are consistent with the hypothesis that metacognitive judgments require observers to develop an internal model of the statistical properties of their own signal processing architecture, and that introspective suboptimality arises when that internal model abruptly becomes invalid due to external manipulations.

The influence of evidence volatility on choice, reaction time and confidence in a perceptual decision

Speaker: Ariel Zylberberg, Columbia University
Additional Authors: Ariel Zylberberg, Columbia University; Christopher R. Fetsch, Columbia University; Michael N. Shadlen, Columbia University

Many decisions are thought to arise via the accumulation of noisy evidence to a threshold or bound. In perceptual decision-making, the bounded evidence accumulation framework explains the effect of stimulus strength, characterized by signal-to-noise ratio, on decision speed, accuracy and confidence. This framework also makes intriguing predictions about the behavioral influence of the noise itself. An increase in noise should lead to faster decisions, reduced accuracy and, paradoxically, higher confidence. To test these predictions, we introduce a novel sensory manipulation that mimics the addition of unbiased noise to motion-selective regions of visual cortex. We verified the effect of this manipulation with neuronal recordings from macaque areas MT/MST. For both humans and monkeys, increasing the noise induced faster decisions and greater confidence over a range of stimuli for which accuracy was minimally impaired. The magnitude of the effects was in agreement with predictions of a bounded evidence accumulation model.
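The bounded-accumulation prediction described here — that added noise speeds decisions and inflates confidence while accuracy suffers less — can be reproduced in a toy Monte Carlo simulation. This is a sketch under simplified assumptions (symmetric bounds, fixed drift, Euler integration), not the authors' fitted model; all parameter values are illustrative:

```python
import math
import random

def ddm_trial(drift, noise, bound=1.0, dt=0.001, rng=random):
    """One bounded evidence-accumulation trial.
    Returns (correct_choice, decision_time). Evidence starts at 0 and
    diffuses with the given drift until it hits +bound or -bound."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= bound, t

def summarize(drift, noise, n=2000, seed=0):
    """Monte Carlo accuracy and mean decision time for one noise level."""
    rng = random.Random(seed)
    trials = [ddm_trial(drift, noise, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in trials) / n
    mean_rt = sum(t for _, t in trials) / n
    return accuracy, mean_rt
```

Running `summarize` at a low and a high noise level shows the predicted pattern: the high-noise condition reaches the bound faster and is less accurate, consistent with the framework's claim that noise itself, not just signal strength, shapes speed and accuracy.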

A role for the superior colliculus in decision-making and confidence

Speaker: Michele Basso, University of California Los Angeles
Additional Authors: Michele Basso, University of California Los Angeles; Piercesare Grimaldi, University of California Los Angeles; Trinity Crapse, University of California Los Angeles

Evidence implicates the superior colliculus (SC) in attention and perceptual decision-making. In a simple target-selection task, we previously showed that discriminability between target and distractor neuronal activity in the SC correlated with decision accuracy, consistent with the hypothesis that SC encodes a decision variable. Here we extend these results to determine whether SC also correlates with decision criterion and confidence. Trained monkeys performed a simple perceptual decision task in two conditions to induce behavioral response bias (criterion shift): (1) the probability of two perceptual stimuli was equal, and (2) the probability of one perceptual stimulus was higher than the other. We observed consistent changes in behavioral response bias (shifts in decision criterion) that were directly correlated with SC neuronal activity. Furthermore, electrical stimulation of SC mimicked the effect of stimulus probability manipulations, demonstrating that SC correlates with and is causally involved in setting decision criteria. To assess confidence, monkeys were offered a ‘safe bet’ option on 50% of trials in a similar task. The ‘safe bet’ always yielded a small reward, encouraging monkeys to select the ‘safe bet’ when they were less confident rather than risk no reward for a wrong decision. Both monkeys showed metacognitive sensitivity: they chose the ‘safe bet’ more on more difficult trials. Single- and multi-neuron recordings from SC revealed two distinct neuronal populations: one that discharged more robustly on more confident trials, and one that did so on less confident trials. Together these findings show how SC encodes information about decisions and decisional confidence.

Testing the Bayesian confidence hypothesis

Speaker: Wei Ji Ma, New York University
Additional Authors: Wei Ji Ma, New York University; Will Adler, New York University; Ronald van den Berg, University of Uppsala

Asking subjects to rate their confidence is one of the oldest procedures in psychophysics. Remarkably, quantitative models of confidence ratings have been scarce. What could be called the “Bayesian confidence hypothesis” states that an observer’s confidence rating distribution is completely determined by posterior probability. This hypothesis predicts specific quantitative relationships between performance and confidence. It also predicts that stimulus combinations that produce the same posterior will also produce the same confidence distribution. We tested these predictions in three contexts: a) perceptual categorization; b) visual working memory; c) the interpretation of scientific data.
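A minimal version of the Bayesian confidence hypothesis for a two-category task can be written down directly: confidence is the posterior probability of the chosen category given a noisy measurement. This sketch assumes two Gaussian stimulus categories; the category means, noise level, and prior are illustrative values, not the paper's:

```python
import math

def bayesian_confidence(x, mu1=-1.0, mu2=1.0, sigma=1.0, prior1=0.5):
    """Bayesian choice and confidence for a noisy measurement x drawn
    from one of two Gaussian categories. Confidence is the posterior
    probability of whichever category the observer chooses."""
    l1 = prior1 * math.exp(-((x - mu1) ** 2) / (2 * sigma ** 2))
    l2 = (1 - prior1) * math.exp(-((x - mu2) ** 2) / (2 * sigma ** 2))
    p1 = l1 / (l1 + l2)  # posterior probability of category 1
    choice = 1 if p1 >= 0.5 else 2
    confidence = p1 if choice == 1 else 1.0 - p1
    return choice, confidence
```

The hypothesis's key prediction falls out of this formulation: any two stimulus conditions that yield the same posterior must yield the same confidence distribution, which is exactly what the experiments described above put to the test.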

Integration of visual confidence over time and across stimulus dimensions

Speaker: Pascal Mamassian, Ecole Normale Supérieure
Additional Authors: Pascal Mamassian, Ecole Normale Supérieure; Vincent de Gardelle, Université Paris 1; Alan Lee, Lingnan University

Visual confidence refers to our ability to estimate our own performance in a visual decision task. Several studies have highlighted the relatively high efficiency of this meta-perceptual ability, at least for simple visual discrimination tasks. Are observers equally good when visual confidence spans more than one stimulus dimension or more than a single decision? To address these issues, we used the method of confidence forced-choice judgments, where participants are prompted to choose, between two alternatives, the stimulus for which they expect their performance to be better (Barthelmé & Mamassian, 2009, PLoS CB). In one experiment, we asked observers to make confidence choice judgments between two different tasks (an orientation-discrimination task and a spatial-frequency-discrimination task). We found that participants were equally good at making these across-dimension confidence judgments as when choices were restricted to a single dimension, suggesting that visual confidence judgments share a common currency. In another experiment, we asked observers to make confidence-choice judgments between two ensembles of 2, 4, or 8 stimuli. We found that participants were increasingly good at making ensemble confidence judgments, suggesting that visual confidence judgments can accumulate information across several trials. Overall, these results help us better understand how visual confidence is computed and used over time and across stimulus dimensions.


Cutting across the top-down-bottom-up dichotomy in attentional capture research

Time/Room: Friday, May 19, 2017, 5:00 – 7:00 pm, Talk Room 1
Organizer(s): J. Eric T. Taylor, Brain and Mind Institute at Western University
Presenters: Nicholas Gaspelin, Matthew Hilchey, Dominique Lamy, Stefanie Becker, Andrew B. Leber


Research on attentional selection describes the various factors that determine what information is ignored and what information is processed. These factors are commonly described as either bottom-up or top-down, indicating whether stimulus properties or an observer’s goals determine the outcome of selection. Research on selection typically adheres strongly to one of these two perspectives; the field is divided. The aim of this symposium is to generate discussions and highlight new developments in the study of attentional selection that do not conform to the bifurcated approach that has characterized the field for some time (or trifurcated, with respect to recent models emphasizing the role of selection history). The research presented in this symposium does not presuppose that selection can be easily or meaningfully dichotomized. As such, the theme of the symposium is cutting across the top-down-bottom-up dichotomy in attentional selection research. To achieve this, presenters in this session either share data that cannot be easily explained within the top-down or bottom-up framework, or they propose alternative models of existing descriptions of sources of attentional control. Theoretically, the symposium will begin with presentations that attempt to resolve the dichotomy with a new role for suppression (Gaspelin & Luck) or further bemuse the dichotomy with typically bottom-up patterns of behaviour in response to intransient stimuli (Hilchey, Taylor, & Pratt). The discussion then turns to demonstrations that the bottom-up, top-down, and selection history sources of control variously operate on different perceptual and attentional processes (Lamy & Zivony; Becker & Martin), complicating our categorization of sources of control. Finally, the session will conclude with an argument for more thorough descriptions of sources of control (Leber & Irons).
In summary, these researchers will present cutting-edge developments using converging methodologies (chronometry, EEG, and eye-tracking measures) that further our understanding of attentional selection and advance attentional capture research beyond its current dichotomy. Given the heated history of this debate and the importance of the theoretical question, we expect that this symposium should be of interest to a wide audience of researchers at VSS, especially those interested in visual attention and cognitive control.

Mechanisms Underlying Suppression of Attentional Capture by Salient Stimuli

Speaker: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis
Additional Authors: Nicholas Gaspelin, Center for Mind and Brain at the University of California, Davis; Carly J. Leonard, Center for Mind and Brain at the University of California, Davis; Steven J. Luck, Center for Mind and Brain at the University of California, Davis

Researchers have long debated the nature of cognitive control in vision, with the field being dominated by two theoretical camps. Stimulus-driven theories claim that visual attention is automatically captured by salient stimuli, whereas goal-driven theories argue that capture depends critically on the goals of a viewer. To resolve this debate, we have previously provided key evidence for a new hybrid model called the signal suppression hypothesis. According to this account, all salient stimuli generate an active salience signal that automatically attempts to guide visual attention. However, this signal can be actively suppressed. In the current talk, we review the converging evidence for this active suppression of salient items, using behavioral, eye-tracking, and electrophysiological methods. We will also discuss the cognitive mechanisms underlying suppression effects and directions for future research.

Beyond the new-event paradigm in visual attention research: Can completely static stimuli capture attention?

Speaker: Matthew Hilchey, University of Toronto
Additional Authors: Matthew D. Hilchey, University of Toronto, J. Eric T. Taylor, Brain and Mind Institute at Western University; Jay Pratt, University of Toronto

The last several decades of attention research have focused almost exclusively on paradigms that introduce new perceptual objects or salient sensory changes to the visual environment in order to determine how attention is captured to those locations. There are a handful of exceptions, and in the spirit of those studies, we asked whether a completely unchanging stimulus can attract attention, using variations of classic additional-singleton and cueing paradigms. In the additional-singleton tasks, we presented a preview array of six uniform circles. After a short delay, one circle changed in form and luminance – the target location – and all but one location changed luminance, leaving the sixth location physically unchanged. The results indicated that attention was attracted toward the vicinity of the only unchanging stimulus, regardless of whether all circles around it increased or decreased in luminance. In the cueing tasks, cueing was achieved by changing the luminance of five circles in the object preview array either 150 or 1000 ms before the onset of a target. Under certain conditions, we observed canonical patterns of facilitation and inhibition emerging from the location containing the physically unchanging cue stimulus. Taken together, the findings suggest that a completely unchanging stimulus, which bears no obvious resemblance to the target, can attract attention in certain situations.

Stimulus salience, current goals and selection history do not affect the same perceptual processes

Speaker: Dominique Lamy, Tel Aviv University
Additional Authors: Dominique Lamy, Tel Aviv University; Alon Zivony, Tel Aviv University

When exposed to a visual scene, our perceptual system performs several successive processes. During the preattentive stage, the attentional priority accruing to each location is computed. Then, attention is shifted towards the highest-priority location. Finally, the visual properties at that location are processed. Although most attention models posit that stimulus-driven and goal-directed processes combine to determine attentional priority, demonstrations of purely stimulus-driven capture are surprisingly rare. In addition, the consequences of stimulus-driven and goal-directed capture on perceptual processing have not been fully described. Specifically, whether attention can be disengaged from a distractor before its properties have been processed is unclear. Finally, the strict dichotomy between bottom-up and top-down attentional control has been challenged based on the claim that selection history also biases attentional weights on the priority map. Our objective was to clarify what perceptual processes stimulus salience, current goals and selection history affect. We used a feature-search spatial-cueing paradigm. We showed that (a) unlike stimulus salience and current goals, selection history does not modulate attentional priority, but only perceptual processes following attentional selection; (b) a salient distractor not matching search goals may capture attention but attention can be disengaged from this distractor’s location before its properties are fully processed; and (c) attentional capture by a distractor sharing the target feature entails that this distractor’s properties are mandatorily processed.

Which features guide visual attention, and how do they do it?

Speaker: Stefanie Becker, The University of Queensland
Additional Authors: Stefanie Becker, The University of Queensland; Aimee Martin, The University of Queensland

Previous studies purport to show that salient irrelevant items can attract attention involuntarily, against the intentions and goals of an observer. However, corresponding evidence originates predominantly from RT and eye movement studies, whereas EEG studies have largely failed to support saliency capture. In the present study, we examined effects of salient colour distractors on search for a known colour target when the distractor was similar vs. dissimilar to the target. We used both eye tracking and EEG (in separate experiments), and also investigated participants’ awareness of the features of irrelevant distractors. The results showed that capture by irrelevant distractors was strongly top-down modulated, with target-similar distractors attracting attention much more strongly, and being remembered better, than salient distractors. Awareness of the distractor correlated more strongly with initial capture than with attentional dwelling on the distractor after it was selected. The salient distractor enjoyed no noticeable advantage over non-salient control distractors with regard to implicit measures, but was overall reported with higher accuracy than non-salient distractors. This raises the interesting possibility that salient items may primarily boost visual processes directly, by requiring less attention for accurate perception, not by summoning spatial attention.

Toward a profile of goal-directed attentional control

Speaker: Andrew B. Leber, The Ohio State University
Additional Authors: Andrew B. Leber, The Ohio State University; Jessica L. Irons, The Ohio State University

Recent criticism of the classic bottom-up/top-down dichotomy of attention has deservedly focused on the existence of experience-driven factors outside this dichotomy. However, as researchers seek a better framework characterizing all control sources, a thorough re-evaluation of the top-down, or goal-directed, component is imperative. Studies of this component have richly documented the ways in which goals strategically modulate attentional control, but surprisingly little is known about how individuals arrive at their chosen strategies. Consider that manipulating goal-directed control commonly relies on experimenter instruction, which lacks ecological validity and may not always be complied with. To better characterize the factors governing goal-directed control, we recently created the adaptive choice visual search paradigm. Here, observers can freely choose between two targets on each trial, while we cyclically vary the relative efficacy of searching for each target. That is, on some trials it is faster to search for a red target than a blue target, while on other trials the opposite is true. Results using this paradigm have shown that choice behavior is far from optimal, and appears largely determined by competing drives to maximize performance and minimize effort. Further, individual differences in performance are stable across sessions while also being malleable to experimental manipulations emphasizing one competing drive (e.g., reward, which motivates individuals to maximize performance). This research represents an initial step toward characterizing an individual profile of goal-directed control that extends beyond the classic understanding of “top-down” attention and promises to contribute to a more accurate framework of attentional control.

