2019 National Eye Institute Travel Grants

Congratulations to this year’s recipients of the National Eye Institute Travel Grants.

Early Career Scientist Travel Grants

Brian Anderson
Texas A&M University

Nancy Carlisle
Lehigh University

Daniel R. Coates
University of Houston

Emily Cooper
University of California, Berkeley

Yasmine El-Shamayleh
Columbia University

Nicholas Gaspelin
Binghamton University

Sharon Gilad-Gutnick
Massachusetts Institute of Technology

Jason Haberman
Rhodes College

Andrew Haun
University of Wisconsin – Madison

Biyu He
New York University Langone Health

Melissa Kibbe
Boston University

Julie Markant
Tulane University

Ashleigh Maxcey
The Ohio State University

Vincent McGinty
Rutgers University – Newark

Abigail Noyce
Boston University

David Osher
The Ohio State University

Megan Peters
University of California, Riverside

Dobromir Rahnev
Georgia Institute of Technology

Karen Schloss
University of Wisconsin – Madison

Viola Stoermer
University of California, San Diego

Caglar A Tas
University of Tennessee – Knoxville

Brandon Thomas
University of Wisconsin – Whitewater

Rachel Wu
University of California, Riverside

Bei Xiao
American University

Postdoctoral Travel Grants

Kirsten Adam
University of California, San Diego

Stephen Adamo
University of Central Florida

Concetta Alberti
Northeastern University

Reem Alzahabi
Tufts University

Eleonora Bartoli
Baylor College of Medicine

Shlomit Ben-Ami
Massachusetts Institute of Technology

Tashauna Blankenship
Boston University

Andrew Coia
University of Chicago

Patrick Cox
The George Washington University

Rachel Denison
New York University

Kacie Dougherty
Vanderbilt University

Amirhossein Ghaderi
York University

Saeideh Ghahghaei
The Smith-Kettlewell Eye Research Institute

Alon Hafri
Johns Hopkins University

Taylor Hayes
University of California, Davis

Shipra Kanjlia
Johns Hopkins University

Ramisha Knight
University of Illinois at Urbana-Champaign

Brian Maniscalco
University of California, Riverside

Everett Mettler
University of California, Los Angeles

Dina Popovkina
University of Washington

Ramanujan Raghavan
New York University

Reshanne Reeder
Otto-von-Guericke University

Arryn Robbins
Carthage College

Zvi Roth
National Institute of Mental Health, NIH

Noelle Stiles
University of Southern California

David Sutterer
Vanderbilt University

Katherine EM Tregillus
University of Minnesota

Stefan Uddenberg
Princeton University

Alex White
University of Washington

John Wilder
University of Toronto

Bo Yeong Won
University of California, Davis

Jacob Yates
University of Rochester

Jennifer Yoon
New York University

2019 FABBS Early Career Impact Award

Congratulations to Julie Golomb, the VSS nominee and recipient of the 2019 Federation of Associations in Behavioral & Brain Sciences (FABBS) Early Career Impact Award.

The FABBS Early Career Impact Award honors early career scientists of FABBS member societies during the first 10 years post-PhD and recognizes scientists who have made major contributions to the sciences of mind, brain, and behavior. The goal is to enhance public visibility of these sciences and the particular research through the dissemination efforts of the FABBS in collaboration with the member societies and award winners.

Julie Golomb

Associate Professor
Ohio State University

Julie Golomb earned her bachelor’s degree in neuroscience from Brandeis University and her doctorate from Yale University. She completed post-doctoral research at MIT before joining the faculty at Ohio State in 2012 and receiving tenure in 2018. Her lab’s research is funded by grants from the National Institutes of Health, the Alfred P. Sloan Foundation, and the Ohio Supercomputer Center. For more information about Dr. Golomb and an overview of her article, go to Making Sense from Dots of Light on the FABBS website.

Making Sense from Dots of Light

For Julie Golomb, it all started with a college course in visual perception. “I realized that all of these things I take for granted about how I perceive the world are actually really hard challenges for the brain to solve.”
How do we recognize our coffee mug? How do we pick out a friend’s face in the crowd? Or know that the round, white and black thing flying at us is, in fact, a soccer ball?
This constant bombardment of rich and usually moving pictures starts out simply as dots of light hitting different spots on the retina.
Those dots create a map of where things are in the world before heading to the brain, where the deep processing takes place that Golomb studies in her lab.
While the brain is busy almost instantaneously processing incoming data, the world outside is continuously moving and changing, as are our eyes, an emphasis in Golomb’s lab.
In one experiment, Golomb may ask volunteers to determine whether two objects that appear on a computer monitor are the same shape. “Or we’ll flash a bunch of different objects on the screen and then ask, ‘What color was presented in a certain location?’”
Among interesting findings: When asked to pay attention to two squares of different colors, such as red and blue, volunteers might mistakenly describe one of the colors afterward as purple.
“The brain has a hard job, and it does a remarkable job,” Golomb says. “But it is not perfect.” A lot of learning about the brain is based on its mistakes.
Golomb also asks volunteers to complete tasks while connected to tools such as functional MRI, which images their brain, or an EEG machine, which records electrical activity on the scalp. She uses sophisticated computer models to analyze how their brains process information.
As the technology changes and develops, so do the possibilities with brain research. And it’s not just new equipment. “We’re asking better questions and new questions based on what we’re continually learning.”

17th Annual Dinner and Demo Night

Monday, May 20, 2019, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks and limited indoor seating in Banyan Breezeway
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall

Please join us Monday evening for the 17th Annual VSS Dinner and Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Karen Schloss, University of Wisconsin; Gennady Erlikhman, University of Nevada, Reno; and Benjamin Wolfe, MIT.

Demos are free to view for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a VSS Friends and Family Pass to attend the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. Guest passes may also be purchased at the BBQ event, beginning at 5:45 pm.

The following demos will be presented from 7:00 to 10:00 pm, in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall:

For the Last Time: The Ever-Popular Beuchet Chair

Peter Thompson, Rob Stone, and Tim Andrews, University of York

A favorite at Demo Night for many years, the Beuchet chair is back for one last hurrah. The two parts of the chair are at different distances, and the visual system fails to apply size constancy appropriately. The result is that people can be shrunk or made into giants.

Paradoxical impact of memory on color appearance of faces

Rosa Lafer-Sousa, MIT

What is the function of color vision? In this demo we impair retinal mechanisms of color using monochromatic sodium light, and probe memory colors for familiar objects in a naturalistic setting. We showcase a surprising finding: faces, and only faces, provoke a paradoxical memory color, providing evidence that color contributes to face encoding and social communication.

Immersive and long lasting afterimages – experiences of altered self

Daw-An Wu, California Institute of Technology

Dark Adaptation + Bright Flashes = Rod Afterimages!

Shikaku no Mori: gamified vision tests

Kenchi Hosokawa, Kazushi Maruya, and Shin’ya Nishida, NTT Communication Science Laboratories

We have gamified several vision tests. The games can be played in a short time (about 3 minutes) and in a more entertaining way. Test sensitivities are sufficient for initial screening (see pretest data on our poster in the Sunday Pavilion session), and the games can be used for self-checks.

The UW Virtual Brain Project: Exploring the visual and auditory systems in virtual reality

Karen B. Schloss, Chris Racey, Simon Smith, Ross Tredinnick, Nathaniel Miller, Melissa Schoenlein, and Bas Rokers, University of Wisconsin – Madison

The UW Virtual Brain Project allows you to explore the visual system and auditory system in virtual reality. It helps to visualize the flow of information from sensory input to cortical processing. The ultimate aim of the project is to improve neuroscience education by leveraging natural abilities for space-based learning.

Fun with Birefringent Surfaces and Polarized Light

Gideon Caplovitz, University of Nevada Reno

What could possibly go wrong?

Generating hyper-realistic faces for use in vision science experiments

Joshua Peterson, Princeton University; Jordan Suchow, Stevens Institute of Technology; Stefan Uddenberg, Princeton University

Easily alter your photographic appearance in a bunch of interesting ways! We have developed a system to morph any face image along psychologically relevant dimensions using recent advances in deep neural networks (namely GANs).

Hidden in Plain Sight!

Peter April, Jean-Francois Hamelin, Danny Michaud, Sophie Kenny, VPixx Technologies

Can visual information be hidden in plain sight? We use the PROPixx 1440Hz projector, and the TRACKPixx 2kHz eye tracker, to demonstrate images which are invisible until you make a rapid eye movement. We implement retinal stabilization to show other images that fade during fixations. Do your eyes deceive?

The Magical Alberti Frame

Niko Troje and Adam Bebko, York University

Pictures are two things: objects in space and representations of spaces existing elsewhere. In this virtual reality experience, participants use a magical frame to capture pictures that momentarily appear identical to the scene they reside in; but when participants move, the pictures evoke unexpected and eerie perceptual changes and distortions.

Café-Wall illusion caused by shadows on a surface of three dimensional object

Kazushi Maruya, NTT Communication Science Laboratories; Yuki Fujita, Tokyo University of the Arts; Tomoko Ohtani, Tokyo University of the Arts

The Café-Wall illusion is a famous optical illusion in which parallel gray lines between displaced rows of black and white squares appear to be angled with respect to one another. In this demonstration, we show that the Café-Wall pattern can emerge when shadows are cast by multiple cuboids onto a 3D surface of varying depths.

Foveal Gravity: A Robust Illusion of Color-Location Misbinding

Cristina R. Ceja, Nicole L. Jardine, and Steven L. Franconeri, Northwestern University

Here we present a novel, robust color-location misbinding illusion that we call foveal gravity: objects and their features can be perceived accurately, but are often mislocalized to locations closer to the fovea under divided attention.

Multi Person VR walking experience with and without accuracy correction

Matthias Pusch and Andy Bell, WorldViz

Consumer VR systems are great fun, but they have limited accuracy when it comes to precisely tracking research participants. This demo will allow participants to experience firsthand how inaccurate these systems can be in an interactive multi-user setting within a large walkable virtual space.

Impossible Integration of Size and Weight: The Set-Subset Illusion

Isabel Won, Steven Gross, and Chaz Firestone, Johns Hopkins University

Perception can produce experiences that are *impossible*, such as a triangle with three 90° sides, or a circular staircase that ascends in every direction. Are there impossible experiences that we can not only see, but also *feel*? Here, we demonstrate the “Set-Subset Illusion” — whereby a set of objects can, impossibly, feel lighter than a member of that set!

The Illusory and Invisible Audiovisual Rabbit Illusions

Noelle Stiles, University of Southern California; Armand R. Tanguay, Jr., University of Southern California, Caltech; Ishani Ganguly, Caltech; Monica Li, Caltech, University of California, Berkeley; Carmel A. Levitan, Caltech, Occidental College; Yukiyasu Kamitani, Kyoto University; Shinsuke Shimojo, Caltech

Neuroscience often focuses on the prediction of future perception based on prior perception. However, information is also processed postdictively, such that later stimuli impact percepts of prior stimuli. We will demonstrate that audition can postdictively relocate an illusory flash or suppress a real flash in the Illusory and Invisible Audiovisual Rabbit Illusions.

Chopsticks Fusion

Ray Gottlieb, College of Syntonic Optometry

Have you noticed that your normal stereoscopic perception is never as strong as the stark, solid 3-dimensionality that you see in a stereoscope or virtual reality device? Chopstick Fusion is a simple and inexpensive stereo practice that develops spatial volume perception. I’ll bring chopsticks for everyone.

Moiré effects on real object’s appearances

Takahiro Kawabe and Masataka Sawayama, NTT Communication Science Laboratories; Tamio Hoshik, Sojo University

An intriguing moiré effect is demonstrated wherein a real bar object in front of stripe motion on an LCD display apparently deforms or rotates in depth. Changing the bar orientation and/or the bar-display distance drastically modulates the appearance. Even invisible stripe motion causes a vivid change in the bar’s appearance.

The motion aftereffect without motion: 1-D, 2-D and 3-D illusory motion from local adaptation to flicker

Mark Georgeson, Aston University, UK

Adapting to a flickering image induces vivid illusory motion on an appropriate stationary test pattern: a motion aftereffect without inducing motion. Motion can be seen in 1-D, 2-D or 3-D, depending on the images chosen, but the basis for the effect is local adaptation to temporal gradients of luminance change.

Monocular rivalry

Leone Burridge

An iPhone 5 drawing printed onto paper. The perceived colours fluctuate between blue/yellow and red/green.

Fast and blurry versus slow and clear: How stationary stimuli modify motion perception

Mark Wexler, Laboratoire Psychologie de la Perception, CNRS & Université Paris Descartes

Why do shooting stars look the way they do? Why do most moving objects look clear, even at saccadic speeds? Are there motion effects waiting to be explored beyond the frequency range of computer monitors? Come and find out!

Thatcherize your face

Andre Gouws, York Neuroimaging Centre, University of York; Peter Thompson, University of York

The Margaret Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherized’ in real time, and we will print you a copy of the image to take away.

The caricature effect in data visualization: typical graphs produce negative learning

Jeremy Wilmer, Wellesley College

Graphs that display summary statistics without underlying distributions (e.g. bar/line/dot graphs with error bars) are commonly assumed to support robust information transfer. We demo an array of such graphs that falsify this assumption by stimulating negative learning relative to baseline in typical viewers.

Look where Simon says without delay

Katia Ripamonti, Cambridge Research Systems; Lloyd Smith, Cortech Solutions

Can you beat the Simon effect using your eye movements? Compete with other players to determine who can look where Simon says without delay. All you need to do is to control your eye movements before they run off. It sounds so simple and yet so difficult!

Illusory color induced by colored apparent-motion in the extreme-periphery

Takashi Suegami, Yamaha Motor Corporation, Caltech; Yusuke Shirai, Toyohashi University of Technology; Sara W. Adams, Caltech; Daw-An J. Wu, Caltech; Mohammad Shehata, Caltech, Toyohashi University of Technology; Shigeki Nakauchi, Toyohashi University of Technology; Shinsuke Shimojo, Caltech, Toyohashi University of Technology

Our new demo will show that a foveal/parafoveal color cue with apparent motion can induce illusory color in the extreme periphery (approx. 70°–90°), where cone cells are sparsely distributed. One can experience, for example, a clear red color percept for an extreme-peripheral green flash, with an isoluminant red cue (or vice versa).

The Magical Misdirection of Attention in Time

Anthony Barnhart, Carthage College

When we think of “misdirection,” we typically think of a magician drawing attention away from a spatial location. However, magicians also misdirect attention in time through the creation of “off-beats,” moments of suppressed attention. The “striking vanish” illusion, where a coin disappears when tapped with a pen, exploits this phenomenon.

How Can (Parts of) Planarians Survive Without their Brains and Eyes? Hint: It’s the Extraocular UV-Sensitive System

Kensuke Shimojo, Chandler School; Eiko Shimojo, California Institute of Technology; Daw-An Wu, California Institute of Technology; Armand R. Tanguay, Jr., California Institute of Technology, University of Southern California; Mohammad Shehata, California Institute of Technology; Shinsuke Shimojo, California Institute of Technology

Dissected planarian body parts, even with incomplete eyespots, show “light avoiding behavior” long before the complete regrowth of the entire body (including the sensory-motor organs). We will demonstrate this phenomenon live (in Petri dishes) and on video under both no-UV (visible) and UV light stimulation. In a dynamic poster mode, we show some observations addressing whether or not mechanical stress (dissection) switches dominance between the two vision systems.

The joy of intra-saccadic retinal painting

Richard Schweitzer, Humboldt-Universität zu Berlin; Tamara Watson, Western Sydney University; John Watson, Humboldt-Universität zu Berlin; Martin Rolfs, Humboldt-Universität zu Berlin

Is it possible to turn intra-saccadic motion blur – under normal circumstances omitted from conscious perception – into a salient stimulus? With the help of visual persistence, your own eye and/or head movements, and our custom-built setup for high-speed anorthoscopic presentation, you can paint beautiful images and amusing text directly onto your retina.

Build a camera obscura!

Ben Balas, North Dakota State University

Vision begins with the eye, and what better way to understand the eye than to build one? Come make your own camera obscura out of cardboard, tape, and paper, and you can observe basic principles of image formation and pinhole optics.

The Role of Color Filling-in in Natural Images

Christopher Tyler and Josh Solomon, City University of London

We demonstrate that natural images do not look very colorful when their color is restricted to edge transitions. Moreover, purely chromatic images with maximally graded transitions look fully colorful, implying that color filling-in makes no more than a minor contribution to the appearance of extended color regions in natural images.

Chopsticks trick your fingers

Songjoo Oh, Seoul National University

The famous rubber hand illusion is demonstrated by using chopsticks and fingers. A pair of chopsticks simultaneously moves back and forth on your index and middle fingers, respectively. One chopstick is actually touching the middle finger, but the other one is just moving in the air without touching the index finger. If you pay attention only to your index finger, you may erroneously feel the touch come from the index finger, not from the middle finger.

Spinning reflections on depth from spinning reflections

Michael Crognale and Alex Richardson, University of Nevada Reno

A trending novelty toy, when spun, induces a striking depth illusion from disparity in specular reflections from point sources. Normally, “specular” disparity from static curved surfaces is discounted or contributes to perceived surface curvature. Here, motion obscures the surface features that compete with depth cues, resulting in a strong depth illusion.

High Speed Gaze-Contingent Visual Search

Kurt Debono and Dan McEchron, SR Research Ltd

Try to find the target in a visual search array which is continuously being updated based on the location of your gaze. High speed video based eye tracking combined with the latest high speed monitors make for a compelling challenge.

Interactions between visual movement and position

Stuart Anstis, University of California, San Diego; Sharif Saleki, Dartmouth College; Mart Ozkan, Dartmouth College; Patrick Cavanagh, York University

Movement paths can be distorted when they move across an oblique background grating (the Furrow illusion). These motions, viewed in the periphery, can be paradoxically immune to visual crowding. Conversely, moving backgrounds can massively distort static flashed targets, altering their perceived size, shape, position, and orientation (the flash-grab illusion).

StroboPong

VSS Staff

Back by popular demand. Strobe lights and ping pong!

2019 Young Investigator – Talia Konkle

The Vision Sciences Society is honored to present Talia Konkle with the 2019 Young Investigator Award.

The Young Investigator Award is given to an early-stage researcher who has already made a significant contribution to our field. The award is sponsored by Elsevier, and the awardee is invited to submit a review paper to Vision Research highlighting this contribution.

Talia Konkle

Assistant Professor
Department of Psychology
Harvard University

Talia Konkle earned bachelor’s degrees in applied mathematics and in cognitive science at the University of California, Berkeley. Under the direction of Aude Oliva, she earned a PhD in Brain & Cognitive Science at MIT in 2011. Following exceptionally productive years as a postdoctoral fellow in the Department of Psychology at Harvard and at the University of Trento, Dr. Konkle assumed a faculty position in the Department of Psychology & Center for Brain Science at Harvard in 2015.

Dr. Konkle’s research into how our visual system organizes knowledge of objects, actions, and scenes combines elegant behavioral methods with modern analyses of brain activity and cutting-edge computational theories. Enabled by sheer originality and analytical rigor, she creates and crosses bridges between previously unrelated ideas and paradigms, producing highly cited publications in top journals. One line of research demonstrated that object processing mechanisms relate to the physical size of objects in the world. In pioneering research on massive visual memory, Dr. Konkle also showed that detailed visual long-term memory retrieval is linked more to conceptual than to perceptual properties.

Dr. Konkle’s productive laboratory is a vibrant training environment, attracting many graduate students and postdoctoral fellows. Dr. Konkle has also been actively involved in outreach activities devoted to promoting women and minorities in science.

From what things look like to what they are

Dr. Konkle will talk during the Awards Session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2

How do we see and recognize the world around us, and how do our brains organize all of this perceptual input? In this talk I will highlight some of the current research being conducted in my lab, exploring the representation of objects, actions, and scenes in the mind and brain.

2019 Student Travel Awards

Bianca Baltaretu
York University and NSERC Brain-in-Action Program
Advisor: J. Douglas Crawford

Brandon Carlos
University of Houston
Advisor: Benjamin Tamber-Rosenau

Samson Chota
Université de Toulouse Paul Sabatier
Advisor: Rufin VanRullen

Chaipat Chunharas
University of California, San Diego and Chulalongkorn University, Thailand
Advisor: Timothy F. Brady

Clara Colombatto
Yale University
Advisor: Brian Scholl

Aimee Dollman
University of Cape Town
Advisor: Mark Solms

Cameron Ellis
Yale University
Advisor: Nicholas B. Turk-Browne

Monika Graumann
Freie Universität Berlin
Advisor: Radoslaw Martin Cichy

Jasper Hajonides van der Meulen
University of Oxford
Advisors: Kia Nobre and Mark Stokes

Lisa Kroell
Humboldt-Universität zu Berlin
Advisors: Martin Rolfs & Paul Bays

Rakesh Nanjappa
SUNY College of Optometry
Advisor: Robert M. McPeek

Mónica Otero
Universidad Técnica Federico Santa María
Advisors: María-José Escobar and Wael El-Deredy

Stella Qian
Michigan State University
Advisor: Jan Brascamp

Zekun Sun
Johns Hopkins University
Advisor: Chaz Firestone

JohnMark Taylor
Harvard University
Advisor: Yaoda Xu

Chunyue Teng
George Washington University
Advisor: Dwight J. Kravitz

Matsya Thulasiram
University of Manitoba
Advisor: Jonathan Marotta

Rina Watanabe
The University of Electro-Communications
Advisor: Yoichi Miyawaki

Jiaxuan Zhang
Columbia University
Advisor: Gemma Roig

Liron Zipora Gruber
Weizmann Institute of Science
Advisors: Ehud Ahissar and Shimon Ullman

2019 Funding Workshops

VSS Workshop on Funding in the US

No registration required. First come, first served, until full.

Saturday, May 18, 2019, 12:45 – 1:45 pm, Sabal/Sawgrass

Moderator: David Brainard, University of Pennsylvania
Discussants: Todd Horowitz, National Cancer Institute; Lawrence R. Gottlob, National Science Foundation; and Cheri Wiggs, National Eye Institute

You have a great research idea, but you need money to make it happen. You need to write a grant. This workshop will address NIH and NSF funding mechanisms for vision research. Cheri Wiggs (National Eye Institute) and Todd Horowitz (National Cancer Institute) will provide insight into the inner workings of the NIH extramural research program. Larry Gottlob will represent the Social, Behavioral, and Economic (SBE) directorate of the NSF. There will be time for your questions.

Todd Horowitz

National Cancer Institute

Todd S. Horowitz, Ph.D., is a Program Director in the Behavioral Research Program’s (BRP) Basic Biobehavioral and Psychological Sciences Branch (BBPSB), located in the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute (NCI). Dr. Horowitz earned his doctorate in Cognitive Psychology at the University of California, Berkeley in 1995. Prior to joining NCI, he was Assistant Professor of Ophthalmology at Harvard Medical School and Associate Director of the Visual Attention Laboratory at Brigham and Women’s Hospital. He has published more than 70 peer-reviewed research papers in vision science and cognitive psychology. His research interests include attention, perception, medical image interpretation, cancer-related cognitive impairments, sleep, and circadian rhythms.

Lawrence R. Gottlob

National Science Foundation

Larry Gottlob, Ph.D., is a Program Director in the Perception, Action, and Cognition program at the National Science Foundation. His permanent home is in the Psychology Department at the University of Kentucky, but he is on his second rotation at NSF. Larry received his PhD from Arizona State University in 1995 and has worked in visual attention, memory, and cognitive aging.

Cheri Wiggs

National Eye Institute

Cheri Wiggs, Ph.D., serves as a Program Director at the National Eye Institute (of the National Institutes of Health). She oversees extramural funding through three programs — Perception & Psychophysics, Myopia & Refractive Errors, and Low Vision & Blindness Rehabilitation. She received her PhD from Georgetown University in 1991 and came to the NIH as a researcher in the Laboratory of Brain and Cognition. She made her jump to the administrative side of science in 1998 as a Scientific Review Officer. She currently represents the NEI on several trans-NIH coordinating committees (including BRAIN, Behavioral and Social Sciences Research, Medical Rehabilitation Research) and was appointed to the NEI Director’s Audacious Goals Initiative Working Group.

David Brainard

University of Pennsylvania

David H. Brainard is the RRL Professor of Psychology at the University of Pennsylvania. His research interests focus on human color vision, which he studies both experimentally and through computational modeling of visual processing. He is a fellow of the Optical Society, ARVO and the Association for Psychological Science. At present, he directs Penn’s Vision Research Center, serves as Associate Dean for the Natural Sciences in Penn’s School of Arts and Sciences, is an Associate Editor of the Journal of Vision, co-editor of the Annual Review of Vision Science, and president-elect of the Vision Sciences Society.

VSS Workshop on Funding Outside the US

No registration required. First come, first served, until full.

Sunday, May 19, 2019, 12:45 – 1:45 pm, Sabal/Sawgrass

Moderator: Laurie Wilcox, York University, Toronto

Panelists: Thiago Leiros Costa, KU Leuven; Anya Hurlbert, Newcastle University; Concetta Morrone, University of Pisa; and Cong Yu, Peking University

You have a great research idea, but you need money to make it happen. You need to write a grant. This funding workshop will be focused specifically on disseminating information about non-US funding mechanisms appropriate for vision research. The format of the workshop will be a moderated panel discussion driven by audience questions. The panelists are vision scientists, each of whom has experience with at least one non-US funding mechanism. Because funding opportunities are diverse and differ across countries, however, the workshop will also encourage information sharing from the audience.

Thiago Leiros Costa

KU Leuven

Thiago Leiros Costa is a Marie Skłodowska-Curie fellow at KU Leuven, Belgium. He is currently focused on assessing neural correlates of Gestalt-like phenomena and on the role that predictive processing plays in low- and mid-level vision. A neuropsychologist and visual neuroscientist, he is interested in basic research in the field of perception per se, but also in opportunities for translational research in psychology (using tasks and methods derived from basic research to address clinically relevant questions). This has led him to work with different clinical populations, currently focusing on visual predictive processing in autism. He has experience with multiple techniques, such as psychophysics, EEG, and non-invasive brain stimulation, and is currently planning his first study using fMRI.

Anya Hurlbert

Newcastle University

Anya Hurlbert is Professor of Visual Neuroscience, Director of the Centre for Translational Systems Neuroscience, and Dean of Advancement at Newcastle University. She co-founded Newcastle’s Institute of Neuroscience in 2003, serving as its co-Director until 2014. Hurlbert’s research focuses on colour perception and its role in everyday visual and cognitive tasks, in normal and atypical development and ageing. She is also interested in applied areas such as digital imaging and novel lighting technologies. Professor Hurlbert is active in the public understanding of science, and has devised and co-curated several science-based art exhibitions, including an interactive installation at the National Gallery, London, for its 2014 summer exhibition Making Colour. She is a former Chairman of the Colour Group (GB) and Scientist Trustee of the National Gallery, and currently serves on the editorial board of Current Biology as well as several international advisory boards. Funding for her personal research has come from the Wellcome Trust, UKRI (EPSRC/MRC), the European Commission (EU), charities, and industry. She is currently a PI in the EU H2020 Innovative Training Network “Dynamics in Vision and Touch”.

Concetta Morrone

University of Pisa

Maria Concetta Morrone is Professor of Physiology in the School of Medicine of the University of Pisa, Director of the Vision Laboratory of the IRCCS Fondazione Stella Maris, and Academic Director of the inter-University Masters in Neuroscience. She is a member of the prestigious Accademia dei Lincei and has been awarded major national and international prizes for scientific achievements. From an initial interest in biophysics and physiology, where she made many seminal contributions, she moved on to psychophysics and visual perception. Over the years her research has spanned spatial vision, development, plasticity, attention, color, motion, robotics, vision during eye movements and, more recently, multisensory perception and action. She has coordinated many European Community grants under a variety of funding schemes, and in 2014 was awarded an ERC-IDEA Advanced Grant for Excellence in Science.

Cong Yu

Peking University

Cong Yu is a professor at Peking University. He studies human perceptual learning using psychophysical methods, and macaque visual cortex using two-photon calcium imaging.

Laurie Wilcox

York University

Laurie M. Wilcox is a Professor in Psychology at York University, Toronto, Canada. She uses psychophysical methods to study stereoscopic depth perception. In addition to basic research in 3D vision, Laurie has been involved in understanding the factors that influence the viewer experience of 3D media (IMAX, Christie Digital) and perceptual distortions in VR (Qualcomm Canada). Her research has been funded primarily by the Natural Sciences and Engineering Research Council (NSERC) of Canada, which supports both her basic and applied research programs. She is also familiar with contract-based research in collaboration with industry and government agencies.

2019 Student Workshops

There is no advance sign-up for workshops. Workshops will be filled on a first-come, first-served basis.

Peer-networking for Students and Postdocs

Saturday, May 18, 2019, 12:45 – 1:45 pm, Jasmine/Palm
Moderators: Eileen Kowler, Talia Konkle, and Fulvio Domini

Peer-to-peer connections and networks can be the basis of your most important long-term collaborations and friendships. This workshop will help you meet and connect with your peer researchers, face to face. The format will be separate round tables dedicated to different topics, allowing opportunities for discussion and networking. Session moderators will help keep things organized. We’ll have at least one rotation during the workshop so that you will have the opportunity to talk to more people and explore more topics, including topics you’re working on now as well as areas of interest for the future.

Eileen Kowler

Rutgers University

Eileen Kowler is a Distinguished Professor at Rutgers University and Senior Associate Dean in the School of Graduate Studies. She received her doctoral degree from the University of Maryland and was a postdoc at NYU. She has been at Rutgers since 1980, where she maintains affiliations with the Department of Psychology and the Center for Cognitive Science. Kowler’s research focuses on the planning and generation of eye movements and their role in visual tasks. In her roles as a faculty member, VSS board member, and former principal investigator of an NSF training grant, she has a strong commitment to the topic of this workshop: creating opportunities for students and postdocs to develop their careers and collaborate with one another.

Talia Konkle

Harvard University

Talia Konkle is an Assistant Professor in the Department of Psychology at Harvard University. Her research characterizes mid- and high-level visual representation at both cognitive and neural levels. She received her B.A. in Applied Math and Cognitive Science from UC Berkeley in 2004 and her Ph.D. in Brain and Cognitive Science from MIT in 2011, and conducted her postdoctoral training at the University of Trento and Harvard until 2015. Talia is the recipient of the 2019 Elsevier/VSS Young Investigator Award.

Fulvio Domini

Brown University

Fulvio Domini is a Professor in the Department of Cognitive, Linguistic and Psychological Sciences at Brown University. He was hired at Brown in 1999, after completing a Ph.D. in Experimental Psychology at the University of Trieste, Italy, in 1997. His research team investigates how the human visual system processes 3D visual information to allow successful interactions with the environment. His approach combines computational methods and behavioral studies to identify the visual features that establish the mapping between vision and action. His research has been, and continues to be, funded by the National Science Foundation.

VSS Workshop for PhD Students and Postdocs:
How to Spend Your Time Well as a Young Researcher

Sunday, May 19, 2019, 12:45 – 1:45 pm, Jasmine/Palm
Moderator: Johan Wagemans, University of Leuven, Belgium
Panelists: Alex Holcombe, Niko Kriegeskorte, Allison Sekuler, and Kate Storrs

Graduate students and postdocs often wonder what they should spend their work time on, in addition to learning the skills of a good researcher, doing good research, and writing good papers. For instance, quite a few people write blogs or are very active on public forums (e.g., about open science, open source software, helpdesks for R, Python, etc.). Others have questions about how much time to spend on service to the profession, such as reviewing manuscripts. With all these choices, many developing researchers will face the challenge of striking the right balance between diversifying their professional activities and devoting time to the core requirements of their careers. This workshop will feature panelists who will provide perspectives on these issues and lead a discussion on the pros and cons of spending time on professional activities not directly related to research. If you think you have no time for this, you should definitely be there!

Alex Holcombe

University of Sydney

When not teaching or working on vision experiments, Alex Holcombe works to improve transparency in, and access to, research. To address the emerging reproducibility crisis in psychology, he co-created PsychFiledrawer.org in 2011 and introduced the Registered Replication Report at the journal Perspectives on Psychological Science in 2013. He was involved in the creation of the journal badges that signal open practices, the preprint server PsyArXiv, the new journal Advances in Methods and Practices in Psychological Science, and PsyOA.org, which provides resources for flipping a subscription journal to open access. Talk to him anytime on Twitter @ceptional.

Niko Kriegeskorte

Columbia University

Nikolaus Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. He received his PhD in Cognitive Neuroscience from Maastricht University, held postdoctoral positions at the Center for Magnetic Resonance Research at the University of Minnesota and the U.S. National Institute of Mental Health in Bethesda, and was a Programme Leader at the U.K. Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge. Kriegeskorte is a Professor at Columbia University, affiliated with the Departments of Psychology and Neuroscience. He is a Principal Investigator and Director of Cognitive Imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. Kriegeskorte is a co-founder of the conference “Cognitive Computational Neuroscience”, which had its inaugural meeting in September 2017 at Columbia University.

Allison Sekuler

McMaster University

Allison Sekuler is the Sandra Rotman Chair in Cognitive Neuroscience and Vice-President of Research at Baycrest Centre for Geriatric Care. She is also Managing Director of the Centre for Aging + Brain Health Innovation and of the world-renowned Rotman Research Institute. A graduate of Pomona College (BA, Mathematics and Psychology) and the University of California, Berkeley (PhD, Psychology), she holds faculty appointments at the University of Toronto and McMaster University, where she was the country’s first Canada Research Chair in Cognitive Neuroscience and established lasting collaborations with Japanese researchers. Dr. Sekuler has a notable record of scientific achievements in aging, vision science, neural plasticity, imaging, and neurotechnology. Her research focuses on perceptual organization and face perception, motion and depth perception, spatial and pattern vision, and age-related changes in vision. The recipient of numerous awards for research, teaching, and leadership, she has broad experience in senior academic, research, and innovation leadership roles, advancing internationalization, interdisciplinarity, skills development, entrepreneurship, and inclusivity.

Kate Storrs

Justus-Liebig University, Giessen

Kate Storrs is currently a Humboldt Postdoctoral Fellow using deep learning to study material perception at the Justus-Liebig University in Giessen, Germany. Before that she was a postdoc at the University of Cambridge, a Teaching Fellow at University College London, and a PhD student at the University of Queensland in Australia. Her main professional hobby is science communication. Kate has performed vision-science-themed stand-up comedy in London at the Royal Society, the Natural History Museum, the Bloomsbury Theatre, and a dozen pubs and festivals across the UK. She has presented vision science segments on Cambridge TV, the Naked Scientists podcast, BBC Cambridgeshire radio, and was a UK finalist in the 2016 FameLab international science communication competition. Always happy to talk on Twitter @katestorrs.

Johan Wagemans

University of Leuven, Belgium

Johan Wagemans is a professor of experimental psychology at the University of Leuven (KU Leuven) in Belgium. His current research interests are mainly in perceptual grouping, figure-ground organization, depth perception, shape perception, object perception, and scene perception, including applications in autism, arts, and sports (see www.gestaltrevision.be). He has published more than 300 peer-reviewed articles on these topics and edited the Oxford Handbook of Perceptual Organization (2015). In addition to supervising many PhD students and postdocs, he performs a great deal of community service, such as coordinating the Department of Brain & Cognition, serving as editor of Cognition, Perception, i-Perception, and Art & Perception, and organizing the European Conference on Visual Perception (ECVP) and the Visual Science of Art Conference (VSAC) in Leuven (August 2019).

2019 Davida Teller Award – Barbara Dosher

The Vision Sciences Society is honored to present Dr. Barbara Dosher with the 2019 Davida Teller Award

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding female vision scientist in recognition of her exceptional, lasting contributions to the field of vision science.

Barbara Dosher

Distinguished Professor, University of California, Irvine

Barbara Dosher is a researcher in the areas of visual attention and learning. She received her PhD in 1977 from the University of Oregon and served on the faculty at Columbia University (1977 – 1992) and the University of California, Irvine (1992 – present). Her early career investigated temporal properties of retrieval from long-term and working memory, and priming, using pioneering speed-accuracy tradeoff methods. She then transitioned to work largely in vision, bringing concepts of cue combination from memory research to initiate work on combining cues in visual perception. This was followed by the development of observer models based on external noise methods, which became the basis for proposing that changing templates, stimulus amplification, and noise filtering are the primary functions of attention. This and similar work then constrained and motivated new generative network models of visual perceptual learning, which have been used to understand the roles of feedback in unsupervised and supervised learning, the induction of bias in perception, and the central contributions of reweighting evidence to a decision in visual learning.

Barbara Dosher is an elected member of the Society for Experimental Psychologists and the National Academy of Sciences, and is a recipient of the Howard Crosby Warren Medal (2013) and the Atkinson Prize (2018).

Learning and Attention in Visual Perception

Dr. Dosher will speak during the Awards session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2.

Visual perception functions in the context of a dynamic system that is affected by experience and by top-down goals and strategies. Both learning and attention can improve perception that is limited by the noisiness of internal visual processes and noise in the environment. This brief talk will illustrate several examples of how learning and attention can improve how well we see by amplifying relevant stimuli while filtering others—and how important it is to model the coding or transformation of early features in the development of truly generative quantitative models of perceptual performance.
