2019 Symposium Organizer Instructions
Thank you for organizing a symposium for VSS. In addition to introducing each speaker, you are responsible for ensuring that your symposium session stays on time. Below are some suggestions to help make your session run smoothly.
You should first familiarize yourself with the Talk Presentation Instructions. All instructions apply, except for those related to the talk timers.
Staying on Time
Unlike talk session presentations, symposium talks do not adhere to a strict 15 minutes per talk. As the symposium organizer, you are free to allocate any amount of time you like to each talk. Be sure to let your speakers know how much time they have to talk and how much time is allowed for questions and answers. You may also want to reserve a Q&A period at the end of the symposium session.
Symposium sessions do not use the talk session timers. You must use your watch or phone to track time. To help keep speakers on time, we provide 5-minute and 1-minute warning signs that you can display to your speakers to let them know when they are approaching the end of their allotted talk time. You can find the warning signs at the lectern.
Before Your Session
- Arrive at least 30 minutes before your session start time.
- Introduce yourself to each of the speakers and to the technician. VSS staff will also be present prior to your session to oversee preparations and answer any questions. In addition, a member of the Board of Directors will attend the session and can help with any issues that arise.
- Confirm that all speakers are present. Report any missing speakers to the VSS staff person in the room.
- Make sure that there are NO DRINKS on the table with the computers to avoid costly accidents.
- Ensure that all presenters have set up their computers and tested the projection. Ask them to test audio and video, if any.
- Ask the presenters to take seats close to you in the front row.
- Familiarize all speakers with the talk room equipment:
- Microphones – Speakers can use either the lectern microphone or the lavalier (wireless lapel) microphone.
- Laser Pointer – Test the laser pointer and give it to the speaker.
- Talk Time – Make sure that all speakers know how much speaking time they have.
- Switch Box – The speakers should know where the box is and what button to push to put their presentation live.
At the End of Your Session
Have a brief chat with the speakers, technical staff, and Board member to determine if there were issues that should be communicated to the VSS staff.
Technical Assistance
If you have a problem of any kind, alert the technician in the room for assistance. The attending board member should note the problem and report it to VSS staff so it can be logged.
To reach the VSS Technical Manager, please call Jeff Wilson at 415-302-4107 or send someone to the Registration Desk. If you need to reach VSS staff at the meeting, send someone to the Registration Desk, call 727-367-6461 ext. 7814, or dial 7814 from a house phone.
2019 FABBS Early Career Impact Award
Congratulations to Julie Golomb, the VSS nominee and recipient of the 2019 Federation of Associations in Behavioral & Brain Sciences (FABBS) Early Career Impact Award.
The FABBS Early Career Impact Award honors early career scientists of FABBS member societies during the first 10 years post-PhD and recognizes scientists who have made major contributions to the sciences of mind, brain, and behavior. The goal is to enhance public visibility of these sciences and the particular research through the dissemination efforts of the FABBS in collaboration with the member societies and award winners.
Julie Golomb
Associate Professor
Ohio State University
Julie Golomb earned her bachelor’s degree in neuroscience from Brandeis University and her doctorate from Yale University. She completed post-doctoral research at MIT before joining the faculty at Ohio State in 2012 and receiving tenure in 2018. Her lab’s research is funded by grants from the National Institutes of Health, the Alfred P. Sloan Foundation, and the Ohio Supercomputer Center. For more information about Dr. Golomb and an overview of her article, go to Making Sense from Dots of Light on the FABBS website.
17th Annual Dinner and Demo Night
Monday, May 20, 2019, 6:00 – 10:00 pm
Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks and limited indoor seating in Banyan Breezeway
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall
Please join us Monday evening for the 17th Annual VSS Dinner and Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Karen Schloss, University of Wisconsin; Gennady Erlikhman, University of Nevada, Reno; and Benjamin Wolfe, MIT.
Demos are free to view for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a VSS Friends and Family Pass to attend the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. Guest passes may also be purchased at the BBQ event, beginning at 5:45 pm.
The following demos will be presented from 7:00 to 10:00 pm, in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall:
For the Last Time: The Ever-Popular Beuchet Chair
Peter Thompson, Rob Stone, and Tim Andrews, University of York
A favorite at Demo Night for many years, the Beuchet chair is back for one last hurrah. The two parts of the chair are at different distances, and the visual system fails to apply size constancy appropriately. The result is that people can be shrunk or turned into giants.
Paradoxical impact of memory on color appearance of faces
Rosa Lafer-Sousa, MIT
What is the function of color vision? In this demo we impair retinal mechanisms of color using monochromatic sodium light, and probe memory colors for familiar objects in a naturalistic setting. We showcase a surprising finding: faces, and only faces, provoke a paradoxical memory color, providing evidence that color contributes to face encoding and social communication.
Immersive and long lasting afterimages – experiences of altered self
Daw-An Wu, California Institute of Technology
Dark Adaptation + Bright Flashes = Rod Afterimages!
Shikaku no Mori: gamified vision tests
Kenchi Hosokawa, Kazushi Maruya, and Shin’ya Nishida, NTT Communication Science Laboratories
We gamified several vision tests. The games can be played in a short time (~3 minutes) and in a more entertaining way. Test sensitivities are sufficient for use as initial screening tests (see pretest data on our poster in the Sunday Pavilion session). The games can also be used for self-checks.
The UW Virtual Brain Project: Exploring the visual and auditory systems in virtual reality
Karen B. Schloss, Chris Racey, Simon Smith, Ross Tredinnick, Nathaniel Miller, Melissa Schoenlein, and Bas Rokers, University of Wisconsin – Madison
The UW Virtual Brain Project allows you to explore the visual system and auditory system in virtual reality. It helps to visualize the flow of information from sensory input to cortical processing. The ultimate aim of the project is to improve neuroscience education by leveraging natural abilities for space-based learning.
Fun with Birefringent Surfaces and Polarized Light
Gideon Caplovitz, University of Nevada Reno
What could possibly go wrong?
Generating hyper-realistic faces for use in vision science experiments
Joshua Peterson, Princeton University; Jordan Suchow, Stevens Institute of Technology; Stefan Uddenberg, Princeton University
Easily alter your photographic appearance in a bunch of interesting ways! We have developed a system to morph any face image along psychologically relevant dimensions using recent advances in deep neural networks (namely GANs).
Hidden in Plain Sight!
Peter April, Jean-Francois Hamelin, Danny Michaud, Sophie Kenny, VPixx Technologies
Can visual information be hidden in plain sight? We use the PROPixx 1440Hz projector, and the TRACKPixx 2kHz eye tracker, to demonstrate images which are invisible until you make a rapid eye movement. We implement retinal stabilization to show other images that fade during fixations. Do your eyes deceive?
The Magical Alberti Frame
Niko Troje and Adam Bebko, York University
Pictures are two things: objects in space and representations of spaces existing elsewhere. In this virtual reality experience, users use a magical frame to capture pictures that momentarily appear identical to the scene they reside in, but when users move, the pictures evoke unexpected and eerie perceptual changes and distortions.
Café-Wall illusion caused by shadows on the surface of a three-dimensional object
Kazushi Maruya, NTT Communication Science Laboratories; Yuki Fujita, Tokyo University of the Arts; Tomoko Ohtani, Tokyo University of the Arts
The Café-Wall illusion is a famous optical illusion in which parallel gray lines between displaced rows of black and white squares appear to be angled with respect to one another. In this demonstration, we show that the Café-Wall pattern can emerge when shadows are cast by multiple cuboids onto a 3D surface of varying depth.
Foveal Gravity: A Robust Illusion of Color-Location Misbinding
Cristina R. Ceja, Nicole L. Jardine, and Steven L. Franconeri, Northwestern University
Here we present a novel, robust color-location misbinding illusion that we call foveal gravity: objects and their features can be perceived accurately, but are often mislocalized to locations closer to the fovea under divided attention.
Multi Person VR walking experience with and without accuracy correction
Matthias Pusch and Andy Bell, WorldViz
Consumer VR systems are great fun, but they have limited accuracy when it comes to precisely tracking research participants. This demo will allow participants to experience firsthand how inaccurate these systems can be in an interactive multi-user setting within a large walkable virtual space.
Impossible Integration of Size and Weight: The Set-Subset Illusion
Isabel Won, Steven Gross, and Chaz Firestone, Johns Hopkins University
Perception can produce experiences that are *impossible*, such as a triangle with three 90° sides, or a circular staircase that ascends in every direction. Are there impossible experiences that we can not only see, but also *feel*? Here, we demonstrate the “Set-Subset Illusion” — whereby a set of objects can, impossibly, feel lighter than a member of that set!
The Illusory and Invisible Audiovisual Rabbit Illusions
Noelle Stiles, University of Southern California; Armand R. Tanguay, Jr., University of Southern California, Caltech; Ishani Ganguly, Caltech; Monica Li, Caltech, University of California, Berkeley; Carmel A. Levitan, Caltech, Occidental College; Yukiyasu Kamitani, Kyoto University; Shinsuke Shimojo, Caltech
Neuroscience often focuses on the prediction of future perception based on prior perception. However, information is also processed postdictively, such that later stimuli impact percepts of prior stimuli. We will demonstrate that audition can postdictively relocate an illusory flash or suppress a real flash in the Illusory and Invisible Audiovisual Rabbit Illusions.
Chopsticks Fusion
Ray Gottlieb, College of Syntonic Optometry
Have you noticed that your normal stereoscopic perception is never as strong as the stark, solid 3-dimensionality that you see in a stereoscope or virtual reality device? Chopstick Fusion is a simple and inexpensive stereo practice that develops spatial volume perception. I’ll bring chopsticks for everyone.
Moiré effects on real object’s appearances
Takahiro Kawabe and Masataka Sawayama, NTT Communication Science Laboratories; Tamio Hoshik, Sojo University
An intriguing moiré effect is demonstrated wherein a real bar object in front of stripe motion on an LCD display apparently deforms or rotates in depth. Changing bar orientation and/or a bar-display distance drastically modulates the appearance. Even invisible stripe motion causes a vivid change in bar appearances.
The motion aftereffect without motion: 1-D, 2-D and 3-D illusory motion from local adaptation to flicker
Mark Georgeson, Aston University, UK
Adapting to a flickering image induces vivid illusory motion on an appropriate stationary test pattern: a motion aftereffect without inducing motion. Motion can be seen in 1-D, 2-D or 3-D, depending on the images chosen, but the basis for the effect is local adaptation to temporal gradients of luminance change.
Monocular rivalry
Leone Burridge
An iPhone 5 drawing printed onto paper. The perceived colours fluctuate between blue/yellow and red/green.
Fast and blurry versus slow and clear: How stationary stimuli modify motion perception
Mark Wexler, Laboratoire Psychologie de la Perception, CNRS & Université Paris Descartes
Why do shooting stars look the way they do? Why do most moving objects look clear, even at saccadic speeds? Are there motion effects waiting to be explored beyond the frequency range of computer monitors? Come and find out!
Thatcherize your face
Andre Gouws, York Neuroimaging Centre, University of York; Peter Thompson, University of York
The Margaret Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherized’ in real time, and we will print a copy of the image for you to take away.
The caricature effect in data visualization: typical graphs produce negative learning
Jeremy Wilmer, Wellesley College
Graphs that display summary statistics without underlying distributions (e.g. bar/line/dot graphs with error bars) are commonly assumed to support robust information transfer. We demo an array of such graphs that falsify this assumption by stimulating negative learning relative to baseline in typical viewers.
Look where Simon says without delay
Katia Ripamonti, Cambridge Research Systems; Lloyd Smith, Cortech Solutions
Can you beat the Simon effect using your eye movements? Compete with other players to determine who can look where Simon says without delay. All you need to do is to control your eye movements before they run off. It sounds so simple and yet so difficult!
Illusory color induced by colored apparent-motion in the extreme-periphery
Takashi Suegami, Yamaha Motor Corporation, Caltech; Yusuke Shirai, Toyohashi University of Technology; Sara W. Adams, Caltech; Daw-An J. Wu, Caltech; Mohammad Shehata, Caltech, Toyohashi University of Technology; Shigeki Nakauchi, Toyohashi University of Technology; Shinsuke Shimojo, Caltech, Toyohashi University of Technology
Our new demo will show that foveal/parafoveal color cue with apparent motion can induce illusory color in the extreme-periphery (approx. 70°-90°) where cone cells are less distributed. One can experience, for example, clear red color perception for extreme-peripheral green flash, with isoluminant red cue (or vice versa).
The Magical Misdirection of Attention in Time
Anthony Barnhart, Carthage College
When we think of “misdirection,” we typically think of a magician drawing attention away from a spatial location. However, magicians also misdirect attention in time through the creation of “off-beats,” moments of suppressed attention. The “striking vanish” illusion, where a coin disappears when tapped with a pen, exploits this phenomenon.
How Can (Parts of) Planarians Survive Without their Brains and Eyes? -Hint: Its Extraocular UV-Sensitive System
Kensuke Shimojo, Chandler School; Eiko Shimojo, California Institute of Technology; Daw-An Wu, California Institute of Technology; Armand R. Tanguay, Jr., California Institute of Technology, University of Southern California; Mohammad Shehata, California Institute of Technology; Shinsuke Shimojo, California Institute of Technology
Planarian dissected body parts, even with incomplete eyespots, show “light avoiding behavior” long before the complete regrowth of the entire body (including the sensory-motor organs). We will demonstrate this phenomenon live (in Petri dishes) and on video under both no-UV (visible) and UV light stimulation. In a dynamic poster mode, we show some observations addressing whether or not the mechanical stress (dissection) switches dominance between the two vision systems.
The joy of intra-saccadic retinal painting
Richard Schweitzer, Humboldt-Universität zu Berlin; Tamara Watson, Western Sydney University; John Watson, Humboldt-Universität zu Berlin; Martin Rolfs, Humboldt-Universität zu Berlin
Is it possible to turn intra-saccadic motion blur – under normal circumstances omitted from conscious perception – into a salient stimulus? With the help of visual persistence, your own eye and/or head movements, and our custom-built setup for high-speed anorthoscopic presentation, you can paint beautiful images and amusing text directly onto your retina.
Build a camera obscura!
Ben Balas, North Dakota State University
Vision begins with the eye, and what better way to understand the eye than to build one? Come make your own camera obscura out of cardboard, tape, and paper, and you can observe basic principles of image formation and pinhole optics.
The Role of Color Filling-in in Natural Images
Christopher Tyler and Josh Solomon, City University of London
We demonstrate that natural images do not look very colorful when their color is restricted to edge transitions. Moreover, purely chromatic images with maximally graded transitions look fully colorful, implying that color filling-in makes no more than a minor contribution to the appearance of extended color regions in natural images.
Chopsticks trick your fingers
Songjoo Oh, Seoul National University
The famous rubber hand illusion is demonstrated using chopsticks and fingers. A pair of chopsticks simultaneously moves back and forth on your index and middle fingers, respectively. One chopstick is actually touching the middle finger, but the other is just moving in the air without touching the index finger. If you pay attention only to your index finger, you may erroneously feel that the touch comes from the index finger, not from the middle finger.
Spinning reflections on depth from spinning reflections
Michael Crognale and Alex Richardson, University of Nevada Reno
A trending novelty toy, when spun, induces a striking depth illusion from disparity in specular reflections from point sources. However, “specular” disparity from static curved surfaces is usually discounted or contributes to perceived surface curvature. Motion obscures surface features that compete with depth cues, resulting in a strong depth illusion.
High Speed Gaze-Contingent Visual Search
Kurt Debono and Dan McEchron, SR Research Ltd
Try to find the target in a visual search array which is continuously being updated based on the location of your gaze. High speed video based eye tracking combined with the latest high speed monitors make for a compelling challenge.
Interactions between visual movement and position
Stuart Anstis, University of California, San Diego; Sharif Saleki, Dartmouth College; Mart Ozkan, Dartmouth College; Patrick Cavanagh, York University
Movement paths can be distorted when they move across an oblique background grating (the Furrow illusion). These motions, viewed in the periphery, can be paradoxically immune to visual crowding. Conversely, moving backgrounds can massively distort static flashed targets, altering their perceived size, shape, position, and orientation (the flash-grab illusion).
StroboPong
VSS Staff
Back by popular demand. Strobe lights and ping pong!
2019 Young Investigator – Talia Konkle
The Vision Sciences Society is honored to present Talia Konkle with the 2019 Young Investigator Award.
The Young Investigator Award is an award given to an early stage researcher who has already made a significant contribution to our field. The award is sponsored by Elsevier, and the awardee is invited to submit a review paper to Vision Research highlighting this contribution.
Talia Konkle
Assistant Professor
Department of Psychology
Harvard University
Talia Konkle earned bachelor’s degrees in applied mathematics and in cognitive science at the University of California, Berkeley. Under the direction of Aude Oliva, she earned a PhD in Brain & Cognitive Science at MIT in 2011. Following exceptionally productive years as a postdoctoral fellow in the Department of Psychology at Harvard and at the University of Trento, Dr. Konkle assumed a faculty position in the Department of Psychology & Center for Brain Science at Harvard in 2015.
Dr. Konkle’s research to understand how our visual system organizes knowledge of objects, actions, and scenes combines elegant behavioral methods with modern analysis of brain activity and cutting-edge computational theories. Enabled by sheer originality and analytical rigor, she creates and crosses bridges between previously unrelated ideas and paradigms, producing highly cited publications in top journals. One line of research demonstrated that object processing mechanisms relate to the physical size of objects in the world. Pioneering research on massive visual memory, Dr. Konkle also showed that detailed visual long-term memory retrieval is linked more to conceptual than perceptual properties.
Dr. Konkle’s productive laboratory is a vibrant training environment, attracting many graduate students and postdoctoral fellows. Dr. Konkle has also been actively involved in outreach activities devoted to promoting women and minorities in science.
From what things look like to what they are
Dr. Konkle will talk during the Awards Session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2
How do we see and recognize the world around us, and how do our brains organize all of this perceptual input? In this talk I will highlight some of the current research being conducted in my lab, exploring the representation of objects, actions, and scenes in the mind and brain.
2019 Funding Workshops
VSS Workshop on Funding in the US
No registration required. First come, first served, until full.
Saturday, May 18, 2019, 12:45 – 1:45 pm, Sabal/Sawgrass
Moderator: David Brainard, University of Pennsylvania
Discussants: Todd Horowitz, National Cancer Institute; Lawrence R. Gottlob, National Science Foundation; and Cheri Wiggs, National Eye Institute
You have a great research idea, but you need money to make it happen. You need to write a grant. This workshop will address NIH and NSF funding mechanisms for vision research. Cheri Wiggs (National Eye Institute) and Todd Horowitz (National Cancer Institute) will provide insight into the inner workings of the NIH extramural research program. Larry Gottlob will represent the Social, Behavioral, and Economic (SBE) directorate of the NSF. There will be time for your questions.
VSS Workshop on Funding Outside the US
No registration required. First come, first served, until full.
Sunday, May 19, 2019, 12:45 – 1:45 pm, Sabal/Sawgrass
Moderator: Laurie Wilcox, York University, Toronto
Panelists: Thiago Leiros Costa, KU Leuven; Anya Hurlbert, Newcastle University; Concetta Morrone, University of Pisa; and Cong Yu, Peking University
You have a great research idea, but you need money to make it happen. You need to write a grant. This funding workshop will be focused specifically on disseminating information about non-US funding mechanisms appropriate for vision research. The format of the workshop will be a moderated panel discussion driven by audience questions. The panelists are vision scientists, each of whom has experience with at least one non-US funding mechanism. Because funding opportunities are diverse and differ across countries, however, the workshop will also encourage information sharing from the audience.
Thiago Leiros Costa
KU Leuven
Thiago Leiros Costa is a Marie Skłodowska-Curie fellow at KU Leuven, Belgium. He is currently focused on assessing neural correlates of Gestalt-like phenomena and on the role that predictive processing plays in low- and mid-level vision. As a neuropsychologist and visual neuroscientist, he is interested in basic research in the field of perception per se, but also in opportunities for translational research in psychology (using tasks and methods derived from basic research to address clinically relevant questions). This has led him to work with different clinical populations, currently focusing on visual predictive processing in autism. He has experience with multiple techniques, such as psychophysics, EEG, and non-invasive brain stimulation, and is currently planning his first study using fMRI.
Anya Hurlbert
Newcastle University
Anya Hurlbert is Professor of Visual Neuroscience, Director of the Centre for Translational Systems Neuroscience and Dean of Advancement at Newcastle University. She co-founded Newcastle’s Institute of Neuroscience in 2003, serving as its co-Director until 2014. Hurlbert’s research focuses on colour perception and its role in everyday visual and cognitive tasks, in normal and atypical development and ageing. She is also interested in applied areas such as digital imaging and novel lighting technologies. Professor Hurlbert is active in the public understanding of science, and has devised and co-curated several science-based art exhibitions, including an interactive installation at the National Gallery, London, for its 2014 summer exhibition Making Colour. She is former Chairman of the Colour Group (GB) and Scientist Trustee of the National Gallery, and currently on the editorial board of Current Biology as well as several international advisory boards. Funding for her personal research has come from the Wellcome Trust, UKRI (EPSRC/MRC), the European Commission (EU), charities, and industry. She is currently a PI in the EU H2020 Innovative Training Network “Dynamics in Vision and Touch”.
Concetta Morrone
University of Pisa
Maria Concetta Morrone is Professor of Physiology in the School of Medicine of the University of Pisa, Director of the Vision Laboratory of the IRCCS Fondazione Stella Maris, and Academic Director of the inter-University Masters in Neuroscience. She is a member of the prestigious Accademia dei Lincei and has been awarded major national and international prizes for scientific achievements. From an initial interest in biophysics and physiology, where she made many seminal contributions, she moved on to psychophysics and visual perception. Over the years her research has spanned spatial vision, development, plasticity, attention, color, motion, robotics, vision during eye movements and, more recently, multisensory perception and action. She has coordinated many European Community grants under many funding schemes, and in 2014 was awarded an ERC-IDEA Advanced Grant for Excellence in Science.
Cong Yu
Peking University
Cong Yu is a professor at Peking University. He studies human perceptual learning using psychophysical methods, and macaque visual cortex using two-photon calcium imaging.
Laurie Wilcox
York University
Laurie M. Wilcox is a Professor in Psychology at York University, Toronto, Canada. She uses psychophysical methods to study stereoscopic depth perception. In addition to basic research in 3D vision, Laurie has been involved in understanding the factors that influence the viewer experience of 3D media (IMAX, Christie Digital) and perceptual distortions in VR (Qualcomm Canada). Her research has been funded primarily by the Natural Sciences and Engineering Research Council (NSERC) of Canada which supports both basic and applied research programs. She is also familiar with contract-based research in collaboration with industry and government agencies.
2019 Student Workshops
There is no advance sign-up for workshops. Workshops will be filled on a first-come, first-served basis.
Peer-networking for Students and Postdocs
Saturday, May 18, 2019, 12:45 – 1:45 pm, Jasmine/Palm
Peer-to-peer connections and networks can be the basis of your most important long-term collaborations and friendships. This workshop will help you meet and connect to your peer researchers, face to face. The format will be separate round tables dedicated to different topics, allowing opportunities for discussion and networking. Session moderators will help keep things organized. We’ll have at least one rotation during the workshop so that you will have the opportunity to talk to more people and explore more topics, including topics you’re working on now, as well as areas of interest for the future.
Eileen Kowler
Rutgers University
Eileen Kowler is a Distinguished Professor at Rutgers University and Senior Associate Dean in the School of Graduate Studies. She received her doctoral degree from the University of Maryland, and was a postdoc at NYU. She has been at Rutgers since 1980, where she maintains affiliations with the Department of Psychology and Center for Cognitive Science. Kowler’s research focuses on the planning and generation of eye movements and their role in visual tasks. In her roles as a faculty member, VSS board member, and former principal investigator of an NSF training grant, she has a strong commitment to the topic of this workshop: creating opportunities for students and postdocs to develop their careers and collaborate with one another.
Talia Konkle
Harvard University
Talia Konkle is an Assistant Professor in the Department of Psychology at Harvard University. Her research characterizes mid- and high-level visual representation at both cognitive and neural levels. She received her B.A. in Applied Math and Cognitive Science at UC Berkeley in 2004, her Ph.D. from MIT in Brain and Cognitive Science in 2011, and conducted her postdoctoral training at the University of Trento and Harvard until 2015. Talia is the recipient of the 2019 Elsevier/VSS Young Investigator Award.
Fulvio Domini
Brown University
Fulvio Domini is a Professor in the Department of Cognitive, Linguistic and Psychological Sciences at Brown University. He was hired at Brown in 1999 after completing a Ph.D. in Experimental Psychology at the University of Trieste, Italy in 1997. His research team investigates how the human visual system processes 3D visual information to allow successful interactions with the environment. His approach combines computational methods and behavioral studies to understand which visual features establish the mapping between vision and action. His research has been, and is currently, funded by the National Science Foundation.
VSS Workshop for PhD Students and Postdocs:
Alex Holcombe
University of Sydney
When not teaching or working on vision experiments, Alex Holcombe works to improve transparency in and access to research. To address the emerging reproducibility crisis in psychology, in 2011 he co-created PsychFiledrawer.org, in 2013 introduced the Registered Replication Report at the journal Perspectives on Psychological Science, and appears in this cartoon about replication. He was involved in the creation of the journal badges to signal open practices, the preprint server PsyArxiv, the new journal Advances in Methods and Practices in Psychological Science, and PsyOA.org, which provides resources for flipping a subscription journal to open access. Talk to him anytime on Twitter @ceptional.
Nikolaus Kriegeskorte
Columbia University
Nikolaus Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. He received his PhD in Cognitive Neuroscience from Maastricht University, held postdoctoral positions at the Center for Magnetic Resonance Research at the University of Minnesota and the U.S. National Institute of Mental Health in Bethesda, and was a Programme Leader at the U.K. Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge. Kriegeskorte is a Professor at Columbia University, affiliated with the Departments of Psychology and Neuroscience. He is a Principal Investigator and Director of Cognitive Imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. Kriegeskorte is a co-founder of the conference “Cognitive Computational Neuroscience”, which had its inaugural meeting in September 2017 at Columbia University.
Allison Sekuler
McMaster University
Allison Sekuler is the Sandra Rotman Chair in Cognitive Neuroscience and Vice-President Research at Baycrest Centre for Geriatric Care. She is also Managing Director of the Centre for Aging + Brain Health Innovation and the world-renowned Rotman Research Institute. A graduate of Pomona College (BA, Mathematics and Psychology) and the University of California, Berkeley (PhD, Psychology), she holds faculty appointments at the University of Toronto and McMaster University, where she was the country’s first Canada Research Chair in Cognitive Neuroscience and established lasting collaborations with Japanese researchers. Dr. Sekuler has a notable record of scientific achievements in aging, vision science, neural plasticity, imaging, and neurotechnology. Her research focuses on perceptual organization and face perception, motion and depth perception, spatial and pattern vision, and age-related changes in vision. The recipient of numerous awards for research, teaching and leadership, she has broad experience in senior academic, research, and innovation leadership roles, advancing internationalization, interdisciplinarity, skills development, entrepreneurship, and inclusivity.
Kate Storrs
Justus-Liebig University, Giessen
Kate Storrs is currently a Humboldt Postdoctoral Fellow using deep learning to study material perception at the Justus-Liebig University in Giessen, Germany. Before that she was a postdoc at the University of Cambridge, a Teaching Fellow at University College London, and a PhD student at the University of Queensland in Australia. Her main professional hobby is science communication. Kate has performed vision-science-themed stand-up comedy in London at the Royal Society, the Natural History Museum, the Bloomsbury Theatre, and a dozen pubs and festivals across the UK. She has presented vision science segments on Cambridge TV, the Naked Scientists podcast, and BBC Cambridgeshire radio, and was a UK finalist in the 2016 FameLab international science communication competition. Always happy to talk on Twitter @katestorrs.
Johan Wagemans
University of Leuven, Belgium
Johan Wagemans is a professor in experimental psychology at the University of Leuven (KU Leuven) in Belgium. Current research interests are mainly in perceptual grouping, figure-ground organization, depth perception, shape perception, object perception, and scene perception, including applications in autism, arts, and sports (see www.gestaltrevision.be). He has published more than 300 peer-reviewed articles on these topics and has edited the Oxford Handbook of Perceptual Organization (2015). In addition to supervising many PhD students and postdocs, he does a great deal of community service, such as coordinating the Department of Brain & Cognition, serving as editor of Cognition, Perception, i-Perception, and Art & Perception, and organizing the European Conference on Visual Perception (ECVP) and the Visual Science of Art Conference (VSAC) in Leuven (August 2019).
2019 Davida Teller Award – Barbara Dosher
The Vision Sciences Society is honored to present Dr. Barbara Dosher with the 2019 Davida Teller Award
VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding female vision scientist in recognition of her exceptional, lasting contributions to the field of vision science.
Barbara Dosher
Distinguished Professor, University of California, Irvine
Barbara Dosher is a researcher in the areas of visual attention and learning. She received her PhD in 1977 from the University of Oregon and served on the faculty at Columbia University (1977 – 1992) and the University of California, Irvine (1992 – present). Her early career investigated temporal properties of retrieval from long-term and working memory, and priming using pioneering speed-accuracy tradeoff methods. She then transitioned to work largely in vision, bringing some of the concepts of cue combination in memory to initiate work on combining cues in visual perception. This was followed by work to develop observer models using external noise methods that went on to be the basis for proposing that changing templates, stimulus amplification, and noise filtering were the primary functions of attention. This and similar work then constrained and motivated new generative network models of visual perceptual learning that have been used to understand the roles of feedback in unsupervised and supervised learning, the induction of bias in perception, and the central contributions of reweighting evidence to a decision in visual learning.
Barbara Dosher is an elected member of the Society for Experimental Psychologists and the National Academy of Sciences, and is a recipient of the Howard Crosby Warren Medal (2013) and the Atkinson Prize (2018).
Learning and Attention in Visual Perception
Dr. Dosher will speak during the Awards session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2.
Visual perception functions in the context of a dynamic system that is affected by experience and by top-down goals and strategies. Both learning and attention can improve perception that is limited by the noisiness of internal visual processes and noise in the environment. This brief talk will illustrate several examples of how learning and attention can improve how well we see by amplifying relevant stimuli while filtering others—and how important it is to model the coding or transformation of early features in the development of truly generative quantitative models of perceptual performance.
VSS@ARVO 2019
Vision After Sight Restoration
Monday, April 29, 1:15 – 2:45 pm at ARVO 2019, Vancouver, Canada
Organizers: Lynne Kiorpes, Ulrike Grunert and David Brainard
Speakers: Holly Bridge, Krystel Huxlin, Sharon Gilad-Gutnick and Geoff Boynton
Visual deprivation during development can have a profound effect on adult visual function, with congenital or early acquired blindness representing one extreme regarding the degree of deprivation and adult sight loss representing another. As better treatments for blindness become available, a critical question concerns the nature of vision after the restoration of sight and the level of remaining visual system plasticity. This symposium will highlight recent progress in this area, as well as how vision therapy can best be deployed to optimize the quality of post-restoration vision. This is the biennial VSS@ARVO symposium, featuring speakers from the Vision Sciences Society.