2019 FABBS Early Career Impact Award

Congratulations to Julie Golomb, the VSS nominee and recipient of the 2019 Federation of Associations in Behavioral & Brain Sciences (FABBS) Early Career Impact Award.

The FABBS Early Career Impact Award honors early career scientists of FABBS member societies during the first 10 years post-PhD and recognizes scientists who have made major contributions to the sciences of mind, brain, and behavior. The goal is to enhance public visibility of these sciences and the particular research through the dissemination efforts of the FABBS in collaboration with the member societies and award winners.

Julie Golomb

Associate Professor
Ohio State University

Julie Golomb earned her bachelor’s degree in neuroscience from Brandeis University and her doctorate from Yale University. She completed post-doctoral research at MIT before joining the faculty at Ohio State in 2012 and receiving tenure in 2018. Her lab’s research is funded by grants from the National Institutes of Health, the Alfred P. Sloan Foundation, and the Ohio Supercomputer Center. For more information about Dr. Golomb and an overview of her article, go to Making Sense from Dots of Light on the FABBS website.

Making Sense from Dots of Light

For Julie Golomb, it all started with a college course in visual perception. “I realized that all of these things I take for granted about how I perceive the world are actually really hard challenges for the brain to solve.”
How do we recognize our coffee mug? How do we pick out a friend’s face in the crowd? Or know that the round, white and black thing flying at us is, in fact, a soccer ball?
This constant bombardment of rich and usually moving pictures starts out simply as dots of light hitting different spots on the retina.
Those dots create a map of where things are in the world before heading to the brain, where the deep processing takes place that Golomb studies in her lab.
While the brain is busy almost instantaneously processing incoming data, the world outside is continuously moving and changing, as are our eyes, an emphasis in Golomb’s lab.
In one experiment, Golomb may ask volunteers to determine whether two objects that appear on a computer monitor are the same shape. “Or we’ll flash a bunch of different objects on the screen and then ask, ‘What color was presented in a certain location?’”
Among interesting findings: When asked to pay attention to two squares of different colors, such as red and blue, volunteers might mistakenly describe one of the colors afterward as purple.
“The brain has a hard job, and it does a remarkable job,” Golomb says. “But it is not perfect.” A lot of learning about the brain is based on its mistakes.
Golomb also asks volunteers to complete tasks while connected to tools such as functional MRI, which images their brain, or an EEG machine, which records electrical activity on the scalp. She uses sophisticated computer models to analyze how their brains process information.
As the technology changes and develops, so do the possibilities with brain research. And it’s not just new equipment. “We’re asking better questions and new questions based on what we’re continually learning.”

17th Annual Dinner and Demo Night

Monday, May 20, 2019, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks and limited indoor seating in Banyan Breezeway
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall

Please join us Monday evening for the 17th Annual VSS Dinner and Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Karen Schloss, University of Wisconsin; Gennady Erlikhman, University of Nevada, Reno; and Benjamin Wolfe, MIT.

Demos are free to view for all registered VSS attendees and their families and guests. The Beach BBQ is free for attendees, but YOU MUST WEAR YOUR BADGE to receive dinner. Guests and family members must purchase a VSS Friends and Family Pass to attend the Beach BBQ. You can register your guests at any time at the VSS Registration Desk, located in the Grand Palm Colonnade. Guest passes may also be purchased at the BBQ event, beginning at 5:45 pm.

The following demos will be presented from 7:00 to 10:00 pm, in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, Spotted Curlew and Jacaranda Hall:

For the Last Time: The Ever-Popular Beuchet Chair

Peter Thompson, Rob Stone, and Tim Andrews, University of York

A favorite at Demo Night for many years, the Beuchet chair is back for one last hurrah. The two parts of the chair are at different distances, and the visual system fails to apply size constancy appropriately. The result is that people can be shrunk or made into giants.
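The geometry behind the trick can be sketched in a few lines of Python. This is a hypothetical illustration of size constancy arithmetic; the sizes and distances below are made up, not measurements from the actual chair.

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle (in degrees) subtended by an object of a given size
    at a given viewing distance: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Identical 1-meter chair parts placed at different distances:
near = visual_angle_deg(1.0, 2.0)  # near part, 2 m away
far = visual_angle_deg(1.0, 8.0)   # identical part, 8 m away

# The far part subtends roughly a quarter of the visual angle of the near
# part. If the visual system treats both parts as equidistant, size
# constancy fails: the same person can appear shrunken or gigantic.
print(f"near: {near:.1f} deg, far: {far:.1f} deg")
```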

Paradoxical impact of memory on color appearance of faces

Rosa Lafer-Sousa, MIT

What is the function of color vision? In this demo we impair retinal mechanisms of color using monochromatic sodium light, and probe memory colors for familiar objects in a naturalistic setting. We showcase a surprising finding: faces, and only faces, provoke a paradoxical memory color, providing evidence that color contributes to face encoding and social communication.

Immersive and long-lasting afterimages – experiences of altered self

Daw-An Wu, California Institute of Technology

Dark Adaptation + Bright Flashes = Rod Afterimages!

Shikaku no Mori: gamified vision tests

Kenchi Hosokawa, Kazushi Maruya, and Shin’ya Nishida, NTT Communication Science Laboratories

We have gamified several vision tests. The games can be played in a short time (~3 minutes) and in a more entertaining way, and their test sensitivities are sufficient for initial screening (see pretest data on our poster in the Sunday Pavilion session). The games are also suitable for self-checks.

The UW Virtual Brain Project: Exploring the visual and auditory systems in virtual reality

Karen B. Schloss, Chris Racey, Simon Smith, Ross Tredinnick, Nathaniel Miller, Melissa Schoenlein, and Bas Rokers, University of Wisconsin – Madison

The UW Virtual Brain Project allows you to explore the visual system and auditory system in virtual reality. It helps visualize the flow of information from sensory input to cortical processing. The ultimate aim of the project is to improve neuroscience education by leveraging natural abilities for space-based learning.

Fun with Birefringent Surfaces and Polarized Light

Gideon Caplovitz, University of Nevada Reno

What could possibly go wrong?

Generating hyper-realistic faces for use in vision science experiments

Joshua Peterson, Princeton University; Jordan Suchow, Stevens Institute of Technology; Stefan Uddenberg, Princeton University

Easily alter your photographic appearance in a bunch of interesting ways! We have developed a system to morph any face image along psychologically relevant dimensions using recent advances in deep neural networks (namely GANs).

Hidden in Plain Sight!

Peter April, Jean-Francois Hamelin, Danny Michaud, Sophie Kenny, VPixx Technologies

Can visual information be hidden in plain sight? We use the PROPixx 1440Hz projector, and the TRACKPixx 2kHz eye tracker, to demonstrate images which are invisible until you make a rapid eye movement. We implement retinal stabilization to show other images that fade during fixations. Do your eyes deceive?

The Magical Alberti Frame

Niko Troje and Adam Bebko, York University

Pictures are two things: objects in space and representations of spaces existing elsewhere. In this virtual reality experience, users use a magical frame to capture pictures that momentarily appear identical to the scene they reside in, but when users move, the pictures evoke unexpected and eerie perceptual changes and distortions.

Café Wall illusion caused by shadows on the surface of a three-dimensional object

Kazushi Maruya, NTT Communication Science Laboratories; Yuki Fujita, Tokyo University of the Arts; Tomoko Ohtani, Tokyo University of the Arts

The Café Wall illusion is a famous optical illusion in which parallel gray lines between displaced rows of black and white squares appear to be angled with respect to one another. In this demonstration, we show that the Café Wall pattern can emerge when shadows are cast by multiple cuboids onto a 3D surface of varying depths.

Foveal Gravity: A Robust Illusion of Color-Location Misbinding

Cristina R. Ceja, Nicole L. Jardine, and Steven L. Franconeri, Northwestern University

Here we present a novel, robust color-location misbinding illusion that we call foveal gravity: objects and their features can be perceived accurately, yet are often mislocalized to locations closer to the fovea under divided attention.

Multi-person VR walking experience with and without accuracy correction

Matthias Pusch and Andy Bell, WorldViz

Consumer VR systems are great fun, but they have limited accuracy when it comes to precisely tracking research participants. This demo will allow participants to experience firsthand how inaccurate these systems can be in an interactive multi-user setting within a large walkable virtual space.

Impossible Integration of Size and Weight: The Set-Subset Illusion

Isabel Won, Steven Gross, and Chaz Firestone, Johns Hopkins University

Perception can produce experiences that are *impossible*, such as a triangle with three 90° sides, or a circular staircase that ascends in every direction. Are there impossible experiences that we can not only see, but also *feel*? Here, we demonstrate the “Set-Subset Illusion” — whereby a set of objects can, impossibly, feel lighter than a member of that set!

The Illusory and Invisible Audiovisual Rabbit Illusions

Noelle Stiles, University of Southern California; Armand R. Tanguay, Jr., University of Southern California, Caltech; Ishani Ganguly, Caltech; Monica Li, Caltech, University of California, Berkeley; Carmel A. Levitan, Caltech, Occidental College; Yukiyasu Kamitani, Kyoto University; Shinsuke Shimojo, Caltech

Neuroscience often focuses on the prediction of future perception based on prior perception. However, information is also processed postdictively, such that later stimuli impact percepts of prior stimuli. We will demonstrate that audition can postdictively relocate an illusory flash or suppress a real flash in the Illusory and Invisible Audiovisual Rabbit Illusions.

Chopsticks Fusion

Ray Gottlieb, College of Syntonic Optometry

Have you noticed that your normal stereoscopic perception is never as strong as the stark, solid 3-dimensionality that you see in a stereoscope or virtual reality device? Chopstick Fusion is a simple and inexpensive stereo practice that develops spatial volume perception. I’ll bring chopsticks for everyone.

Moiré effects on a real object’s appearance

Takahiro Kawabe and Masataka Sawayama, NTT Communication Science Laboratories; Tamio Hoshik, Sojo University

An intriguing moiré effect is demonstrated wherein a real bar object in front of stripe motion on an LCD display apparently deforms or rotates in depth. Changing bar orientation and/or a bar-display distance drastically modulates the appearance. Even invisible stripe motion causes a vivid change in bar appearances.

The motion aftereffect without motion: 1-D, 2-D and 3-D illusory motion from local adaptation to flicker

Mark Georgeson, Aston University, UK

Adapting to a flickering image induces vivid illusory motion on an appropriate stationary test pattern: a motion aftereffect without inducing motion. Motion can be seen in 1-D, 2-D or 3-D, depending on the images chosen, but the basis for the effect is local adaptation to temporal gradients of luminance change.

Monocular rivalry

Leone Burridge

An iPhone 5 drawing printed onto paper. The perceived colours fluctuate between blue/yellow and red/green.

Fast and blurry versus slow and clear: How stationary stimuli modify motion perception

Mark Wexler, Laboratoire Psychologie de la Perception, CNRS & Université Paris Descartes

Why do shooting stars look the way they do? Why do most moving objects look clear, even at saccadic speeds? Are there motion effects waiting to be explored beyond the frequency range of computer monitors? Come and find out!

Thatcherize your face

Andre Gouws, York Neuroimaging Centre, University of York; Peter Thompson, University of York

The Margaret Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherized’ in real time, and we will print a copy of the image for you to take away.

The caricature effect in data visualization: typical graphs produce negative learning

Jeremy Wilmer, Wellesley College

Graphs that display summary statistics without underlying distributions (e.g. bar/line/dot graphs with error bars) are commonly assumed to support robust information transfer. We demo an array of such graphs that falsify this assumption by stimulating negative learning relative to baseline in typical viewers.

Look where Simon says without delay

Katia Ripamonti, Cambridge Research Systems; Lloyd Smith, Cortech Solutions

Can you beat the Simon effect using your eye movements? Compete with other players to determine who can look where Simon says without delay. All you need to do is to control your eye movements before they run off. It sounds so simple and yet so difficult!

Illusory color induced by colored apparent-motion in the extreme-periphery

Takashi Suegami, Yamaha Motor Corporation, Caltech; Yusuke Shirai, Toyohashi University of Technology; Sara W. Adams, Caltech; Daw-An J. Wu, Caltech; Mohammad Shehata, Caltech, Toyohashi University of Technology; Shigeki Nakauchi, Toyohashi University of Technology; Shinsuke Shimojo, Caltech, Toyohashi University of Technology

Our new demo will show that a foveal/parafoveal color cue with apparent motion can induce illusory color in the extreme periphery (approx. 70°–90°), where cone cells are sparsely distributed. One can experience, for example, a clear red color percept for an extreme-peripheral green flash, with an isoluminant red cue (or vice versa).

The Magical Misdirection of Attention in Time

Anthony Barnhart, Carthage College

When we think of “misdirection,” we typically think of a magician drawing attention away from a spatial location. However, magicians also misdirect attention in time through the creation of “off-beats,” moments of suppressed attention. The “striking vanish” illusion, where a coin disappears when tapped with a pen, exploits this phenomenon.

How Can (Parts of) Planarians Survive Without Their Brains and Eyes? Hint: Their Extraocular UV-Sensitive System

Kensuke Shimojo, Chandler School; Eiko Shimojo, California Institute of Technology; Daw-An Wu, California Institute of Technology; Armand R. Tanguay, Jr., California Institute of Technology, University of Southern California; Mohammad Shehata, California Institute of Technology; Shinsuke Shimojo, California Institute of Technology

Planarian dissected body parts, even with incomplete eyespots, show “light avoiding behavior” long before the complete regrowth of the entire body (including the sensory-motor organs). We will demonstrate this phenomenon live (in Petri dishes) and on video under both no-UV (visible) and UV light stimulation. In a dynamic poster mode, we show some observations addressing whether or not the mechanical stress (dissection) switches dominance between the two vision systems.

The joy of intra-saccadic retinal painting

Richard Schweitzer, Humboldt-Universität zu Berlin; Tamara Watson, Western Sydney University; John Watson, Humboldt-Universität zu Berlin; Martin Rolfs, Humboldt-Universität zu Berlin

Is it possible to turn intra-saccadic motion blur – under normal circumstances omitted from conscious perception – into a salient stimulus? With the help of visual persistence, your own eye and/or head movements, and our custom-built setup for high-speed anorthoscopic presentation, you can paint beautiful images and amusing text directly onto your retina.

Build a camera obscura!

Ben Balas, North Dakota State University

Vision begins with the eye, and what better way to understand the eye than to build one? Come make your own camera obscura out of cardboard, tape, and paper, and you can observe basic principles of image formation and pinhole optics.
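The image formation the camera obscura demonstrates is just similar triangles. A minimal sketch in Python, with made-up example numbers rather than anything from the demo itself:

```python
def image_height_m(object_height_m, object_distance_m, pinhole_to_screen_m):
    """Height of the inverted image a pinhole casts on the screen.
    By similar triangles: image / screen_distance = object / object_distance."""
    return object_height_m * pinhole_to_screen_m / object_distance_m

# A 1.7 m person standing 5 m from the pinhole, projected onto a screen
# 0.3 m behind it, casts an inverted image roughly 10 cm tall.
h = image_height_m(1.7, 5.0, 0.3)
print(f"image height: {h:.3f} m")
```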

The Role of Color Filling-in in Natural Images

Christopher Tyler and Josh Solomon, City University of London

We demonstrate that natural images do not look very colorful when their color is restricted to edge transitions. Moreover, purely chromatic images with maximally graded transitions look fully colorful, implying that color filling-in makes no more than a minor contribution to the appearance of extended color regions in natural images.

Chopsticks trick your fingers

Songjoo Oh, Seoul National University

The famous rubber hand illusion is demonstrated using chopsticks and fingers. Two chopsticks simultaneously move back and forth over your index and middle fingers. One chopstick actually touches the middle finger, while the other moves in the air without touching the index finger. If you pay attention only to your index finger, you may erroneously feel the touch as coming from the index finger rather than the middle finger.

Spinning reflections on depth from spinning reflections

Michael Crognale and Alex Richardson, University of Nevada Reno

A trending novelty toy, when spun, induces a striking depth illusion from disparity in the specular reflections of point sources. Normally, specular disparity from static curved surfaces is discounted or attributed to surface curvature. Spinning obscures the surface features that compete with the depth cues, resulting in a strong depth illusion.

High Speed Gaze-Contingent Visual Search

Kurt Debono and Dan McEchron, SR Research Ltd

Try to find the target in a visual search array which is continuously being updated based on the location of your gaze. High speed video based eye tracking combined with the latest high speed monitors make for a compelling challenge.

Interactions between visual movement and position

Stuart Anstis, University of California, San Diego; Sharif Saleki, Dartmouth College; Mart Ozkan, Dartmouth College; Patrick Cavanagh, York University

Movement paths can be distorted when they move across an oblique background grating (the Furrow illusion). These motions, viewed in the periphery, can be paradoxically immune to visual crowding. Conversely, moving backgrounds can massively distort static flashed targets, altering their perceived size, shape, position, and orientation (the flash-grab illusion).

StroboPong

VSS Staff

Back by popular demand. Strobe lights and ping pong!

2019 Young Investigator – Talia Konkle

The Vision Sciences Society is honored to present Talia Konkle with the 2019 Young Investigator Award.

The Young Investigator Award is an award given to an early stage researcher who has already made a significant contribution to our field. The award is sponsored by Elsevier, and the awardee is invited to submit a review paper to Vision Research highlighting this contribution.

Talia Konkle

Assistant Professor
Department of Psychology
Harvard University

Talia Konkle earned bachelor’s degrees in applied mathematics and cognitive science at the University of California, Berkeley. Under the direction of Aude Oliva, she earned a PhD in Brain & Cognitive Science at MIT in 2011. Following exceptionally productive years as a postdoctoral fellow in the Department of Psychology at Harvard and at the University of Trento, Dr. Konkle assumed a faculty position in the Department of Psychology & Center for Brain Science at Harvard in 2015.

Dr. Konkle’s research to understand how our visual system organizes knowledge of objects, actions, and scenes combines elegant behavioral methods with modern analysis of brain activity and cutting-edge computational theories. Enabled by sheer originality and analytical rigor, she creates and crosses bridges between previously unrelated ideas and paradigms, producing highly cited publications in top journals. One line of research demonstrated that object processing mechanisms relate to the physical size of objects in the world. Pioneering research on massive visual memory, Dr. Konkle also showed that detailed visual long-term memory retrieval is linked more to conceptual than perceptual properties.

Dr. Konkle’s productive laboratory is a vibrant training environment, attracting many graduate students and postdoctoral fellows. Dr. Konkle has also been actively involved in outreach activities devoted to promoting women and minorities in science.

From what things look like to what they are

Dr. Konkle will talk during the Awards Session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2

How do we see and recognize the world around us, and how do our brains organize all of this perceptual input? In this talk I will highlight some of the current research being conducted in my lab, exploring the representation of objects, actions, and scenes in the mind and brain.

 


2019 Funding Workshops

VSS Workshop on Funding in the US

No registration required. First come, first served, until full.

Saturday, May 18, 2019, 12:45 – 1:45 pm, Sabal/Sawgrass

Moderator: David Brainard, University of Pennsylvania
Discussants: Todd Horowitz, National Cancer Institute; Lawrence R. Gottlob, National Science Foundation; and Cheri Wiggs, National Eye Institute

You have a great research idea, but you need money to make it happen. You need to write a grant. This workshop will address NIH and NSF funding mechanisms for vision research. Cheri Wiggs (National Eye Institute) and Todd Horowitz (National Cancer Institute) will provide insight into the inner workings of the NIH extramural research program. Larry Gottlob will represent the Social, Behavioral, and Economic (SBE) directorate of the NSF. There will be time for your questions.

Todd Horowitz

National Cancer Institute

Todd S. Horowitz, Ph.D., is a Program Director in the Behavioral Research Program’s (BRP) Basic Biobehavioral and Psychological Sciences Branch (BBPSB), located in the Division of Cancer Control and Population Sciences (DCCPS) at the National Cancer Institute (NCI). Dr. Horowitz earned his doctorate in Cognitive Psychology at the University of California, Berkeley in 1995. Prior to joining NCI, he was Assistant Professor of Ophthalmology at Harvard Medical School and Associate Director of the Visual Attention Laboratory at Brigham and Women’s Hospital. He has published more than 70 peer-reviewed research papers in vision science and cognitive psychology. His research interests include attention, perception, medical image interpretation, cancer-related cognitive impairments, sleep, and circadian rhythms.

Lawrence R. Gottlob

National Science Foundation

Larry Gottlob, Ph.D., is a Program Director in the Perception, Action, and Cognition program at the National Science Foundation. His permanent home is in the Psychology Department at the University of Kentucky, but he is on his second rotation at NSF. Larry received his PhD from Arizona State University in 1995 and has worked in visual attention, memory, and cognitive aging.

Cheri Wiggs

National Eye Institute

Cheri Wiggs, Ph.D., serves as a Program Director at the National Eye Institute (of the National Institutes of Health). She oversees extramural funding through three programs — Perception & Psychophysics, Myopia & Refractive Errors, and Low Vision & Blindness Rehabilitation. She received her PhD from Georgetown University in 1991 and came to the NIH as a researcher in the Laboratory of Brain and Cognition. She made her jump to the administrative side of science in 1998 as a Scientific Review Officer. She currently represents the NEI on several trans-NIH coordinating committees (including BRAIN, Behavioral and Social Sciences Research, Medical Rehabilitation Research) and was appointed to the NEI Director’s Audacious Goals Initiative Working Group.

David Brainard

University of Pennsylvania

David H. Brainard is the RRL Professor of Psychology at the University of Pennsylvania. His research interests focus on human color vision, which he studies both experimentally and through computational modeling of visual processing. He is a fellow of the Optical Society, ARVO and the Association for Psychological Science. At present, he directs Penn’s Vision Research Center, serves as Associate Dean for the Natural Sciences in Penn’s School of Arts and Sciences, is an Associate Editor of the Journal of Vision, co-editor of the Annual Review of Vision Science, and president-elect of the Vision Sciences Society.

VSS Workshop on Funding Outside the US

No registration required. First come, first served, until full.

Sunday, May 19, 2019, 12:45 – 1:45 pm, Sabal/Sawgrass

Moderator: Laurie Wilcox, York University, Toronto

Panelists: Thiago Leiros Costa, KU Leuven; Anya Hurlbert, Newcastle University; Concetta Morrone, University of Pisa; and Cong Yu, Peking University

You have a great research idea, but you need money to make it happen. You need to write a grant. This funding workshop will be focused specifically on disseminating information about non-US funding mechanisms appropriate for vision research. The format of the workshop will be a moderated panel discussion driven by audience questions. The panelists are vision scientists, each of whom has experience with at least one non-US funding mechanism. Because funding opportunities are diverse and differ across countries, however, the workshop will also encourage information sharing from the audience.

Thiago Leiros Costa

KU Leuven

Thiago Leiros Costa is a Marie Skłodowska-Curie fellow at KU Leuven, Belgium. He is currently focused on assessing neural correlates of Gestalt-like phenomena and on the role that predictive processing plays in low- and mid-level vision. As a neuropsychologist and visual neuroscientist, he is interested in basic research on perception per se, but also in opportunities for translational research in psychology (using tasks and methods derived from basic research to address clinically relevant questions). This has led him to work with different clinical populations, currently focusing on visual predictive processing in autism. He has experience with multiple techniques, such as psychophysics, EEG, and non-invasive brain stimulation, and is currently planning his first study using fMRI.

Anya Hurlbert

Newcastle University

Anya Hurlbert is Professor of Visual Neuroscience, Director of the Centre for Translational Systems Neuroscience, and Dean of Advancement at Newcastle University. She co-founded Newcastle’s Institute of Neuroscience in 2003, serving as its co-Director until 2014. Hurlbert’s research focuses on colour perception and its role in everyday visual and cognitive tasks, in normal and atypical development and ageing. She is also interested in applied areas such as digital imaging and novel lighting technologies. Professor Hurlbert is active in the public understanding of science, and has devised and co-curated several science-based art exhibitions, including an interactive installation at the National Gallery, London, for its 2014 summer exhibition Making Colour. She is a former Chairman of the Colour Group (GB) and a former Scientist Trustee of the National Gallery, and currently serves on the editorial board of Current Biology as well as several international advisory boards. Funding for her personal research has come from the Wellcome Trust, UKRI (EPSRC/MRC), the European Commission (EU), charities, and industry. She is currently a PI in the EU H2020 Innovative Training Network “Dynamics in Vision and Touch”.

Concetta Morrone

University of Pisa

Maria Concetta Morrone is Professor of Physiology in the School of Medicine of the University of Pisa, Director of the Vision Laboratory of the IRCCS Fondazione Stella Maris, and Academic Director of the inter-University Masters in Neuroscience. She is a member of the prestigious Accademia dei Lincei and has been awarded major national and international prizes for her scientific achievements. From an initial interest in biophysics and physiology, where she made many seminal contributions, she moved on to psychophysics and visual perception. Over the years her research has spanned spatial vision, development, plasticity, attention, color, motion, robotics, vision during eye movements, and more recently multisensory perception and action. She has coordinated many European Community grants under several funding schemes, and in 2014 was awarded an ERC-IDEA Advanced Grant for Excellence in Science.

Cong Yu

Peking University

Cong Yu is a professor at Peking University. He studies human perceptual learning using psychophysical methods, and macaque visual cortex using two-photon calcium imaging.

Laurie Wilcox

York University

Laurie M. Wilcox is a Professor in Psychology at York University, Toronto, Canada.  She uses psychophysical methods to study stereoscopic depth perception. In addition to basic research in 3D vision, Laurie has been involved in understanding the factors that influence the viewer experience of 3D media (IMAX, Christie Digital) and perceptual distortions in VR (Qualcomm Canada). Her research has been funded primarily by the Natural Sciences and Engineering Research Council (NSERC) of Canada which supports both basic and applied research programs. She is also familiar with contract-based research in collaboration with industry and government agencies.

2019 Student Workshops

There is no advance sign-up for workshops. Workshops will be filled on a first-come, first-served basis.

Peer-networking for Students and Postdocs

Saturday, May 18, 2019, 12:45 – 1:45 pm, Jasmine/Palm
Moderators: Eileen Kowler, Talia Konkle, and Fulvio Domini

Peer-to-peer connections and networks can be the basis of your most important long-term collaborations and friendships.  This workshop will help you meet and connect to your peer researchers, face to face.  The format will be separate round tables dedicated to different topics, allowing opportunities for discussion and networking.  Session moderators will help keep things organized. We’ll have at least one rotation during the workshop so that you will have the opportunity to talk to more people and explore more topics, including topics you’re working on now, as well as areas of interest for the future.

Eileen Kowler

Rutgers University

Eileen Kowler is a Distinguished Professor at Rutgers University and Senior Associate Dean in the School of Graduate Studies. She received her doctoral degree from the University of Maryland and was a postdoc at NYU. She has been at Rutgers since 1980, where she maintains affiliations with the Department of Psychology and the Center for Cognitive Science. Kowler’s research focuses on the planning and generation of eye movements and their role in visual tasks. In her roles as a faculty member, VSS board member, and former principal investigator of an NSF training grant, she has a strong commitment to the topic of this workshop: creating opportunities for students and postdocs to develop their careers and collaborate with one another.

Talia Konkle

Harvard University

Talia Konkle is an Assistant Professor in the Department of Psychology at Harvard University.  Her research characterizes mid and high-level visual representation at both cognitive and neural levels. She received her B.A. in Applied Math and Cognitive Science at UC Berkeley in 2004, her Ph.D. from MIT in Brain and Cognitive Science in 2011, and conducted her postdoctoral training at University of Trento and Harvard until 2015. Talia is the recipient of the 2019 Elsevier/VSS Young Investigator Award.

Fulvio Domini

Brown University

Fulvio Domini is a Professor in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University. He joined Brown in 1999 after completing a Ph.D. in Experimental Psychology at the University of Trieste, Italy, in 1997. His research team investigates how the human visual system processes 3D visual information to allow successful interactions with the environment. His approach combines computational methods and behavioral studies to identify the visual features that establish the mapping between vision and action. His research is funded by the National Science Foundation.

VSS Workshop for PhD Students and Postdocs:
How to Spend Your Time Well as a Young Researcher

Sunday, May 19, 2019, 12:45 – 1:45 pm, Jasmine/Palm
Moderator: Johan Wagemans, University of Leuven, Belgium
Panelists: Alex Holcombe, Niko Kriegeskorte, Allison Sekuler, and Kate Storrs

Graduate students and postdocs often wonder what they should spend their work time on, in addition to learning the skills of a good researcher, doing good research, and writing good papers.  For instance, quite a few people write blogs or are very active on public forums (e.g., about open science, open source software, or helpdesks for R, Python, etc.).  Others have questions about how much time to spend on service to the profession, such as reviewing manuscripts.  With all these choices, many developing researchers face the challenge of finding the right balance between diversifying their professional activities and devoting time to the core requirements of their careers.  This workshop will feature panelists who will provide perspectives on these issues and lead a discussion on the pros and cons of spending time on professional activities not directly related to research.  If you think you have no time for this, you should definitely be there!

Alex Holcombe

University of Sydney

When not teaching or working on vision experiments, Alex Holcombe works to improve transparency in and access to research. To address the emerging reproducibility crisis in psychology, he co-created PsychFileDrawer.org in 2011, introduced the Registered Replication Report at the journal Perspectives on Psychological Science in 2013, and has even appeared in a cartoon about replication.  He was involved in the creation of the journal badges that signal open practices, the preprint server PsyArXiv, the new journal Advances in Methods and Practices in Psychological Science, and PsyOA.org, which provides resources for flipping a subscription journal to open access. Talk to him anytime on Twitter @ceptional.

Niko Kriegeskorte

Columbia University

Nikolaus Kriegeskorte is a computational neuroscientist who studies how our brains enable us to see and understand the world around us. He received his PhD in Cognitive Neuroscience from Maastricht University, held postdoctoral positions at the Center for Magnetic Resonance Research at the University of Minnesota and the U.S. National Institute of Mental Health in Bethesda, and was a Programme Leader at the U.K. Medical Research Council Cognition and Brain Sciences Unit at the University of Cambridge. Kriegeskorte is a Professor at Columbia University, affiliated with the Departments of Psychology and Neuroscience. He is a Principal Investigator and Director of Cognitive Imaging at the Zuckerman Mind Brain Behavior Institute at Columbia University. Kriegeskorte is a co-founder of the conference “Cognitive Computational Neuroscience”, which had its inaugural meeting in September 2017 at Columbia University.

Allison Sekuler

McMaster University

Allison Sekuler is the Sandra Rotman Chair in Cognitive Neuroscience and Vice-President, Research at Baycrest Centre for Geriatric Care. She is also Managing Director of the Centre for Aging + Brain Health Innovation and of the world-renowned Rotman Research Institute. A graduate of Pomona College (BA, Mathematics and Psychology) and the University of California, Berkeley (PhD, Psychology), she holds faculty appointments at the University of Toronto and McMaster University, where she was the country’s first Canada Research Chair in Cognitive Neuroscience and established lasting collaborations with Japanese researchers. Dr. Sekuler has a notable record of scientific achievement in aging, vision science, neural plasticity, imaging, and neurotechnology. Her research focuses on perceptual organization and face perception, motion and depth perception, spatial and pattern vision, and age-related changes in vision. The recipient of numerous awards for research, teaching, and leadership, she has broad experience in senior academic, research, and innovation leadership roles, advancing internationalization, interdisciplinarity, skills development, entrepreneurship, and inclusivity.

Kate Storrs

Justus-Liebig University, Giessen

Kate Storrs is currently a Humboldt Postdoctoral Fellow using deep learning to study material perception at the Justus-Liebig University in Giessen, Germany. Before that she was a postdoc at the University of Cambridge, a Teaching Fellow at University College London, and a PhD student at the University of Queensland in Australia. Her main professional hobby is science communication. Kate has performed vision-science-themed stand-up comedy in London at the Royal Society, the Natural History Museum, the Bloomsbury Theatre, and a dozen pubs and festivals across the UK. She has presented vision science segments on Cambridge TV, the Naked Scientists podcast, BBC Cambridgeshire radio, and was a UK finalist in the 2016 FameLab international science communication competition. Always happy to talk on Twitter @katestorrs.

Johan Wagemans

University of Leuven, Belgium

Johan Wagemans is a professor in experimental psychology at the University of Leuven (KU Leuven) in Belgium. His current research interests are mainly in perceptual grouping, figure-ground organization, depth perception, shape perception, object perception, and scene perception, including applications in autism, arts, and sports (see www.gestaltrevision.be). He has published more than 300 peer-reviewed articles on these topics and has edited the Oxford Handbook of Perceptual Organization (2015). In addition to supervising many PhD students and postdocs, he performs a great deal of community service, including coordinating the Department of Brain & Cognition, serving as editor of Cognition, Perception, i-Perception, and Art & Perception, and organizing the European Conference on Visual Perception (ECVP) and the Visual Science of Art Conference (VSAC) in Leuven (August 2019).

2019 Davida Teller Award – Barbara Dosher

The Vision Sciences Society is honored to present Dr. Barbara Dosher with the 2019 Davida Teller Award

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding female vision scientist in recognition of her exceptional, lasting contributions to the field of vision science.

Barbara Dosher

Distinguished Professor, University of California, Irvine

Barbara Dosher is a researcher in the areas of visual attention and learning. She received her PhD in 1977 from the University of Oregon and served on the faculty at Columbia University (1977 – 1992) and the University of California, Irvine (1992 – present). Her early career investigated temporal properties of retrieval from long-term and working memory, and priming using pioneering speed-accuracy tradeoff methods. She then transitioned to work largely in vision, bringing some of the concepts of cue combination in memory to initiate work on combining cues in visual perception. This was followed by work to develop observer models using external noise methods that went on to be the basis for proposing that changing templates, stimulus amplification, and noise filtering were the primary functions of attention. This and similar work then constrained and motivated new generative network models of visual perceptual learning that have been used to understand the roles of feedback in unsupervised and supervised learning, the induction of bias in perception, and the central contributions of reweighting evidence to a decision in visual learning.

Barbara Dosher is an elected member of the Society for Experimental Psychologists and the National Academy of Sciences, and is a recipient of the Howard Crosby Warren Medal (2013) and the Atkinson Prize (2018).

Learning and Attention in Visual Perception

Dr. Dosher will speak during the Awards session
Monday, May 20, 2019, 12:30 – 1:45 pm, Talk Room 1-2.

Visual perception functions in the context of a dynamic system that is affected by experience and by top-down goals and strategies. Both learning and attention can improve perception that is limited by the noisiness of internal visual processes and noise in the environment. This brief talk will illustrate several examples of how learning and attention can improve how well we see by amplifying relevant stimuli while filtering others—and how important it is to model the coding or transformation of early features in the development of truly generative quantitative models of perceptual performance.


VSS@ARVO 2019

Vision After Sight Restoration

Monday, April 29, 1:15 – 2:45 pm at ARVO 2019, Vancouver, Canada
Organizers: Lynne Kiorpes, Ulrike Grunert and David Brainard
Speakers: Holly Bridge, Krystel Huxlin, Sharon Gilad-Gutnick, and Geoff Boynton

Visual deprivation during development can have a profound effect on adult visual function, with congenital or early acquired blindness representing one extreme regarding the degree of deprivation and adult sight loss representing another. As better treatments for blindness become available, a critical question concerns the nature of vision after the restoration of sight and the level of remaining visual system plasticity. This symposium will highlight recent progress in this area, as well as how vision therapy can best be deployed to optimize the quality of post-restoration vision. This is the biennial VSS@ARVO symposium, featuring speakers from the Vision Sciences Society.

2019 Satellite Events

Wednesday, May 15

Computational and Mathematical Models in Vision (MODVIS)

Wednesday, May 15 – Friday, May 17, Horizons
9:00 am – 6:00 pm, Wednesday
9:00 am – 6:00 pm, Thursday
8:30 – 11:45 am, Friday

Organizers: Jeff Mulligan, NASA Ames Research Center; Zygmunt Pizlo, UC Irvine; Anne B. Sereno, Purdue University; and Qasim Zaidi, SUNY College of Optometry

Keynote Selection Committee: Yalda Mohsenzadeh, MIT; Michael Rudd, University of Washington

The 8th VSS satellite workshop on Computational and Mathematical Models in Vision (MODVIS) will be held at the Tradewinds Island Resorts in St. Pete Beach, FL, May 15 – May 17.

A keynote address will be given by Dr. Yanxi Liu, Penn State University.

The early registration fee is $100 for regular participants, $50 for students. After March 31st, the registration fee will increase to $120 (regular) and $60 (student).

Friday, May 17

Improving the precision of timing-critical research with visual displays

Friday, May 17, 9:00 – 11:00 am, Jasmine/Palm

Organizers: Sophie Kenny, VPixx Technologies; Peter April, VPixx Technologies

VPixx Technologies is a privately held company serving the vision research community by developing innovative hardware and software tools for vision scientists (www.vpixx.com).

Visual display and computer technologies have improved on many fronts over the years; however, impressive technical specifications of devices mask the fact that timing of concurrent events is not typically controlled with a high degree of precision. This is a problem for scientists whose research relies on synchronization of external recording equipment relative to the onset of a visual stimulus. During this workshop, we will demonstrate the use of hardware solutions to improve upon these issues. We will first describe the principle behind these hardware solutions. We will then showcase how experiments can be programmed to control the triggering of external devices, to play audio signals, and to record digital, analog and audio signals, all synchronized with microsecond accuracy to screen refresh.
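As a rough illustration of why hardware synchronization matters, the sketch below (a generic Python example, not VPixx-specific — no VPixx hardware or API is assumed) measures how much a software-only wait for one nominal 60 Hz frame deviates from the requested duration. Jitter at this scale is exactly what dedicated hardware triggering is designed to eliminate.

```python
import time

# Request 200 software sleeps of one nominal 60 Hz frame (16.667 ms)
# and record how far each actual sleep deviates from the request.
frame_s = 1 / 60
errors_us = []
for _ in range(200):
    t0 = time.perf_counter()
    time.sleep(frame_s)            # OS sleeps for *at least* frame_s
    elapsed = time.perf_counter() - t0
    errors_us.append((elapsed - frame_s) * 1e6)  # deviation in microseconds

print(f"mean overshoot: {sum(errors_us) / len(errors_us):.0f} us, "
      f"max: {max(errors_us):.0f} us")
```

On a typical desktop OS the overshoot varies from run to run and can reach hundreds of microseconds or more — far from the microsecond-level accuracy the workshop describes.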

To help us plan this event, please send an email signalling your interest to:

Psychophysics Toolbox Forum

Friday, May 17, 11:00 – 11:45 am, Jasmine/Palm

Organizer: Vijay Iyer, MathWorks

Forum for researchers, vendors, and others who work with the Psychophysics Toolbox (PTB), which is widely used for visual stimulus generation in vision science. MathWorks is pleased to support the PTB’s ongoing development, which is now hosted at the Medical Innovations Incubator (MII) in Tuebingen. An industry-led consortium is emerging to support the PTB project. Join to learn more about the new arrangement and to provide your input on future directions for PTB.

Saturday, May 18

Large-scale datasets in visual neuroscience

Saturday, May 18, 8:30 – 10:30 pm, Jasmine/Palm

Organizers: Elissa Aminoff, Fordham University; John Pyles, Carnegie Mellon University

Speakers: Elissa Aminoff, Fordham University; Kendrick Kay, University of Minnesota; John Pyles, Carnegie Mellon University; Michael Tarr, Carnegie Mellon University

The future of vision science lends itself more and more to using large real-world image datasets (n > 1,000) to study and understand the neural and functional mechanisms underlying vision. As the size of such datasets (and the resulting data) increases, there are commensurate challenges to effectively and successfully collect, distribute, and analyze large-scale data. If you are interested in discussing these challenges, please join us.

The format of this event will be brief presentations by researchers who have recently collected or analyzed large fMRI datasets, followed by an open discussion.

Sunday, May 19

FoVea (Females of Vision et al) Workshop

Sunday, May 19, 7:30 – 9:00 pm, Horizons

Organizers: Diane Beck, University of Illinois, Urbana-Champaign; Mary A. Peterson, University of Arizona; Karen Schloss, University of Wisconsin – Madison; Allison Sekuler, Baycrest Health Sciences

Panel Discussion on Navigating a Life in Science as a Woman
Panel Discussants: Lynne Kiorpes (New York University), Ruth Rosenholtz (MIT), Preeti Verghese (Smith-Kettlewell Eye Research Institute), Emily Ward (University of Wisconsin – Madison)

The panel will begin by addressing issues they consider important/informative and then address questions.

FoVea is a group founded to advance the visibility, impact, and success of women in vision science (www.foveavision.org). We encourage vision scientists of all genders to participate in the workshops.

Please register at: http://www.foveavision.org/vss-workshops 

Monday, May 20

Aesthetics Social

Monday, May 20, 2:00 – 3:30 pm, Sabal/Sawgrass

Organizers: Edward Vessel, Max Planck Institute for Empirical Aesthetics; Karen Schloss, University of Wisconsin-Madison; Aenne Brielmann, New York University; Ilkay Isik, Max Planck Institute for Empirical Aesthetics; Dominik Welke, Max Planck Institute for Empirical Aesthetics

Our lives are full of aesthetic experiences. When we look at art, people surrounding us, or views out of the window, we cannot help but assess how much the sight pleases us. This social meeting brings together researchers interested in understanding such aesthetic responses. We will highlight aesthetics research being presented at VSS in a “Data Blitz” session, followed by an open discussion and time to socialize. Light refreshments will be offered.

Data Blitz presentations are open to anyone presenting aesthetics-related work at VSS. Selection for presentation will be made by the organizing committee based on scientific rigor, potential impact and interest, academic position (preference given to students/early stage researchers), and whether your work was selected for a talk or poster at VSS (priority given to posters).

If you are interested in presenting your findings at the Data Blitz session please send an email to  (ATTN: Aesthetics Social Data Blitz) by April 5, 2019 with the following information:

  • Presenter name, affiliation, and academic status (student/postdoc/PI/etc.)
  • Presenter contact information (email, phone)
  • Presentation title and abstract
  • Date/time and type of VSS presentation (poster/talk)

This event is sponsored by the International Association of Empirical Aesthetics (IAEA; https://www.science-of-aesthetics.org) and the Max Planck Institute for Empirical Aesthetics (MPIEA; https://www.aesthetics.mpg.de/en.html).

A hands-on crash course in reproducible mixed-effects modeling

Monday, May 20, 2:00 – 4:00 pm, Glades

Organizer: Dejan Draschkow, Department of Psychology, Goethe University Frankfurt; Department of Psychiatry, University of Oxford

Mixed-effects models are a powerful alternative to traditional F1/F2-mixed-model/repeated-measures ANOVAs and multiple regressions. Mixed models allow simultaneous estimation of between-subject and between-stimulus variance, deal well with missing data, and allow easy inclusion of covariates and modeling of higher-order polynomials. This workshop provides a focused, hands-on, state-of-the-art treatment of applying this analysis technique in an open and reproducible way. We will provide a fully documented R pipeline and solutions for power analysis, and will discuss common pitfalls and unresolved issues. The workshop is suitable for:

  • “concept attendance” – you want to be able to evaluate potential issues when reviewing a paper;
  • “implementation attendance” – strong theoretical background, low practical experience;
  • “switch attendance” – you are coming from another language or software and want to switch to R;
  • “transition attendance” – you are experienced in traditional analysis procedures and want to see what this is all about; and
  • “refreshing attendance” – you just want to check whether there are any new developments.

It might not be suitable for participants with zero experience in statistics and programming, and it may be too boring for participants who already perform simulation-based power analysis for mixed models or use a PCA to diagnose overfitting problems. This event is funded by a WikiMedia Open Science grant dedicated to https://smobsc.readthedocs.io/en/latest/.
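As a minimal illustration of the technique (the workshop itself uses R; this sketch instead uses Python’s statsmodels, on simulated data, purely to show the idea), a random-intercept model estimates a shared slope while letting each subject have their own baseline:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 20 subjects x 30 trials with a true slope of 1.5,
# a true intercept of 2.0, and per-subject intercept variability.
rng = np.random.default_rng(1)
n_subj, n_trials = 20, 30
subj = np.repeat(np.arange(n_subj), n_trials)
subj_intercept = rng.normal(0, 0.5, n_subj)[subj]
x = rng.normal(size=n_subj * n_trials)
y = 2.0 + 1.5 * x + subj_intercept + rng.normal(0, 1.0, n_subj * n_trials)
data = pd.DataFrame({"y": y, "x": x, "subject": subj})

# Random-intercept model: y ~ x, with a per-subject intercept
model = smf.mixedlm("y ~ x", data, groups=data["subject"])
result = model.fit()
print(result.params["x"])  # fixed-effect slope estimate, near 1.5
```

The fitted fixed effects recover the simulated slope and intercept, while the group variance term absorbs the between-subject baseline differences that a plain regression would fold into the residual.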

No registration required. First come, first served, until full. For questions or more information, please visit my website at https://www.draschkow.com/.

WorldViz VR/AR Workshop: Virtual Reality Displays Break New Ground for Research Purposes

Monday, May 20, 2:00 – 4:00 pm, Jasmine/Palm

Organizers: Matthias Pusch, WorldViz; Lucero Rabaudi, WorldViz

Beyond the wave of consumer virtual reality displays is a new lineup of professional products that are capable of generating a new class of visual stimulus that can be used by scientists. We will show two examples of what we consider most exciting for the VSS community. The first is a multi-resolution HMD that is capable of nearly 60 cycles per degree over a large central field of the display, which then feathers to more typical HMD resolution toward the periphery. The second is a low-latency, high-resolution video see-through technology that converts a consumer-class HMD into a sophisticated augmented reality system that can be used to combine real near-field objects (e.g., one’s hands or tools) with computer graphics imagery.

In this Satellite session, we will present these technologies in action with examples of how researchers can use them in practice. There will be a technical portion of the session detailing the technologies’ benefits and limitations, as well as a hands-on portion for attendees to try the technologies live.

VISxVISION Workshop: Novel Vision Science Research Directions in Visualization

Monday, May 20, 2:00 – 4:00 pm, Royal Tern

Organizers: Cindy Xiong, Northwestern University; Zoya Bylinskii, Adobe Research; Madison Elliott, University of British Columbia; Christie Nothelfer, Nielsen; Danielle Szafir, University of Colorado Boulder

Interdisciplinary work across vision science and data visualization has provided a new lens to advance our understanding of the capabilities and mechanisms of the visual system while simultaneously improving the ways we visualize data. Vision scientists can gain important insights about human perception by studying how people interact with visualized data. Vision science topics, including visual search, ensemble coding, multiple object tracking, color and shape perception, pattern recognition, and saliency, map directly to challenges encountered in visualization research.

VISxVISION (www.visxvision.com) is an initiative to encourage communication and collaboration between researchers from the vision science and data visualization research communities. Building on the growing interest in this topic and the discussions inspired by last year’s symposium, “Vision and Visualization: Inspiring novel research directions in vision science,” this workshop aims to provide a platform to bring together vision science and visualization researchers to share cutting-edge research at this interdisciplinary intersection. We also encourage researchers to share vision science projects that have the potential to be applied to topics in data visualization.

This year’s workshop will consist of a series of lightning talks, followed by a Q&A session with the presenters. Attendees will then learn about conference and publication opportunities in this field: Brian Fisher will review the IEEE Vis conference and benefits of collaborating within data visualization, and Editors from the Journal of Vision’s upcoming special visualization edition will discuss publishing in this area. The workshop will conclude with a “meet & mingle” session with refreshments, intended to encourage more informal discussion among participants and to inspire interdisciplinary collaboration.

This event is being sponsored by Adobe Inc., the Visual Thinking Lab at Northwestern, and Colorado Boulder’s VisuaLab.

A call for abstracts on https://visxvision.com will solicit recent, relevant research at the intersection of vision science and visualization, or vision science project proposals that have the potential to be applied to topics in data visualization (deadline: April 8).  The top submissions will be selected for presentation as lightning talks at the workshop (notification: April 15). Submit your abstract here: http://bit.ly/2019abstract

Please register for the event at: http://bit.ly/2019visxvision.

Tuesday, May 21

Canadian Vision Social

Tuesday, May 21, 12:30 – 2:30 pm, Jasmine/Palm

Organizer: Doug Crawford, York Centre for Vision Research

This lunch Social is open to any VSS member who is, knows, or would like to meet a Canadian Vision Scientist! This event will feature free food and refreshments, with a complimentary beverage for the first 100 attendees. We particularly encourage trainees and scientists who would like to learn about the various opportunities available through York’s Vision: Science to Applications (VISTA) program. This event is sponsored by the York Centre for Vision Research and VISTA, which is funded in part by the Canada First Research Excellence Fund (CFREF).

Visibility: A Gathering of LGBTQ+ Vision Scientists and friends

Tuesday, May 21, 8:30 – 10:00 pm (precedes Club Vision), Jasmine/Palm

Organizers: Alex White, University of Washington; Michael Grubb, Trinity College

LGBTQ students are disproportionately likely to drop out of science early. Potential causes include the lack of visible role models and the absence of a strong community. This social event is one small step towards filling that gap. All are welcome. Snacks, drinks, and camaraderie will be provided. Sponsored by Trinity College.

Wednesday, May 22

MacGyver-ing in vision science: interfacing systems that are not supposed to work together

Wednesday, May 22, 1:00 – 3:00 pm, Chart
Organizer: Zoltan Derzsi, New York University Abu Dhabi

In research, it is sometimes necessary to push equipment beyond its design limits or to use it for something it was not designed to do. Desperation leads to creativity, and temporary workarounds end up being permanent. Usually this is the point when a design bottleneck is introduced into the experiment, which will bite back a couple of months later when nobody anticipates it, effectively ruining all the data collected (my own experience!).

This workshop will show some good practices on how to interface various systems, and how to use ordinary electronics in a vision science experiment.

You will get a free IoT (Internet of Things) kit containing a development board, some sensors, a display and light sources.

Please let me know if you plan to attend, by emailing zd8[at]nyu[dot]edu no later than the 10th of April!

The kit will contain a nodeMCU device; please make sure you pick it up during the first days of the conference. I will not be able to cover from scratch how to program the board or upload firmware to it; this will be included in the documentation, and there is plenty of support online. I’d like to spend the time showing how to turn these bits into the cheapest calibrated D65 light source, how to automate data collection over the local network, how to build your own instruments, and how to control various systems simultaneously while delivering stimuli with microsecond precision.
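As one sketch of the “automate data collection over the local network” idea: assuming the nodeMCU serves sensor readings as JSON over HTTP (the address, endpoint, and field names below are all hypothetical, for illustration only), a minimal Python client could poll and parse it like this:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint: a nodeMCU on the local network serving sensor
# readings as JSON, e.g. {"lux": 412.5, "temp_c": 23.1}.
SENSOR_URL = "http://192.168.1.50/reading"  # assumed device address

def parse_reading(payload: str) -> dict:
    """Decode one JSON reading, keeping only the numeric fields."""
    raw = json.loads(payload)
    return {k: float(v) for k, v in raw.items() if isinstance(v, (int, float))}

def poll_sensor(url: str = SENSOR_URL) -> dict:
    """Fetch and parse a single reading from the device."""
    with urlopen(url, timeout=2) as resp:
        return parse_reading(resp.read().decode("utf-8"))

# Offline demonstration with a canned payload (no device needed):
sample = '{"lux": 412.5, "temp_c": 23.1, "id": "nodemcu-1"}'
print(parse_reading(sample))  # → {'lux': 412.5, 'temp_c': 23.1}
```

From there, a loop calling `poll_sensor` on a timer and appending rows to a CSV file is all it takes to log readings unattended for the duration of an experiment.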

You will be able to adapt the workshop material for your own environment, and develop it further.

Vision Sciences Society