12th Annual Dinner and Demo Night

Monday, May 19, 2014, 6:00 – 10:00 pm

Beach BBQ: 6:00 – 8:00 pm, Beachside Sun Decks,
Demos: 7:00 – 10:00 pm, Talk Room 1-2, Royal Tern, Snowy Egret, Compass, & Spotted Curlew

Please join us Monday evening for the 12th Annual VSS Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education. This year’s Demo Night will be organized and curated by Gideon Caplovitz, University of Nevada, Reno; Arthur Shapiro, American University; Dejan Todorovic, University of Belgrade; and Karen Schloss, Brown University.

A Beach BBQ is served on the Beachside Sun Decks. Demos are located in Talk Room 1-2, Royal Tern, Snowy Egret, Compass, & Spotted Curlew.

Demo Night is free for all registered VSS attendees. Meal tickets are not required, but you must wear your VSS badge for entry to the Beach BBQ. Guests and family members of all ages are welcome to attend the demos but must purchase a ticket for dinner. You can register your guests at any time during the meeting at the VSS Registration Desk, located on the Grand Palm Colonnade. A desk will also be set up on the Seabreeze Terrace at 6:30 pm.

Guest prices: Adults: $25, Youth (6-12 years old): $10, Children under 6: free

Biological Motion

Peter Thompson, Rob Stone, University of York
A real-time demonstration of point-light biological motion. Walk, jump, or dance in front of the sensor and see your own point-light display. Using an Xbox Kinect sensor (approx. $50) and our free software, you can produce this effect for yourself.
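
As a hedged illustration of how such a display can be driven, the sketch below (Python, not the authors’ free software) stubs the Kinect skeleton stream with a synthetic joint generator and renders each joint as a single white dot; every name and parameter in it is an assumption made for the sketch.

    # Minimal point-light renderer. get_joints() is a hypothetical stand-in
    # for a Kinect SDK skeleton call; real joint coordinates would replace it.
    import numpy as np
    import matplotlib.pyplot as plt

    N_JOINTS = 15  # head, shoulders, elbows, wrists, hips, knees, ankles...

    def get_joints(t):
        """Synthetic (x, y) position for each joint at time t (seconds)."""
        rng = np.random.default_rng(0)              # fixed pose, for illustration
        base = rng.uniform(-1.0, 1.0, size=(N_JOINTS, 2))
        sway = 0.1 * np.sin(2 * np.pi * t)          # crude 1 Hz horizontal sway
        return base + np.array([sway, 0.0])

    fig, ax = plt.subplots(facecolor="black")
    ax.set_facecolor("black")
    ax.set_xlim(-1.5, 1.5); ax.set_ylim(-1.5, 1.5)
    dots = ax.scatter([], [], s=40, c="white")
    for t in np.arange(0, 5, 1 / 30):               # ~30 fps for 5 seconds
        dots.set_offsets(get_joints(t))
        plt.pause(1 / 30)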

Audiovisual Hallucinations

Parag Mital, Dartmouth College
Audiovisual scene synthesis attempts to simultaneously learn and match existing representations of proto-objects in the ongoing auditory and visual scene. The synthesized scene is presented through virtual reality goggles and headphones.

Phenomenology of Flicker-Defined Motion

Jeff Mulligan, NASA Ames Research Center; Scott Stevenson, University of Houston College of Optometry
Flicker-defined motion produces a number of surprises: a target that disappears when pursued; a target that appears to move in jumps when moved continuously; a persistent “trail” that disappears when the target is pursued. These effects and more will be presented.

Thatcherise Your Face

Peter Thompson, Rob Stone, Tim Andrews, University of York
The Thatcher illusion is one of the best-loved perceptual phenomena. Here you will have the opportunity to see yourself ‘thatcherised’ in real time. And you can have a still version of your thatcherised face as a souvenir.

The Ever-Popular Beuchet Chair

Peter Thompson, Rob Stone, Tim Andrews, University of York
The Beuchet chair baffles because the two separate parts of the chair are seen as belonging together. Although at different distances, the two parts have the appropriate sizes to create the retinal image of a single chair at some intermediate distance. The two figures are then perceived as being at the same distance, and therefore size constancy does not operate: the far figure must be tiny to fit on the big seat of the chair, and the near figure must be huge.

The Wandering Circles

Christopher D. Blair, Lars Strother and Gideon P. Caplovitz, University of Nevada, Reno
Physically stationary flickering shapes appear to drift randomly when viewed peripherally.

Dynamic Ebbinghaus

Ryan E.B. Mruczek, Christopher D. Blair, Gideon P. Caplovitz, University of Nevada, Reno
Come see the Ebbinghaus Illusion as you’ve never seen it before! Watch the central circle grow and shrink before your eyes as we add a dynamic twist to this classic illusion.

To Deform or Not to Deform: Illusory Deformations of a Static Object Triggered by the Light Projection of Motion Signals

Takahiro Kawabe, Masataka Sawayama, Kazushi Maruya, Shin’ya Nishida, NTT Communication Sciences Laboratories, Japan
We will demonstrate that projecting image motion through a video projector can deform the apparent shape of static objects printed on the paper.

Strobowheel

Anna Kosovicheva, Benjamin Wolfe, Wesley Chaney, Allison Yamanashi Leib, Alina Liberman, University of California, Berkeley
We present a modified phenakistoscope in which we use a strobe light to create animated images on a spinning disc. Viewers can adjust the frequency of a strobe light to change the animation, or make the image on the disc appear to spin backwards or stand still.
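
The freeze and reversal effects follow from temporal aliasing: each flash samples the disc after it has advanced some number of animation frames, and only the fractional part of that advance is perceived as motion. A small sketch with made-up numbers (not parameters from the demo):

    # Apparent motion of a strobed phenakistoscope disc (temporal aliasing).
    # With n identical frames around the disc, each flash advances the image
    # by n * f_disc / f_strobe frames; only the wrapped fractional part is seen.
    def apparent_frames_per_flash(n_frames, f_disc_hz, f_strobe_hz):
        step = n_frames * f_disc_hz / f_strobe_hz   # frames advanced per flash
        return (step + 0.5) % 1.0 - 0.5             # alias into [-0.5, 0.5)

    # Illustrative numbers: 12 frames on the disc, spinning at 2 rev/s.
    print(apparent_frames_per_flash(12, 2.0, 24.0))  # 0.0   -> image stands still
    print(apparent_frames_per_flash(12, 2.0, 25.0))  # -0.04 -> slow backward drift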

Polygonization Effect

Kenzo Sakurai, Tohoku Gakuin University
Prolonged viewing of a circular shape in peripheral vision produces polygonal shape perception of the circle itself. This shape distortion illusion can be induced in a short period by alternately presenting a circle and its inward gradation pattern.

The Saccadic Smear

Mark Wexler, Marianne Duyck, Thérèse Collins, CNRS & Université Paris Descartes
When a stimulus appears only during a saccade, you see it smeared. If it also appears before the saccade or stays on afterwards, the smear is masked. We demonstrate this retro 1970s-style phenomenon using a portable eye tracker and several LEDs. Wait a minute, where did that smear go?

Bistable Double Face Illusion

Sarah Cormiea, Anna Shafer-Skelton, Harvard University
Come visit our demo and take home an illusion made with your own face. We’ll take two photos and combine them to create a bistable illusion of a forward-looking face that incongruously still has a profile.

Expansion/Contraction Blindness

Kohske Takahashi, Katsumi Watanabe, The University of Tokyo
We present a novel, striking visual illusion: when an object filled with black and white color rotates and zooms on a gray background, you will never see the expansion and contraction.

Rotating Columns

Vicky Froyen, Daglar Tanrikulu, Rutgers University
Adding textural motion to classic figure-ground displays reveals complex interactions between accretion-deletion and geometric figure-ground cues. We demonstrate cases where static geometry overrides standard depth from accretion-deletion. Thus moving regions are perceived as figural and rotating in 3D, despite the textural motion being linear and thus inconsistent with 3D rotation.

Infinite Regress Etch-a-Sketch

Nika Adamian, Patrick Cavanagh, Matteo Lisi, Laboratoire Psychologie de la Perception, Université Paris V Descartes; Peter U. Tse, Laboratoire Psychologie de la Perception, Université Paris V Descartes, Department of Psychological and Brain Sciences, Dartmouth College
A new infinite regress illusion (Tse & Hsieh, 2006) synchronizes changes in the path of a Gabor with changes in its internal motion. This produces large, stable differences between perceived and physical location. Illusory shapes or orientations can be created to show dramatic dissociations between action and perception.
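
For readers who want to experiment, here is a rough frame generator for a double-drift stimulus of this kind, written with assumed parameter values (not the authors’ code): the Gaussian envelope translates along one axis while the carrier grating drifts orthogonally within it.

    # One frame of a Gabor whose envelope and carrier move independently.
    import numpy as np

    def gabor_frame(t, size=256, sigma=20.0, sf=0.08,
                    env_speed=40.0, carrier_speed=3.0):
        y, x = np.mgrid[0:size, 0:size].astype(float)
        cx = size / 2 + env_speed * t          # envelope drifts horizontally
        cy = size / 2
        envelope = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        phase = 2 * np.pi * carrier_speed * t  # carrier drifts vertically
        carrier = np.sin(2 * np.pi * sf * (y - cy) + phase)
        return envelope * carrier              # luminance modulation in [-1, 1]

    frames = [gabor_frame(t) for t in np.arange(0, 1, 1 / 60)]  # 1 s at 60 Hz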

News from the Freiburg Vision Test

Michael Bach, University Eye Center, Freiburg Germany
“FrACT”, with a history of over 20 years, has been validated in a number of studies and is widely employed; in 2013 it was cited in 40 papers that used FrACT. Its ongoing active development is often driven by user requests. I will demonstrate new features.

Chromatic Interocular Switch Rivalry

Jens Hofman Christiansen, University of Copenhagen; Steven Shevell, University of Chicago; Anthony D’Antona, University of Texas at Austin
Using a haploscope, a differently colored circle is presented to each eye in the same part of the visual field (binocular color rivalry). When the rivalrous colors are exchanged between the eyes at 3 Hz, the percept is not flickering colors but instead slow alternation between the two colors.

Eye Movements and Troxler Fading

Romain Bachy, Qasim Zaidi, Graduate Center for Vision Research, SUNY Optometry
Observers will be able to use a time-varying procedure to see that fixational eye-movements control the magnitude and speed of adaptation for foveal and peripheral vision. The stimuli will isolate single classes of retinal ganglion cells and demonstrate the effects of flicker and blur on adaptation of each class.

The Magical Misdirection of Attention in Time

Anthony S. Barnhart, Northern Arizona University
When we think of “misdirection,” we typically think of a magician drawing attention away from a spatial location. However, magicians also misdirect attention in time through the creation of “off-beats,” moments of suppressed attention. The “striking vanish” illusion, where a coin disappears when tapped with a pen, exploits this phenomenon.

Applying Temporal Masking For Bandwidth Reduction in HD Video Streaming

Velibor Adzic, Hari Kalva, Florida Atlantic University
We demonstrate some aspects of temporal masking in natural video sequences, specifically the application of backward temporal masking and motion masking in visually lossless video compression.

Water Flowing Upward

Wenxun Li, Leonard Matin, Columbia University; Ethel Matin, Long Island University – Post
See Water Flowing Uphill!

Lower in Contrast, Higher in Numerosity

Quan Lei, Adam Reeves, Northeastern University
There appear to be many more light gray than white disks, and many more dark gray than black disks, when equal numbers of the disks are intermingled on a medium gray background. Intermingling is critical: disks separated into two regions match in perceived numerosity.

The Shape-Shifting Cylinder

Lore Thaler, Durham University, UK
We present a novel demonstration of the effects of optical texture and binocular disparity on shape perception. You will see a real, physical cylinder. As you alternate your view from monocular to binocular the shape of the cylinder shifts, i.e. the tip of the cylinder appears to move from left to right (or vice versa).

Virtual Reality Immersion with the Full HD Oculus Rift Head Mounted Displays

Michael Schaletzki, Matthias Pusch, Charlette Li, WorldViz
Get fully immersed with a research quality, consumer component based Virtual Reality system. Powered by the WorldViz Vizard VR software, the system comes with the Oculus Rift HD, motion tracking, rapid application development tools, application starter kit, support & training. Walk through high-fidelity virtual environments in full scale and fully control visual input.

What Happens to a Shiny 3D Object in a Rotating Environment?

Steven A. Cholewiak, University of Giessen, Germany; Gizem Kucukoglu, New York University
A mirrored object reflects a distorted world. The distortions depend on the object’s surface and act as points of correspondence when it moves. We demonstrate how the perceived speed of a rotating mirrored object is affected by rotation of the environment and present an interesting case of perceived non-rigid deformation.

Alternating Apparent Motion in Random Dot Displays

Nicolas Davidenko, Jacob Smith, Yeram Cheong, University of California, Santa Cruz
A succession of random dot displays gives rise to a percept of coherent, global, apparent motion. The perceived apparent motion is typically alternating (flipping direction on each frame) and vertical, although the direction can be easily manipulated by suggestion.

An Ames-room-like Box with a Ball Inside

Ryuichi Yokota, Masahiro Ishii, Shoko Yasuoka, Sapporo University
This is a miniature, overturned Ames room with a physically slanted base. The top face has a hole for peeping inside. The box is designed to have an apparently horizontal base and contains a ball. One experiences an unnatural sensation when rolling the ball across the base.

VPixx Response-Time Survivor

Peter April, Jean-Francois Hamelin, Stephanie-Ann Seguin, VPixx Technologies
VPixx will be demonstrating our PROPixx DLP projector refreshing at 1440 Hz. The demo is a fun game in which we measure your reaction time to cross-modal audiovisual stimuli. Do it fast, and win a prize! This year’s demo has a surprise twist which you will definitely want to see.

Moving Barber-Pole Illusion

George Sperling, Peng Sun, Charles Chubb, University of California, Irvine
When an entire vertically oriented barber pole itself moves laterally, and it is viewed peripherally, the perceived motion direction is vertically upward, even though the physical Fourier, end-stop, and feature motion directions, and the foveally perceived motion direction are all diagonal.

SWYE! Surfing With Your Eyes: The Beachiest Illusion Out There!

Alejandro Lleras, Simona Buetti, University of Illinois
This “You-Should-Really-Try-Doing-It-On-The-Beach-Sometime-You-Know?” visual illusion is OK when seen on video… a run-of-the-mill bistable stimulus. But when experienced at the beach, it becomes a multimodal illusion in which (while stationary) you feel as if you were gliding at several feet per second over the water. Your trips to the beach will never be the same!

The New Synopter

M.W.A. Wijntjes, S.C. Pont, Perceptual Intelligence Lab, Delft University of Technology
With two mirrors it is possible to optically superimpose the viewpoints of the two eyes, resulting in disparities like those of infinitely distant points. Although invented about 100 years ago, the synopter yields a percept that is still difficult to explain: that of an illusory 3D picture.
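
Why co-located viewpoints mimic infinite viewing distance follows from the standard small-angle disparity approximation (our gloss, not part of the abstract). For two points at distances d_1 and d_2 viewed with interocular baseline b, the relative disparity is

    \delta \approx b \left( \frac{1}{d_1} - \frac{1}{d_2} \right)

The synopter drives the effective baseline b toward zero, so \delta vanishes for every pair of points, exactly the flat disparity field produced when d_1 and d_2 go to infinity.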

VSS@ARVO 2014

Cortical influences on eye movements, integrating work from human observers and non-human primates

Time/Room: Sunday, May 4, 2014, 1:30 – 3:00 pm
Organizers: Tony Norcia, Stanford University and Susana Chung, UC Berkeley
Speakers: Jeff Schall, Eileen Kowler, Bosco Tjan

The mechanisms responsible for guiding and controlling gaze shifts.

Speaker: Jeff Schall, Department of Psychology, Vanderbilt University

This presentation will survey the mechanisms responsible for guiding and controlling gaze shifts. Computational models provide a framework through which to understand how distinct populations of neurons select targets for gaze shifts, control the initiation of saccades and monitor the outcome of gaze behavior. Alternative computational models are evaluated based on fits to performance of macaque monkeys and humans guiding and controlling saccades during visual search and stopping tasks. The dynamics of model components are evaluated in relation to neurophysiological data collected from the frontal lobe and midbrain of macaque monkeys performing visual search and stopping tasks. The insights gained provide guidance on possible diagnosis and treatment of high level gaze disorders.

The role of prediction and expectations in the planning of smooth pursuit and saccadic eye movements.

Speaker: Eileen Kowler, Department of Psychology, Rutgers University

Eye movements – saccades or smooth pursuit – ensure that the line of sight remains near objects of interest, thus establishing the retinal conditions that support high quality vision. Effective control of eye movements relies on more than the analysis of sensory signals.  Eye movements must also be sensitive to high-level decisions about which regions of the environment deserve immediate attention and visual analysis.  One important high level signal that contributes to effective eye movements is the ability to generate predictions.  For example:  Anticipatory smooth pursuit eye movements in the direction of upcoming future target motion are elicited by symbolic cues that disclose the future path of moving targets, as well as (for self-moved targets) signals that represent our own motor plans.  These responses are automatic and require no learning or effort.  Anticipatory behavior is also seen in saccades, where subtle adjustments in fixation time are made on the basis of the expected difficulty of the visual discrimination.  By taking advantage of our ability to interpret the environment and monitor our own cognitive states, predictive eye movements serve a vital role in natural oculomotor behavior.  They reduce sensorimotor delays, reduce the load attached to processing sensory input, and allow a pattern of efficient decision-making that frees central resources for higher level aspects of the task.

Gaze Control without a Fovea

Speaker: Bosco Tjan

Form vision is an active process. With normal foveal vision, the oculomotor system continually brings targets of interest onto the fovea with saccadic eye-movements. The loss of foveal vision means that these foveating saccades will be counterproductive. Central field loss (CFL) patients often develop a preferred retinal locus (PRL) in their periphery for fixation (Crossland et al., 2005). This adjustment appears idiosyncratic and lengthy. Neither the time course of this adjustment nor the determining factors for the eventual location of a PRL are well understood. This is because it is nearly impossible to infer the conditions prior to the onset of CFL for any individual patient or to track a patient from CFL onset. To make progress, we studied PRL development in normally sighted individuals. We used a gaze-contingent display to simulate a visible circular central scotoma 5° or 6° in radius in two experiments. In one experiment, subjects were told to “look at” an object as it was randomly repositioned against a uniform background. This object was the target for a visual-search trial immediately following this observation period. In the other experiment, a different group of subjects used eye movements to control a highlighted ring, which marked the edge of the simulated scotoma, to make contact with a small target disc, which was randomly placed on the screen in each trial. In both experiments, a PRL emerged spontaneously within a few hours of experiment time (spread out over several days). Saccades were also re-referenced to the PRL, but at a slower rate. We found that the developed PRL was retained over weeks without additional practice. Furthermore, the PRL stayed at the same retinal location when tested with a different task or when using an invisible simulated scotoma. Losing the fovea replaces a unique locus on the retina with a set of equally probable peripheral loci. Rather than selecting the optimal retinal locus for every saccade, the oculomotor system opts for a minimal change in its control strategy by adopting a single retinal locus for all saccades. This leads to a speedy adjustment and refinement of the controller. The quality of the error signals (invisible natural scotoma vs. visible simulated scotoma) may explain why CFL patients appear to take much longer in developing a PRL than our normally sighted subjects.

2014 Public Lecture – Thomas V. Papathomas

Thomas V. Papathomas

Rutgers University

Thomas V. Papathomas, a Professor and Dean at Rutgers, the State University of New Jersey, studies how humans perceive objects, faces and scenes. He has authored over 100 scientific publications, has designed award-winning 3-D illusions and has exhibited in art/science shows and science museums.

Vision Research: Artists Doing Science – Scientists Doing Art

Saturday, May 17, 2014, 11:00 am – 12:30 pm, The Dali Museum, St. Petersburg, Florida

It has often been said that artists are years ahead of vision scientists in making progress toward understanding how the visual brain works. This talk will illustrate how artists have been able to use their intuitive grasp of visual perception fundamentals to open new horizons in research. At the same time, the talk will highlight how visual scientists have used their research-based knowledge of visual brain function to provide a deep understanding of the art experience and, occasionally, venture into making art.

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

2014 Student Workshops

VSS Workshop for PhD Students and Postdocs:
PNAS: How do I judge to which journal I should send my paper?

Sunday, May 18, 1:00 – 2:00 pm, Snowy Egret

Moderator: Frans Verstraten
Introduction: Sandra Aamodt
Discussants: Heinrich Bülthoff, Nancy Kanwisher, & Concetta Morrone

PNAS… Post Nature And Science. We all think we do excellent research, and great results deserve a great outlet. How many of us have wandered the whole route down from the top-ranked journals, only to end up in an average journal? Wouldn’t it be good if we could judge immediately which journal to go for? It would save the disappointment of not being sent out for review, the rejections, and the energy needed to rewrite the manuscript yet again. Moreover, what is wrong with an average journal for your output? We will discuss some of the ways to convince the editors of high-profile journals to at least send your manuscript out for review. We will hear some good and bad experiences and hope to conclude with some realistic advice…

Sandra Aamodt

Sandra is a coauthor of Welcome to Your Child’s Brain: How the Mind Grows from Conception to College and Welcome to Your Brain: Why You Lose Your Car Keys But Never Forget How to Drive and Other Puzzles of Everyday Life, which was named science book of the year in 2009 by the American Association for the Advancement of Science. A former editor in chief of Nature Neuroscience, she has read over 5000 neuroscience papers in her career. Before joining the journal, she received a Ph.D. in neuroscience from the University of Rochester and did postdoctoral research at Yale University.

Heinrich Bülthoff

Heinrich is director at the Max Planck Institute for Biological Cybernetics in Tübingen. He is head of the Department Human Perception, Cognition and Action in which a group of about 70 researchers investigate psychophysical and computational aspects of higher level visual processes in object and face recognition, sensory-motor integration, human robot interaction, spatial cognition, and perception and action in virtual environments. He is Honorary Professor at the Eberhard-Karls-Universität (Tübingen) and Korea University (Seoul). He is co-founder of the journal ACM Transactions on Applied Perception (ACM TAP) and on the editorial boards of several open access journals. He has not published in Nature Journals for more than ten years.

Nancy Kanwisher

Nancy is the Walter A. Rosenblith Professor of Cognitive Neuroscience in the Department of Brain and Cognitive Sciences at MIT. She is interested in the functional organization of the brain as a window into the architecture of the human mind. Her work and that of her students have been published in some of the best journals. She has, however, her own ideas about this… She is also a member of the National Academy of Sciences (USA).

Concetta Morrone

Concetta is Professor of Physiology at the University of Pisa. Over the years her research has spanned most active areas of vision research, including spatial vision, development, plasticity, attention, color, motion, robotics, vision during eye movements and, more recently, multisensory perception and action. Concetta has published some 160 papers in excellent international peer-reviewed journals, including Nature and its sister journals, Neuron, Current Biology, and several Trends journals. She has been an editor of many journals and was one of the founding editors of the Journal of Vision; currently she is founder and Chief Editor of the journal “Multisensory Research” (the continuation of “Spatial Vision”).

Frans Verstraten

Frans is the McCaughey Chair of Psychology at the University of Sydney. So far he has never made it into Nature or Science and, if Bayes was right, he probably never will. His task is to facilitate the discussion. He has served on several editorial boards and is currently one of the editors-in-chief of Perception and i-Perception.

VSS Workshop for PhD Students and Postdocs:
How to Transition from Postdoc to Professor?

Sunday, May 18, 1:00 – 2:00 pm, Royal Tern

Moderator: Frank Tong
Discussants: Julie Golomb, Sam Ling, Joo-Hyun Song, and Jeremy Wilmer

You’re really excited by all of the research you’re doing in the lab…. Ahh, the freedom to explore, discover, and focus just on doing good science. But at the back of your mind, you find yourself thinking, “When should I strike out on my own and apply for faculty positions, so I can start my own lab?”

So, when is the right time? What should your CV look like, so your application will attract the attention of the search committee? How will you craft your research statement to convey the importance of your work? Once you are invited to interview, how will you prepare for the big day, what should you expect in your individual meetings, what kinds of questions might people ask? Most important, how will you structure and stylize your job talk to excite everyone in the department about your research program?

We will hear the advice and learning experiences of assistant professors who recently made the transition from postdoc to faculty member. Much of this seminar will focus on how to put your best face forward when applying for faculty positions, from CV to negotiating the details of the position. We will have an open discussion of what qualities departments often look for in top candidates. We will also hear about the joys and challenges of starting a new lab, teaching courses for the first time, finding the right people for the lab family, and what life is like as a new faculty member.

Julie Golomb

Julie is an Assistant Professor in the Department of Psychology and Center for Cognitive and Brain Sciences at the Ohio State University. Her research focuses on how objects and their spatial locations are perceived and coded in the brain, and how these representations are influenced by eye movements, shifts of attention, and other top-down factors. Julie received her PhD from Yale University in 2009 and did a postdoc at MIT before starting her faculty position in 2012. She was recently selected as a 2014 Sloan Research Fellow in Neuroscience.

Sam Ling

Sam is an Assistant Professor of Psychology at Boston University. His research focuses on neural mechanisms of visual perception (e.g., orientation perception, contrast sensitivity, binocular rivalry) and the top-down effects of attention on visual processing. He received his PhD from New York University in 2007 and pursued postdoctoral research at Vanderbilt University before beginning his current faculty position in 2014.

Joo-Hyun Song

Joo-Hyun is an Assistant Professor in the Department of Cognitive, Linguistic & Psychological Sciences at Brown University. She investigates the mechanisms involved in integrating higher-order cognitive processes, such as attention, decision making and visually guided actions, through a combination of methodologies including behavioral investigations, online action tracking, fMRI, EEG, and neurophysiological experiments. She received her PhD from Harvard University (2006) and pursued postdoctoral research at the Smith-Kettlewell Eye Research Institute (2006-2010) before beginning her current faculty position in 2010.

Jeremy Wilmer

Jeremy is an Assistant Professor of Psychology at Wellesley College. He investigates clinical and non-clinical human variation in cognitive and perceptual abilities to gain insights into their genetic and environmental influences, functional organization, and practical correlates. His experiences include several years of running a lab at an undergraduate-only, single-sex liberal arts college. He received his PhD in 2006 and pursued postdoctoral research at the University of Pennsylvania and SUNY College of Optometry before beginning his current faculty position in 2009.

Frank Tong

Frank Tong is a Professor of Psychology at Vanderbilt University. He is interested in understanding the fundamental mechanisms underlying visual perception, attention, object processing, and visual working memory. He has received multiple awards for his research advances, in particular for his work on fMRI decoding of visual and mental states. He particularly enjoys working with students and postdocs as they carve their path towards scientific discovery and independence, and currently serves as a VSS board member.

2014 Funding Workshop

VSS Workshop on Grantsmanship and Funding Agencies

Saturday, May 17, 2014, 1:00 – 2:00 pm, Snowy Egret

Discussants: Todd Horowitz and Michael Steinmetz

You have a great research idea, but you need money to make it happen. You need to write a grant. But where can you apply to get money for vision research? What do you need to know before you write a grant? How does the granting process work? Writing grants to support your research is as critical to a scientific career as data analysis and scientific writing. In this session, Todd Horowitz (National Cancer Institute) and Mike Steinmetz (National Eye Institute) will give you insight into the inner workings of the extramural program at the National Institutes of Health. Additionally, we will present information on a range of government agencies outside the NIH who are interested in funding vision science research.

Todd Horowitz

Todd is Program Director in the Basic Biobehavioral and Psychological Sciences Branch at the National Cancer Institute (NCI). He came to this position after spending 12 years as Principal Investigator at Brigham & Women’s Hospital and Harvard Medical School in Boston, where he studied visual search and multiple object tracking. At NCI, he is responsible for promoting basic research in attention, perception, and cognition, as well as serving on the trans-NIH Sleep Research coordinating committee.

Michael Steinmetz

Michael is the Director of the Strabismus, Amblyopia, and Visual Processing Program at the National Eye Institute (NEI). Dr. Steinmetz was a faculty member in the Department of Neuroscience and the Zanvyl Krieger Mind-Brain Institute at Johns Hopkins University for twenty years. His research program studied the neurophysiological mechanisms of selective attention and spatial perception by combining behavioral studies with single-unit electrophysiology in awake monkeys and fMRI experiments in humans. Dr. Steinmetz has extensive experience at NIH, both as a Scientific Review Administrator and as a program officer. He also represents the NEI on many inter-agency and trans-NIH committees, including the NIH Blueprint; the NIH/NSF Collaborative Research in Computational Neuroscience (CRCNS) program; the BRAIN project; and  the DOD vision research group. Dr. Steinmetz also serves as the NEI spokesperson for numerous topics in visual neuroscience.


2014 Davida Teller Award – Mary C. Potter

VSS established the Davida Teller Award in 2013. Davida was an exceptional scientist, mentor and colleague, who for many years led the field of visual development. The award is therefore given to an outstanding woman vision scientist with a strong history of mentoring.

Vision Sciences Society is honored to present Dr. Mary Potter with the 2014 Davida Teller Award.

Mary C. Potter

Department of Brain and Cognitive Sciences, MIT

Dr. Mary Potter, better known as Molly Potter, a professor of Psychology at the Massachusetts Institute of Technology, is the winner of the Davida Teller Award 2014. Potter is known for her fierce intellect, her deeply original experiments, and her fundamental discoveries about human cognition.

A few highlights: Already in 1975, Potter discovered that subjects can report conceptual information about a pictured object faster than they can name it, showing that it is not necessary to access the verbal label to understand the meaning of an object. Later she discovered that complex visual scenes can be perceived and understood much faster than anyone had previously recognized. She showed that subjects can identify the gist of a scene from an astonishingly brief presentation. Here Potter made innovative use of rapid serial visual presentation (RSVP).

Potter has a long list of scientists that consider her as their mentor, many of them leading scientists themselves now. For example, with Judith Kroll, Molly showed that people can easily read at 12 words per second, but their later memory will be poor. In Molly’s lab, Helene Intraub discovered repetition blindness, and Nancy Kanwisher and Daphne Bavelier developed methods to study it. Marvin Chun, and later Mark Nieuwenstein and Brad Wyble, investigated and modeled the attentional blink.

Detecting picture meaning in extreme conditions

Monday, May 19, 12:30 pm, Talk Room 2

What is the shortest presentation duration at which a named scene or object can be recognized above chance, when the scene is presented among other pictures in a short RSVP sequence? In a recent study (Potter, Wyble, Hagmann, & McCourt, 2014) presentation durations were blocked and dropped slowly from 80 ms to 53, 27, and 13 ms. Although d’ declined as duration shortened, it remained above chance even at 13 ms, whether the name was given just before or just after the sequence, and whether there were 6 or 12 pictures per sequence. A forced choice between two pictures at the end of each sequence was reliably above chance only if the participant had correctly said yes. New replications varied the method but gave similar results: 1) using grayscale sequences; 2) randomizing all the nontarget pictures across all trials, for each subject; 3) randomizing durations instead of blocking them; and 4) using a different set of pictures with superordinate or basic object names for targets. Whether these results indicate feedforward processing (as we suggest) or are accounted for in some other way, they represent a challenge to models of visual attention and perception.
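
For reference, the d’ reported here is the standard signal-detection sensitivity index, the difference between the z-transformed hit and false-alarm rates. A minimal computation with made-up rates (not Potter’s data):

    # d' = z(hit rate) - z(false-alarm rate); standard yes/no sensitivity index.
    from scipy.stats import norm

    def d_prime(hit_rate, false_alarm_rate):
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    print(d_prime(0.60, 0.45))  # ~0.38: weak but above-chance detection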

Understanding representation in visual cortex: why are there so many approaches and which is best?

Organizers: Thomas Naselaris & Kendrick Kay; Department of Neurosciences, Medical University of South Carolina & Department of Psychology, Washington University in St. Louis
Presenters: Thomas Naselaris, Marcel van Gerven, Kendrick Kay, Jeremy Freeman, Nikolaus Kriegeskorte, James J. DiCarlo, MD, PhD


Symposium Description

Central to visual neuroscience is the problem of representation: what features of the visual world drive activity in different areas of the visual system? Receptive fields and tuning functions have long served as the basic descriptive elements used to characterize visual representations. In recent years, the receptive field and the tuning function have been generalized and in some cases replaced with alternative methods for characterizing visual representation. These include decoding and multivariate pattern analysis, representational similarity analysis, the use of abstract semantic spaces, and models of stimulus statistics. Given the diversity of approaches, it is important to consider whether these approaches are simply pragmatic, driven by the nature of the data being collected, or whether these approaches might represent fundamentally new ways of characterizing visual representations. In this symposium, invitees will present recent discoveries in visual representation, explaining the generality of their approach and how it might be applicable to future studies. Invitees are encouraged to discuss the theoretical underpinnings of their approach and its criterion for “success”. Invitees are also encouraged to provide practical pointers, e.g. regarding stimulus selection, experimental design, and data analysis. Through this forum we hope to move towards an integrative approach that can be shared across experimental paradigms.

Audience: This symposium will appeal to researchers interested in computational approaches to understanding the visual system. The symposium is expected to draw interest from a broad range of experimental backgrounds (e.g. fMRI, EEG, ECoG, electrophysiology).

Invitees: The invitees will consist of investigators who have conducted pioneering work in computational approaches to studying visual representation.

Presentations

Visual representation in the absence of retinal input

Speaker: Thomas Naselaris; Department of Neurosciences, Medical University of South Carolina, Charleston, SC

An important discovery of the last two decades is that receptive fields in early visual cortex provide an efficient basis for generating images that have the statistical structure of natural scenes. This discovery has lent impetus to the theory that receptive fields in early visual cortex can function not only as passive filters of retinal input, but as mechanisms for generating accurate representations of the visual environment that are independent of retinal input. A number of theoretical studies argued that such internal visual representations could play an important functional role in vision by supporting probabilistic inference. In this talk, we will explore the idea of receptive fields as generators of internal representations by examining the role that receptive fields play in generating mental images. Mental images are the canonical form of internal visual representation: they are independent of retinal input and appear to be essential for many forms of inference. We present evidence from fMRI studies that voxel-wise receptive field models of the tuning to retinotopic location, orientation, and spatial frequency can account for much of the BOLD response in early visual cortex to imagining previously memorized works of art. We will discuss the implications of this finding for the structure of functional feedback projections to early visual cortex, and for the development of brain-machine interfaces that are driven by mental imagery.

Learning and comparison of visual feature representations

Speaker: Marcel van Gerven; Donders Institute for Brain, Cognition and Behaviour

Recent developments on the encoding and decoding of visual stimuli have relied on different feature representations such as pixel-level, Gabor wavelet or semantic representations. In previous work, we showed that high-quality reconstructions of images can be obtained via the analytical inversion of regularized linear models operating on individual pixels. However, such simple models do not account for the complex nonlinear transformations of sensory input that take place in the visual hierarchy. I will argue that these nonlinear transformations can be estimated independent of brain data using statistical approaches. Decoding based on the resulting feature space is shown to yield better results than those obtained using a hand-designed feature space based on Gabor wavelets. I will discuss how alternative feature spaces that are either learned or hand-designed can be compared with one another, thereby providing insight into what visual information is represented where in the brain. Finally, I will present some recent encoding and decoding results obtained using ultra-high field MRI.
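
As a concrete gloss on the “analytical inversion of regularized linear models”: a ridge regression mapping brain responses to pixels has a closed-form solution. The sketch below uses synthetic data; the array shapes and penalty are assumptions for illustration, not details from the talk.

    # Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_voxels, n_pixels = 200, 50, 100
    X = rng.standard_normal((n_trials, n_voxels))   # brain responses
    Y = rng.standard_normal((n_trials, n_pixels))   # pixel intensities
    lam = 10.0                                      # regularization strength

    W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)
    reconstruction = X[:1] @ W                      # image decoded from trial 1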

Identifying the nonlinearities used in extrastriate cortex

Speaker: Kendrick Kay; Department of Psychology, Washington University in St. Louis

In this talk, I will discuss recent work in which I used fMRI measurements to develop models of how images are represented in human visual cortex. These models consist of specific linear and nonlinear computations and predict BOLD responses to a wide range of stimuli. The results highlight the importance of certain nonlinearities (e.g. compressive spatial summation, second-order contrast) in explaining responses in extrastriate areas. I will describe important choices made in the development of the approach regarding stimulus design, experimental design, and analysis. Furthermore, I will emphasize (and show through examples) that understanding representation requires a dual focus on abstraction and specificity. To grasp complex systems, it is necessary to develop computational concepts, language, and intuition that can be applied independently of data (abstraction). On the other hand, a model risks irrelevance unless it is carefully quantified, implemented, and systematically validated on experimental data (specificity).

Carving up the ventral stream with controlled naturalistic stimuli

Speaker: Jeremy Freeman; HHMI Janelia Farm Research Campus
Authors: Corey M. Ziemba, J. Anthony Movshon, Eero P. Simoncelli, and David J. Heeger; Center for Neural Science, New York University, New York, NY

The visual areas of the primate cerebral cortex provide distinct representations of the visual world, each with a distinct function and topographic representation. Neurons in primary visual cortex respond selectively to orientation and spatial frequency, whereas neurons in inferotemporal and lateral occipital areas respond selectively to complex objects. But the areas in between, in particular V2 and V4, have been more difficult to differentiate on functional grounds. Bottom-up receptive field mapping is ineffective because these neurons respond poorly to artificial stimuli, and top-down approaches that employ the selection of “interesting” stimuli suffer from the curse of dimensionality and the arbitrariness of the stimulus ensemble. I will describe an alternative approach, in which we use the statistics of natural texture images and computational principles of hierarchical coding to generate controlled but naturalistic stimuli, and then use these images as targeted experimental stimuli in electrophysiological and fMRI experiments. Responses to such “naturalistic” stimuli reliably differentiate neurons in area V2 from those in V1, both in single units recorded from macaque monkeys and in humans as measured using fMRI. In humans, responses to these stimuli, alongside responses to both simpler and more complex stimuli, suggest a simple functional account of the visual cortical cascade: whereas V1 encodes basic spectral properties, V2, V3, and to some extent V4 represent the higher-order statistics of textures. Downstream areas capture the kinds of global structures that are unique to images of natural scenes and objects.

Vision as transformation of representational geometry

Speaker: Nikolaus Kriegeskorte; Medical Research Council, Cognition and Brain Sciences Unit, Cambridge, UK

Vision can be understood as the transformation of representational geometry from one visual area to the next, and across time, as recurrent dynamics converge within a single area. The geometry of a representation can be usefully characterized by a representational distance matrix computed by comparing the patterns of brain activity elicited by a set of visual stimuli. This approach makes it possible to compare representations between brain areas, between different latencies after stimulus onset, between different individuals, and between brains and computational models. I will present results from human functional imaging of early and ventral-stream visual representations. Results from fMRI suggest that the early visual image representation is transformed into an object representation that emphasizes behaviorally important categorical divisions more strongly than accounted for by visual-feature computational models that are not explicitly optimized to distinguish categories. The categorical clusters appear to be consistent across individual human brains. However, the continuous representational space is unique to each individual and predicts individual idiosyncrasies in object similarity judgements. The representation flexibly emphasizes task-relevant category divisions through subtle distortions of the representational geometry. MEG results further suggest that the categorical divisions emerge dynamically, with the latency of categoricality peaks suggesting a role for recurrent processing.
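
A representational distance matrix of the kind described can be computed in a few lines; correlation distance is one common choice, and the data below are synthetic stand-ins for condition-by-response activity patterns.

    # RDM from an (n_conditions x n_responses) pattern matrix.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(2)
    patterns = rng.standard_normal((8, 500))   # 8 stimuli x 500 measurements
    rdm = squareform(pdist(patterns, metric="correlation"))  # 8 x 8, zero diagonal

    # Two representations (areas, time points, or brain vs. model) are then
    # compared by correlating the upper triangles of their RDMs:
    iu = np.triu_indices(8, k=1)
    # np.corrcoef(rdm_a[iu], rdm_b[iu])  -> representational similarity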

Modern population approaches for discovering neural representations and for discriminating among algorithms that might produce those representations.

Speaker: James J. DiCarlo, MD, PhD; Professor of Neuroscience; Head, Department of Brain and Cognitive Sciences; Investigator, McGovern Institute for Brain Research; Massachusetts Institute of Technology, Cambridge, USA
Authors: Ha Hong and Daniel Yamins; Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, USA

Visual object recognition (OR) is a central problem in systems neuroscience, human psychophysics, and computer vision. The primate ventral stream, which culminates in inferior temporal cortex (IT), is an instantiation of a powerful OR system. To understand this system, our approach is to first drive a wedge into the problem by finding the specific patterns of neuronal activity (a.k.a. neural “representations”) that quantitatively express the brain’s solution to OR. I will argue that, to claim discovery of a neural “representation” for OR, one must show that a proposed population of visual neurons can perfectly predict psychophysical phenomena pertaining to OR. Using simple decoder tools, we have achieved exactly this result, demonstrating that IT representations (as opposed to V4 representations) indeed predict OR phenomena. Moreover, we can “invert” the decoder approach to use large-scale psychophysical measurements to make new, testable predictions about the IT representation. While decoding methods are powerful for exploring the link between neural activity and behavior, they are less well suited for addressing how pixel representations (i.e. images) are transformed into neural representations that subserve OR. To address this issue, we have adopted the representational dissimilarity matrices (RDM) approach promoted by Niko Kriegeskorte. We have recently discovered novel models (i.e. image-computable visual features) that, using the RDM measure of success, explain IT representations dramatically better than all previous models.
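
The “simple decoder tools” referred to are typically linear classifiers applied to population responses. A generic sketch on synthetic data (not the authors’ pipeline; the signal strength and sizes are arbitrary):

    # Linear readout of object identity from a simulated neural population.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_units, n_objects = 400, 128, 8
    labels = rng.integers(0, n_objects, n_trials)
    responses = rng.standard_normal((n_trials, n_units))
    responses[np.arange(n_trials), labels] += 1.5   # weak object-selective signal

    decoder = LogisticRegression(max_iter=1000)
    print(cross_val_score(decoder, responses, labels, cv=5).mean())  # >> 1/8 chance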


What are you doing? Recent advances in visual action recognition research.

Organizers: Stephan de la Rosa & Heinrich Bülthoff; Max Planck Institute for Biological Cybernetics
Presenters: Nick Barraclough, Cristina Becchio, Stephan de la Rosa, Ehud Zohary, Martin A. Giese


Symposium Description

The visual recognition of actions is critical for humans when interacting with their physical and social environment. The unraveling of the underlying processes has sparked wide interest in several fields including computational modeling, neuroscience, and psychology. Recent research endeavors on how people recognize actions provide important insights into the mechanisms underlying action recognition. Moreover, they give new ideas for man-machine interfaces and have implications for artificial intelligence. The aim of the symposium is to provide an integrative view on recent advances in our understanding of the psychological and neural processes underlying action recognition. Speakers will discuss new and related developments in the recognition of mainly object- and human-directed actions from a behavioral, neuroscientific, and modeling perspective. These developments include, among other things, a shift from the investigation of isolated actions to the examination of action recognition under more naturalistic conditions including contextual factors and the human ability to read social intentions from the recognized actions. These findings are complemented by neuroscientific work examining the action representation in motor cortex. Finally, a novel theory of goal-directed actions will be presented that integrates the results from various action recognition experiments. The symposium will first discuss behavioral and neuroscientific aspects of action recognition and then will shift its attention to the modeling of the processes underlying action recognition. More specifically, Nick Barraclough will present research on action recognition using adaptation paradigms and object-directed and locomotive actions. He will talk about the influence of the observer’s mental state on action recognition using displays that present the action as naturalistically as possible. Cristina Becchio will talk about actions and their ability to convey social intentions. She will present research on the translation of social intentions into kinematic patterns of two interacting persons and discuss the observers’ ability to visually use these kinematic cues for inferring social intentions. Stephan de la Rosa will focus on social actions and talk about the influence of social and temporal context on the recognition of social actions. Moreover, he will present research on the visual representation underlying the recognition of social interactions. Ehud Zohary will discuss the representation of actions within the motor pathway using fMRI and the sensitivity of the motor pathway to visual and motor aspects of an action. Martin Giese will wrap up the symposium by presenting a physiologically plausible neural theory for the perception of goal-directed hand actions and discuss this theory in the light of recent physiological findings. The symposium is targeted towards the general VSS audience and provides a comprehensive and integrative view of an essential ability of human visual functioning.

Presentations

Other peoples’ actions interact within our visual system

Speaker: Nick Barraclough; Department of Psychology, University of York, York, UK

Perception of actions relies on the behavior of neurons in the temporal cortex that respond selectively to the actions of other individuals. It is becoming increasingly clear that visual adaptation, well known for influencing early visual processing of more simple stimuli, appears also to have an influence at later processing stages where actions are coded. In a series of studies we, and others, have been using visual adaptation techniques to attempt to characterize the mechanisms underlying our ability to recognize and interpret information from actions. Action adaptation generates action aftereffects where perception of subsequent actions is biased; they show many of the characteristics of both low-level and high-level face aftereffects, increasing logarithmically with duration of action observation, and declining logarithmically over time. I will discuss recent studies where we have investigated the implications for action adaptation in naturalistic social environments. We used high-definition, orthostereoscopic presentation of life-sized photorealistic actors on a 5.3 x 2.4 m screen in order to maximize immersion in a Virtual Reality environment. We find that action recognition and judgments we make about the internal mental state of other individuals is changed in a way that can be explained by action adaptation. Our ability to recognize and interpret the actions of an individual is dependent, not only on what that individual is doing, but the effect that other individuals in the environment have on our current brain state. Whether or not two individuals are actually interacting in the environment, it seems they interact within our visual system.

On seeing intentions in others’ movements

Speaker: Cristina Becchio; Centre for Cognitive Science, Department of Psychology, University of Torino, Torino, Italy; Department of Robotics, Brain, and Cognitive Science, Italian Institute of Technology, Genova, Italy

Starting from Descartes, philosophers, psychologists, and more recently neuroscientists have often emphasized the idea that intentions are not things that can be seen. They are mental states, and perception cannot be smart enough to reach the mental states that are hidden away (imperceptible) in the other person’s mind. Based on this assumption, standard theories of social cognition have mainly focused on the contribution of higher-level cognition to intention understanding. Only recently has it been recognized that intentions are deeply rooted in the actions of interacting agents. In this talk, I present findings from a new line of research showing that intentions translate into differential kinematic patterns. Observers are especially attuned to kinematic information and can use early differences in visual kinematics to anticipate what another person will do next. This ability is crucial not only for interpreting the actions of individual agents, but also for predicting how, in the context of a social interaction between two agents, the actions of one agent relate to the actions of a second agent.

The influence of context on the visual recognition of social actions.

Speaker: Stephan de la Rosa; Department Human Perception, Cognition and Action; Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Authors: Stephan Streuber, Department Human Perception, Cognition and Action; Max Planck Institute for Biological Cybernetics, Tübingen, Germany Heinrich Bülthoff, Department Human Perception, Cognition and Action; Max Planck Institute for Biological Cybernetics, Tübingen, Germany

Actions do not occur out of the blue. Rather, they are often part of human interactions and are, therefore, embedded in an action sequence. Previous research on visual action recognition has primarily focused on elucidating the perceptual and cognitive mechanisms in the recognition of individual actions. Surprisingly, the social and temporal context in which actions are embedded has received little attention. I will present studies examining the importance of context for action recognition. Specifically, we examined the influence of social context (i.e. competitive vs. cooperative interaction settings) on the observation of actions during real-life interactions and found that social context modulates action observation. Moreover, we investigated the influence of perceptual and temporal factors (i.e. action context as provided by visual information about preceding actions) on action recognition using an adaptation paradigm. Our results provide evidence that these experimental effects are modulated by temporal context, suggesting that action recognition is guided not only by the immediate visual information but also by temporal and social contexts.

On the representation of viewed action in the human motor pathways

Speaker: Ehud Zohary; Department of Neurobiology, Alexander Silberman Institute of Life Sciences, Hebrew University of Jerusalem, Israel

I will present our research on the functional properties of brain structures involved in object-directed actions. Specifically, we explore the nature of viewed-action representation using functional magnetic resonance imaging (fMRI). One cortical region involved in action recognition is the anterior intraparietal (AIP) cortex. The principal factor determining the response in AIP is the identity of the observed hand. Similar to classical motor areas, AIP displays a clear preference for the contralateral hand during motor action (i.e., object manipulation) without visual feedback. This dual visuomotor grasping representation suggests that AIP may be involved in the specific motor simulation of hand actions. Furthermore, viewing object-directed actions (from an egocentric viewpoint, as in self-action) elicits a similar selectivity for the contralateral hand. However, if the viewed action is seen from an allocentric viewpoint (i.e. being performed by another person facing the viewer), greater activation in AIP is found for the ipsilateral hand. Such a mapping may be useful for imitation of hand actions (e.g. finger tapping) made by someone facing us, which is more accurate when using the opposite (mirror-image) hand. Finally, using the standard “center-out” task requiring visually guided hand movements in various directions, we show that primary motor cortex (M1) is sensitive to both motor and visual components of the task. Interestingly, the visual aspects of movement are encoded in M1 only when they are coupled with motor consequences. Together, these studies indicate that both perceptual and motor aspects are encoded in the patterns of activity in the cortical motor pathways.

Neural theory for the visual perception of goal-directed actions and perceptual causality

Speaker: Martin A. Giese; Section for Computational Sensomotorics, Dept. for Cognitive Neurology, HIH and CIN, University Clinic Tübingen, Germany
Authors: Falk Fleischer (1,2); Vittorio Caggiano (2,3); Jörn Pomper (2); Peter Thier (2). 1) Section for Computational Sensomotorics; 2) Dept. for Cognitive Neurology, HIH and CIN, University Clinic Tübingen, Germany; 3) McGovern Institute for Brain Research, M.I.T., Cambridge, MA. Supported by the DFG, BMBF, and EU FP7 projects AMARSI, ABC, and the Human Brain Project.

The visual recognition of goal-directed movements, even from impoverished stimuli, is a central visual function with high importance for survival and motor learning. In cognitive neuroscience and brain imaging, a number of speculative theories have been proposed that suggest possible computational processes that might underlie this function. However, these theories typically leave it completely open how the proposed functions might be implemented by local cortical circuits. Complementing these approaches, we present a physiologically-inspired neural theory for the visual processing of goal-directed actions, which provides a unifying account for existing neurophysiological results on the visual recognition of hand actions in monkey cortex. The theory motivated, and in part correctly predicted, specific computational properties of action-selective neurons in monkey cortex, which could later be verified physiologically. As opposed to several dominant theories in the field, the model demonstrates that robust view-invariant action recognition from monocular videos can be accomplished without a reconstruction of the three-dimensional structure of the effector, and without a critical reliance on an internal simulation of motor programs. As a ‘side-effect’, the model also reproduces simple forms of causality perception, predicting that these stimuli might be processed by similar neural structures as natural hand actions. Consistent with this prediction, F5 mirror neurons can be shown to respond selectively to such stimuli. This suggests that the processing of goal-directed actions might be accounted for by relatively simple neural mechanisms that are accessible by electrophysiological experimentation.

The visual white-matter matters! Innovation, data, methods and applications of diffusion MRI and fiber tractography

Organizers: Franco Pestilli & Ariel Rokem; Stanford University
Presenters: Ariel Rokem, Andrew Bock, Holly Bridge, Suzy Scherf, Hiromasa Takemura, David Van Essen

Symposium Description

For about two decades, functional MR imaging has allowed investigators to map visual cortex in the living human brain, and vision scientists have identified clusters of cortical regions with different functional properties. The function of these maps is determined both by the selectivity of their neurons and by their connections. Communication between cortical regions is carried by long-range white-matter fascicles, and the wiring of these fascicles is important for implementing the perceptual functions of the visual maps in occipital, temporal and parietal cortex. Diffusion magnetic resonance imaging (dMRI) and computational tractography are the only technologies that enable scientists to measure the white matter in the living human brain. In the decade since their development, these technologies have revolutionized our understanding of the importance of human white matter for health and disease. Recent advances in dMRI and fiber tractography have opened new avenues for understanding the white-matter connections in the living human brain; with these technologies we are, for the first time, in a position to draw a complete wiring diagram of the human visual system. By probing the motion of water molecules at the micron scale, dMRI can be used to study the microstructural properties and geometric organization of the visual white-matter fascicles. These measurements in living brains can help clarify the relationship between the properties of the tissue within the fascicles and visual perception, both in healthy individuals and in cases where vision is impeded by disease. Prior to these measurements, the white matter was thought of as a passive cabling system, but modern measurements show that white-matter axons and glia respond to experience and that the tissue properties of the white matter are transformed during development and following training. The white-matter pathways comprise a set of active wires, and the responses and properties of these wires predict human cognitive and perceptual abilities. This symposium targets a wide range of investigators working in vision science by providing an introduction to the principles of dMRI measurements, the algorithms used to identify anatomical connections, and the models used to characterize white-matter properties. The speakers have pioneered the use of diffusion and functional MRI and fiber tractography to study the human visual white matter, answering a wide range of scientific questions about connectivity, development and plasticity. The symposium will also introduce publicly available resources (analysis software and data) to help advance the study of human visual cortex and white matter, with special emphasis on the high-quality MR measurements provided by the Human Connectome Project (HCP).

Presentations

Measuring and modelling of diffusion and white-matter tracts

Speaker: Ariel Rokem; Stanford University
Authors: Franco Pestilli

This talk will present a general methodological overview of diffusion MRI (dMRI), with a special focus on methods used to image connectivity and tissue properties in the human visual system. We will start by describing the principles of dMRI measurements. We will then provide an overview of models that are used to describe the signal and to make inferences about the properties of the tissue and the trajectories of fiber fascicles in the white matter. We will focus on the classical Diffusion Tensor Model (DTM), which is used in many applications, and on the more recent Sparse Fascicle Models (SFM), which represent the signal more realistically as a combination of signals from different fascicles. Using cross-validation, we have found that the DTM provides an accurate representation of the data, better than the reliability of a repeated measurement. SFMs provide even more accurate models of the data, particularly in regions where different fiber tracts cross. In the second part of the talk, we will focus on tractography, with special emphasis on probabilistic and deterministic approaches. We will introduce ideas about how to validate white-matter trajectories and how to perform statistical inferences about connectivity between different parts of the visual system. A major problem for the field is that different algorithms provide different estimates of connectivity. This problem can be addressed by choosing the fiber estimates that best account for the data in a repeated measurement (cross-validation).
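
To make the models concrete, here is a minimal, self-contained sketch of the Diffusion Tensor Model and of the cross-validation logic described above. This is our illustration, not the speakers' software, and every value in it (directions, b-values, noise level) is hypothetical.

```python
# A minimal sketch of the Diffusion Tensor Model: the signal for gradient
# direction g and b-value b is modeled as
#   S(g, b) = S0 * exp(-b * g^T D g),
# with D a symmetric 3x3 tensor; taking logs makes the fit linear in the
# 6 unique tensor elements. All values below are hypothetical.
import numpy as np

def design_matrix(bvecs, bvals):
    """One row per measurement: -b * [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz]."""
    gx, gy, gz = bvecs.T
    B = np.column_stack([gx**2, gy**2, gz**2, 2*gx*gy, 2*gx*gz, 2*gy*gz])
    return -bvals[:, None] * B

def fit_tensor(signal, bvecs, bvals, s0):
    """Log-linear least-squares estimate of the tensor coefficients."""
    coef, *_ = np.linalg.lstsq(design_matrix(bvecs, bvals),
                               np.log(signal / s0), rcond=None)
    return coef

def predict(coef, bvecs, bvals, s0):
    return s0 * np.exp(design_matrix(bvecs, bvals) @ coef)

# Simulate two noisy "repeats" of the same 60-direction measurement.
rng = np.random.default_rng(0)
bvals = np.full(60, 2000.0)                                  # s/mm^2
bvecs = rng.normal(size=(60, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
true_coef = np.array([1.5e-3, 0.4e-3, 0.4e-3, 0, 0, 0])      # prolate tensor
s0 = 1000.0
clean = predict(true_coef, bvecs, bvals, s0)
scan1 = clean * (1 + 0.03 * rng.normal(size=60))
scan2 = clean * (1 + 0.03 * rng.normal(size=60))

# Cross-validation: fit on scan1, test against the independent scan2.
pred = predict(fit_tensor(scan1, bvecs, bvals, s0), bvecs, bvals, s0)
print("model RMSE:      ", np.sqrt(np.mean((pred - scan2) ** 2)))
print("test-retest RMSE:", np.sqrt(np.mean((scan1 - scan2) ** 2)))
```

A model that is "better than the reliability of a repeated measurement" is one whose cross-validated RMSE falls below the test-retest RMSE, as it typically does here.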

Gross topographic organization in the corpus callosum is preserved despite abnormal visual input

Speaker: Andrew Bock; University of Washington
Authors: Melissa Saenz, University of Lausanne; Holly Bridge, Oxford; Ione Fine, University of Washington

The loss of sensory input early in development has been shown to induce dramatic anatomical and functional changes within the central nervous system. Using probabilistic diffusion tractography, we examined the retinotopic organization of splenial callosal connections within early blind, anophthalmic, achiasmatic and control subjects. Early blind subjects experience prenatal retinal “waves” of spontaneous activity similar to those of sighted subjects, and only lack postnatal visual experience. In anophthalmia, the eye is either absent or arrested at an early prenatal stage, depriving these subjects of both pre- and postnatal visual input, while in achiasma there is a lack of crossing at the optic chiasm such that the white matter projection from each eye is ipsilateral. Comparing these groups provides a way of separating the influence of pre- and postnatal retinal deprivation and abnormal visual input on the organization of visual connections across hemispheres. We found that retinotopic mapping within the splenium was not measurably disrupted in any of these groups compared to visually normal controls. These results suggest that neither prenatal retinal activity nor postnatal visual experience plays a role in the large-scale topographic organization of visual callosal connections within the splenium, and the general method we describe provides a useful way of quantifying the organization of large white matter tracts.

Using diffusion-weighted tractography to investigate dysfunction of the visual system

Speaker: Holly Bridge; Oxford
Authors: Rebecca Millington; James Little; Kate Watkins

The functional consequences of damage to, or dysfunction of, different parts of the visual pathway have been well characterized for many years. Possibly the most extreme dysfunction is the lack of eyes (anophthalmia), which prevents any stimulation of this pathway by light input. In this case, functional MRI indicates that the occipital cortex is used for processing language and other auditory stimuli. This raises the question of how this information gets to the occipital cortex: are there differences in the underlying anatomical connectivity, or can existing pathways be used to carry different information? Here I will describe several approaches we have taken to try to understand white-matter connectivity in anophthalmia using diffusion tractography. Damage to the visual pathway can also be sustained later in life, either in the periphery or in the post-chiasmatic pathway (optic tract, lateral geniculate nucleus, optic radiation or visual cortex). When damage occurs in adulthood, any changes to white matter are likely to be the result of degeneration. Sensitive measures of white-matter integrity can be used to illustrate patterns of degeneration in patient populations. However, in the presence of lesions, and where white-matter tracts are relatively small (e.g. the optic tract), measures derived from diffusion-weighted imaging can be misleading. In summary, I will present an overview of the potential for employing diffusion tractography to understand plasticity and degeneration in the abnormal visual system, highlighting potential confounds that may arise in patient populations.

Structural properties of white matter circuits necessary for face perception

Speaker: Suzy Scherf; Penn State
Authors: Marlene Behrmann, Carnegie Mellon University; Cibu Thomas, NIH; Galia Avidan, Beer Sheva University; Dan Elbich, Penn State University

White-matter tracts, which communicate signals between cortical regions, reportedly play a critical role in the implementation of perceptual functions. We examine this claim by evaluating structural connectivity, and its relationship to neural function, in the domain of face recognition in both developing individuals and those with face-recognition deficits. In all studies, we derived the micro- as well as macro-structural properties of the inferior longitudinal fasciculus (ILF) and of the inferior fronto-occipital fasciculus (IFOF), which connect distal regions of cortex that respond preferentially to faces. In participants aged 6–23 years, we observed age-related differences in both the macro- and micro-structural properties of the ILF. Critically, these differences were specifically related to an age-related increase in the size of the functionally defined fusiform face area. We then demonstrated the causal nature of this structure-function relationship in individuals who are congenitally prosopagnosic (CP) and in an aging population, which exhibits an age-related decrement in face recognition. The CPs exhibited reduced volume of the IFOF and ILF, which was related to the severity of their face-processing deficit. Similarly, in the older population there were significant reductions in the structural properties of the ILF and IFOF that were related to behavioral performance. Finally, we are exploring whether individual differences in the face-processing behavior of normal adults are related to variations in these structure-function relations. This dynamic association between emerging structural connectivity, functional architecture and perceptual behavior reveals the critical role of neural circuits in human cortex and perception.

A major white-matter pathway between the ventral and dorsal streams

Speaker: Hiromasa Takemura; Stanford University
Authors: Brian Wandell

Over the last several decades, visual neuroscientists have learned how to use fMRI to identify multiple visual field maps in the living human brain. Several theories have been proposed to characterize the organization of these visual field maps; a key theory with substantial support distinguishes a dorsal stream involved in spatial processing from a ventral stream involved in categorical processing. We combined fMRI, diffusion MRI and fiber tractography to identify a major white-matter pathway, the Vertical Occipital Fasciculus (VOF), connecting maps within the dorsal and ventral visual streams. We use a model-based method, Linear Fascicle Evaluation (LiFE), to assess the statistical evidence supporting the VOF wiring pattern. There is strong evidence supporting the hypothesis that the dorsal and ventral streams of visual maps communicate through the VOF. This pathway is large, and its organization suggests that the human ventral and dorsal visual maps communicate substantial information through V3A/B and hV4/VO-1. We suggest that the VOF is crucial for transmitting signals between regions that encode object properties, including form, identity and color information, and regions that map spatial location to action plans. Findings on the VOF will extend the current understanding of the human visual field map hierarchy.
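
As a rough illustration of the evaluation idea behind LiFE (our sketch, not the published implementation), the snippet below scores a set of candidate fascicles by finding the nonnegative weights that best predict a measured diffusion signal; fascicles assigned zero weight contribute no evidence for the connection. All sizes and data are hypothetical.

```python
# Schematic core of connectome evaluation: given a matrix M whose columns
# hold each candidate fascicle's predicted contribution to the diffusion
# signal, find nonnegative weights w minimizing ||y - M w||. Fascicles
# with zero weight can be pruned from the connectome.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_meas, n_fascicles = 200, 50

M = rng.random((n_meas, n_fascicles))   # per-fascicle signal predictions
w_true = np.zeros(n_fascicles)
w_true[:10] = rng.random(10)            # only 10 fascicles carry signal
y = M @ w_true + 0.01 * rng.normal(size=n_meas)

w, rnorm = nnls(M, y)                   # minimize ||y - M w|| with w >= 0
print(f"{np.count_nonzero(w > 1e-6)} of {n_fascicles} fascicles "
      f"supported by the data (residual norm {rnorm:.3f})")
```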

What is the Human Connectome Project telling us about human visual cortex?

Speaker: David Van Essen; Washington University

The Human Connectome Project (HCP) is acquiring and sharing vast amounts of neuroimaging data from healthy young adults, using high-resolution structural MRI, diffusion MRI, resting-state fMRI, and task-fMRI. Together, these complementary modalities provide invaluable information and insights regarding the organization and connectivity of human visual cortex. This presentation will highlight recent results obtained using surface-based analysis and visualization approaches to characterize structural and functional connectivity of visual cortex in individuals and group averages.

Mid-level representations in visual processing

Organizer: Jonathan Peirce; University of Nottingham
Presenters: Jonathan Peirce, Anitha Pasupathy, Zoe Kourtzi, Gunter Loffler, Tim Andrews, Hugh Wilson

Symposium Description

A great deal is known about the early stages of visual processing, whereby light of different wavelengths is detected and filtered in such a way as to represent something approximating “edges”. A large number of studies are also examining the “high-level” processing and representation of visual objects: the representation of faces and scenes, and the visual areas responsible for their processing. Remarkably few studies examine either the intervening “mid-level” representations or the visual areas involved in this level of processing. This symposium will examine what form these intermediate representations might take and what methods we have available to study such mechanisms. The speakers have used a variety of methods to try to understand mid-level processing and the associated visual areas. Along the way, a number of questions will be considered. Do we even have intermediate representations, or could higher-order object representations be built directly on the outputs of V1 cells, given that all of the information is available there? How does such a representation avoid the problem of parameter explosion? What aspects of the visual scene are encoded at this level? How could we understand such representations further? Why have we not made further progress in this direction before; is the problem simply too hard to study? The symposium is designed for attendees of all levels and will involve a series of 20-minute talks (each including 5 minutes for questions) from each of the speakers. We hope to persuade attendees that this is an important and tractable problem that vision scientists should be working hard to solve.

Presentations

Compound feature detectors in mid-level vision

Speaker: Jonathan Peirce; University of Nottingham

A huge number of studies have considered low-level visual processes (such as the detection of edges, colors and motion) and high-level visual processes (such as the processing of faces and scenes). Relatively few studies examine the nature of intermediate visual representations, or “mid-level” vision. One approach to studying mid-level visual representations is to try to understand the mechanisms that combine the outputs of V1 neurons to create intermediate feature detectors. We have used adaptation techniques to probe the existence of detectors for combinations of sinusoids, such as plaid-form detectors or curvature detectors. For both of these features, we have shown that adaptation to the compound is greater than predicted by adaptation to the parts alone, and that the effect is greatest when the components form a plaid that we perceive as coherent or a curve that is continuous. Creating such representations requires simple logical AND-gates, which might be formed simply by summing the nonlinear outputs of V1 neurons. Many questions remain, however, about where in the visual cortex these representations are stored and how the different levels of representation interact.
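
The AND-gate idea lends itself to a toy numerical sketch (ours, not the speaker's model): an expansive nonlinearity applied to each component response, followed by summation and a threshold, yields a unit that is silent to either grating alone but responds to the compound. All parameter values are hypothetical.

```python
# A soft AND-gate built from summed, expansively nonlinear V1 outputs:
# one component alone stays below threshold; the compound exceeds it.
def v1_output(contrast, exponent=2.0):
    """Expansive static nonlinearity applied to a linear V1 response."""
    return contrast ** exponent

def conjunction_detector(c1, c2, threshold=1.5):
    """Half-rectified sum of two nonlinear V1 outputs."""
    return max(v1_output(c1) + v1_output(c2) - threshold, 0.0)

print(conjunction_detector(1.0, 0.0))   # one grating alone -> 0.0
print(conjunction_detector(1.0, 1.0))   # compound (plaid)  -> 0.5
```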

Boundary curvature as a basis of shape encoding in macaque area V4

Speaker: Anitha Pasupathy; University of Washington

The broad goal of research in my laboratory is to understand how visual form is encoded in the intermediate stages of the ventral visual pathway, how these representations arise, and how they contribute to object-recognition behavior. Our current focus is primate V4, an area known to be critical for form processing. Given the enormity of the shape-encoding problem, our strategy has been to test specific hypotheses with custom-designed, parametric, artificial stimuli. With guidance from shape theory and the computer-vision and psychophysics literatures, we identify stimulus features (for example, T-junctions) that might be critical in natural vision and work these into our stimulus design, so as to progress in a controlled fashion toward more naturalistic stimuli. I will present examples from our past and current experiments that successfully employ this strategy and that have led to the discovery of boundary curvature as a basis for shape encoding in area V4. I will conclude with some brief thoughts on how we might move from the highly controlled stimuli we currently use to the richer and more complex stimuli of natural vision.

Adaptive shape coding in the human visual brain

Speaker: Zoe Kourtzi; University of Birmingham

In the search for neural codes, we typically measure responses to input stimuli alone without considering their context in space (i.e. scene configuration) or time (i.e. temporal history). However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience. Here, we present work showing that experience plays a critical role in molding mid-level visual representations and shape perception. Combining behavioral and brain imaging measurements we demonstrate that learning optimizes the binding of local elements into shapes, and the selection of behaviorally relevant features for shape categorization. First, we provide evidence that the brain flexibly exploits image regularities and learns to use discontinuities typically associated with surface boundaries for contour linking and target identification. Specifically, learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training, is stimulus dependent, and enhances processing in intraparietal regions. Second, by reverse correlating behavioral and fMRI responses with noisy stimulus trials, we identify the critical image parts that determine the observers’ choice in a shape categorization task. We demonstrate that learning optimizes shape templates by tuning the representation of informative image parts in higher ventral cortex. In sum, we propose that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and shape visual category representations.
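
The reverse-correlation logic in the second part of this abstract can be sketched compactly with simulated data (an illustration of the general technique, not the authors' pipeline): sort the noisy trials by the simulated observer's choice and subtract the mean noise fields, yielding a classification image whose large-valued pixels mark the image parts that determined the choice.

```python
# Classification-image sketch: average the per-trial noise fields by
# choice and subtract; pixels that drive the decision stand out.
import numpy as np

rng = np.random.default_rng(2)
n_trials, size = 2000, 16
noise = rng.normal(size=(n_trials, size, size))     # per-trial noise fields

template = np.zeros((size, size))
template[6:10, :] = 1.0                             # the "informative" part
# Simulated observer: chooses "A" when the noise matches the template.
choice_a = (noise * template).sum(axis=(1, 2)) > 0

classification_image = (noise[choice_a].mean(axis=0)
                        - noise[~choice_a].mean(axis=0))
print(classification_image[8, :4])    # inside the template: clearly positive
print(classification_image[0, :4])    # outside the template: near zero
```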

Probing intermediate stages of shape processing

Speaker: Gunter Loffler; Glasgow Caledonian University

The visual system provides a representation of what objects are and where they are. This entails parsing the visual scene into distinct objects. Initially, the visual system encodes information locally. While interactions between adjacent cells can explain how local fragments of an object’s contour are extracted from a scene, more global mechanisms must be able to integrate information beyond that of neighbouring cells to allow for the representation of extended objects. This talk will examine the nature of intermediate-level computations in the transformation from discrete local sampling to the representation of complex objects. Several paradigms were used to study how information concerning the position and orientation of local signals is combined: a shape discrimination task requiring observers to discriminate between contours; a shape coherence task measuring the number of elements required to detect a contour; and a shape illusion in which positional and orientational information is combined inappropriately. Results support the notion of mechanisms that integrate information beyond that of neighbouring cells and are optimally tuned to a range of different contour shapes. Global integration is not restricted to central vision: peripheral data show that certain aspects of this process emerge only at intermediate stages. Moreover, intermediate processing appears vulnerable to damage. Diverse clinical populations (migraineurs, pre-term children and children with Cortical Visual Impairment) show specific deficits on these tasks that cannot be accounted for by low-level processes. Taken together, evidence is converging towards the identification of an intermediate level of processing at which sensitivity to global shape attributes emerges.

Low-level image properties of visual objects explain category-selective patterns of neural response across the ventral visual pathway

Speaker: Tim Andrews; University of York

Neuroimaging research over the past 20 years has begun to reveal a picture of how the human visual system is organized. A key organizing principle that has arisen from these studies is the distinction between low-level and high-level visual regions. Low-level regions are organized into visual field maps that are tightly linked to the image properties of the stimulus. In contrast, high-level visual areas are thought to be arranged in modules that are selective for particular object categories. It is unknown, however, whether this selectivity is truly based on object category, or whether it reflects tuning for low-level features that are common to images from a particular category. To address this issue, we compared the pattern of neural response elicited by each object category with the corresponding low-level properties of images from each object category. We found a strong positive correlation between the neural patterns and the underlying low-level image properties. Importantly, the correlation was still evident when the within-category correlations were removed from the analysis. Next, we asked whether low-level image properties could also explain variation in the pattern of response to exemplars from individual object categories (faces or scenes). Again, a positive correlation was evident between the similarity in the pattern of neural response and the low-level image properties of exemplars from individual object categories. These results suggest that the pattern of response in high-level visual areas may be better explained by the image statistics of visual stimuli than by their associated categorical or semantic properties.
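
The comparison described here can be summarized schematically (synthetic data, not the authors' analysis): build one set of pairwise category distances from the neural response patterns and another from the low-level image properties of the same categories, then correlate the two.

```python
# Second-order (representational) similarity comparison between neural
# response patterns and low-level image properties, on synthetic data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_categories, n_features, n_voxels = 8, 50, 100

image_props = rng.normal(size=(n_categories, n_features))  # e.g. image stats
# Hypothetical neural patterns partly driven by those image properties:
neural = (image_props @ rng.normal(size=(n_features, n_voxels))
          + 5.0 * rng.normal(size=(n_categories, n_voxels)))

rho, p = spearmanr(pdist(neural, "correlation"),
                   pdist(image_props, "correlation"))
print(f"neural vs. image-property similarity: rho = {rho:.2f} (p = {p:.3f})")
```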

From Orientations to Objects: Configural Processing in the Ventral Stream

Speaker: Hugh Wilson; York University

I shall review psychophysical and fMRI evidence for a hierarchy of intermediate processing stages in the ventral or form vision system. A review of receptive field sizes from V1 up to TE indicates an increase in diameter by a factor of about 3.0 from area to area. This is consistent with configural combination of adjacent orientations to form curves or angles, followed by combination of curves and angles to form descriptors of object shapes. Psychophysical and fMRI evidence support this hypothesis, and neural models provide a plausible explanation of this hierarchical configural processing.
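
As a back-of-envelope illustration of the factor-of-3.0 scaling (the V1 starting diameter below is hypothetical, chosen for round numbers):

```latex
% If receptive-field diameter grows by a factor of ~3 per stage,
% then d_n = d_0 * 3^n. With a hypothetical d_0 = 1 degree in V1:
\[
  d_n \approx d_0 \cdot 3^{n}, \qquad
  d_{\mathrm{V2}} \approx 3^\circ,\quad
  d_{\mathrm{V4}} \approx 9^\circ,\quad
  d_{\mathrm{TE}} \approx 27^\circ .
\]
```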
