9th Annual Dinner and Demo Night

Monday, May 9, 2011, 7:00 – 10:00 pm

Buffet Dinner: 7:00 – 9:00 pm, Vista Ballroom, Sunset Deck and Mangrove Pool
Demos: 7:30 – 10:00 pm, Royal Palm 4-5 and Acacia Meeting Rooms

Please join us Monday evening for the 9th Annual VSS Demo Night, a spectacular night of imaginative demos solicited from VSS members. The demos highlight the important role of visual displays in vision research and education.

This year, Arthur Shapiro, Dejan Todorovic, and Gideon Caplovitz are co-curators for Demo Night.

New This Year – We are pleased to announce that ViperLib is sponsoring a “best demo for ViperLib” prize. Thanks in part to the generosity of ECVP, the best demo (or two) will be awarded the honor of being featured on ViperLib and will receive 100 Euros or Pounds Sterling. (The winner gets to decide on their currency of preference).

Buffet dinner is served on the Sunset Terrace, Sunset Deck and Mangrove Pool. Demos are located upstairs on the ballroom level in the Royal Palm 4-5 and Acacia Meeting Rooms.

Be sure to visit the exhibitor area in the Orchid Foyer as some exhibitors have also prepared special demos for Demo Night.
Demo Night is free for all registered VSS attendees. Meal tickets are not required, but you must wear your VSS badge for entry to the Dinner Buffet. Guests and family members of all ages are welcome to attend the demos but must purchase a ticket for dinner. You can register your guests at any time during the meeting at the VSS Registration Desk, located in the Royal Palm Foyer. A desk will also be set up at the entrance to the dinner in the Vista Ballroom at 6:30 pm.

Guest prices: Adults: $25, Youth (6-12 years old): $10, Children under 6: free

A Gilbert Stuart Portrait of You

Krista Ehinger, MIT; Eric Altschuler, MD, PhD, New Jersey Medical School
Gilbert Stuart (1755-1828) painted the first five US Presidents, who all died before the advent of photography, as well as President John Quincy Adams (1767-1848), who was photographed. Treating such portrait/photograph pairs as a “Rosetta Stone” to the pre-photography era, we created a model to obtain photographic representations of those never photographed. We “reverse” the model to make “Gilbert Stuart portraits” from photos of attendees.

A New Method to Induce Phantom Limbs

Elizabeth Seckel, V.S. Ramachandran, and Beatrix Krause, UCSD; Claude Miller, UCLA
If one is dark adapted, a brief, bright flash may bleach the photoreceptors, allowing whatever is seen during the flash to be “imprinted” on the retinas for several seconds. By uncoupling visual feedback from proprioception, we will give you the experience of phantom limbs!

Bend it like Beckham

Kurt Debono, Alexander C. Schütz, and Karl R. Gegenfurtner, Justus Liebig University, Giessen, Germany
A pursued target travelling in a straight line on a moving background appears to initially move in the direction of the background before bending towards its veridical direction. The illusion occurs when a peripheral marker is aligned with background motion, and breaks down when it is aligned with target direction.

Blink-Induced-Blindness (BIB) in Multiple-Object-Tracking (MOT) shows when vision does not extrapolate

Deborah J. Aks, Hristyian Kourtev, Harry Haladjian, and Zenon Pylyshyn, Rutgers University; Jiye Shen, SR-Research Ltd.
Do we predict where moving objects reappear when MOT is interrupted? Our blink-contingent demonstration suggests not. When tracking objects that stop during eye-blinks, motion-discontinuities are indistinguishable from continuous motion. Not only do paths appear surprisingly smooth, but tracking is easier. Thus, both percept and performance are not predicted by extrapolation.

Class A procedure for measuring visual aftereffects

Qasim Zaidi and Rob Ennis, Graduate Center for Vision Research SUNY College of Optometry
You will see how to make objective measures of the magnitudes of aftereffects of color, brightness, motion, tilt, spatial-frequency, size, and other visual qualities, using identity judgments on time-varying stimuli. You will also see how you can take this method and apply it to simultaneous adaptation along multiple qualities.

Color Rotation and Expansion/Contraction Standstill

Max R. Dürsteler, University Hospital Zurich
A slowly rotating color wheel with alternating sectors painted in isoluminant colors is perceived as standing still in the presence of a stationary luminance mask. Rings painted in isoluminant colors alternately expand and contract. When shown behind a stationary luminance mask, the percept of expansion or contraction is lost.

Colorful demonstrations of perceptual phenomena

Orit Baruch, University of Haifa
Several perceptual phenomena are demonstrated in paintings, showing that our perceptual tendencies obscure alternative interpretations that may be present in the images.

Dichoptic Completion

Gao Meng, School of Medicine, Tsinghua University, Beijing, China; Li Zhaoping, Department of Computer Science, University College London
We call the illusion “dichoptic completion”: two very different images presented to the two eyes are seen simultaneously or complement each other, rather than rivaling against each other or averaging in perception.

Dyops™ (short for Dynamic Optotypes™) as a revolutionary new method for determining visual acuity

Allan Hytowitz, Animated Vision Associates, LLC; John Hayes, Yu-Chi Tai, Sung Ouk Jang, James Sheedy, Vision Performance Institute, College of Optometry, Pacific University
A constantly rotating segmented image provides a precise measure of acuity, based on the maximum distance at which rotation of the image can be detected, which in turn is determined by the angular size of the image.
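For context on how a detection distance translates into an angular acuity measure, the relation below is the standard visual-angle geometry (a generic identity rather than a Dyops-specific specification; h is the physical extent of the image and d the viewing distance):

```latex
% Angular size of an image of physical extent h viewed from distance d
\theta = 2\arctan\!\left(\frac{h}{2d}\right) \approx \frac{h}{d}\ \text{radians (small-angle approximation)}
```

The acuity estimate is then the angular size of the rotating image at the largest distance at which its rotation is still detected.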

“Exorcist 2011” – Combining the hollow-face and hollow-torso illusions

Thomas V. Papathomas and Tom Grace, Rutgers University
Hollow masks appear as normal convex faces (hollow-mask illusion) and move as viewers move in front of them. We combine hollow masks and “bollow” (convex) torsos. The result is a compelling illusion: torsos and masks rotate in opposite directions; necks twist in a spectacular fashion (“Exorcist illusion”).

How Does the Brain Determine Size? A Size Weight Shape Illusion

Elizabeth Seckel, UCSD; Edward M Hubbard, Vanderbilt; Eric L Altschuler, New Jersey Medical School; VS Ramachandran, UCSD
120 years ago Charpentier described a remarkable effect: a larger object feels lighter than a smaller object of the same scale-weight. But how does the brain determine “size”? Using sets of discs and annuli, attendees can experience for themselves that the brain uses only the largest diameter to determine size.

Infinite X: Illusions of perpetual increases in magnitude

Mark W. Schurgin, Brian R. Levinthal, Alexandra List, Aleksandra Sherman, Satoru Suzuki, Marcia Grabowecky, and Steven L. Franconeri, Northwestern University, Psychology
We present a modification of two- and four-stroke motion that creates a sense of perpetual change in more abstract dimensions, such as size and emotion. This experience is highly sensitive to the timing of a blank frame or reversal of polarity. Furthermore, pausing our animations produces a robust after-effect.

Launching apparent motion: The Michotte gun

Sung-Ho Kim, Jacob Feldman, and Manish Singh, Rutgers University
We will demonstrate that the perception of causality can affect apparent motion. Perceived causality can resolve a motion correspondence problem, and also bias the paths of moving objects.

Lifestyle and its impact on your face

David Perrett, Ross Whitehead, David Hunter, Carmen LeFervre, and Dan Re, University of St Andrews
We show visitors how lifestyle affects their own facial appearance. Facial fatness predicts current illnesses and early mortality. Smoking and sun exposure hasten age-related skin wrinkling and uneven pigmentation. Increasing fruit and vegetable consumption and exercise benefits health and modifies skin colour in ways that enhance healthy appearance.

Meet a robot that navigates and sees as we do

Yunfeng Li and Tadamasa Sawada, Purdue University; Meng Yi and Longin Jan Latecki, Temple University; TaeKyu Kwon and Yun Shi, Purdue University; Robert M. Steinman, University of Maryland; Zygmunt Pizlo, Purdue University
We will demonstrate a seeing robot that can: (i) solve the figure-ground organization problem, (ii) navigate within a 3D scene, and (iii) recover the 3D shapes of objects.

Minimap-based navigation with high-fidelity virtual reality

Matthias Pusch and Paul Elliott, WorldViz
Literally walk through high-fidelity virtual environments in full scale and experience a stunning sense of immersion. With the new WorldViz minimap implementation, you can intuitively move yourself to any location and easily explore arbitrarily large virtual spaces, while using only a small physical footprint. Simply don a stereoscopic head-mounted display and you’re free to walk and explore naturally. Interact with virtual objects using the WorldViz PPT Wand hand interaction device.

Motion aftereffect from an image that isn’t moving, on a test image that isn’t there

Mark Georgeson, Aston University, UK
You adapt briefly to sine gratings whose contrast reverses in sawtooth fashion over time. Stationary test gratings then appear to be drifting. Then, on a completely blank screen, you will see gratings moving in the opposite direction. The effects reflect spatial and temporal gradient filters in motion encoding (Anstis, 1990).

New Star Trek Illusion

Li Li, Diederick Niehorster, and Joseph Cheng, Department of Psychology, The University of Hong Kong; Sieu Khuu, School of Optometry, University of New South Wales, Australia
We will show how the perceived direction of self-motion specified by the motion signal in a radial flow pattern (like in Star Trek movies) can be biased toward the center of a static radial form pattern composed of dot pairs. Furthermore, we will show how this bias can be reduced by reducing the global form coherence of the static radial form pattern.

Spinning Ellipses

Gideon Paul Caplovitz and Kyle Killebrew, University of Nevada Reno
Who says spinning an ellipse has to be boring?

The disembodied eye

Jordan Suchow, Maryam Vaziri-Pashkam, and Ken Nakayama, Department of Psychology, Harvard University
When looking at an upside-down face, the eyes eventually appear to flip right-side up, giving the eerie impression that they no longer belong to the face. The same is true of a mouth.

The Emotion Mirror: A Novel Intervention for Facial Expression Production and Perception Training for Children with Autism

Dave Deriso and Josh Susskind, UCSD; Jim Tanaka, University of Victoria; John Herrington and Bob Schultz, CHOP; Marian Bartlett, UCSD
We will present a novel use of machine learning and computer vision to aid in the treatment of autism. This demo is an intervention game in which cartoon characters mimic facial expressions in real time to improve children's ability to produce facial expressions of basic emotions.

The Flickering Wheel

Rodika Sokoliuk and Ramakrishna Chakravarthi, Centre de Recherche Cerveau et Cognition, Toulouse
We will present a new dynamic illusion: The Flickering Wheel, a way to visually experience your brain oscillations. The static circular stimulus, built up of alternating black and white sectors, elicits a flickering sensation in its center which is caused by an interaction between eye movements and alpha oscillations.

The Floating Light

Martin Rolfs, New York University, Department of Psychology; Maryam Vaziri Pashkam, Harvard University, Vision Sciences Laboratory
A bright object in a dim frame dramatically shifts position when both are set in motion, breaking the law of common fate. In a three-dimensional setup, either the stimulus set or the observers will move. We will also illustrate disturbingly strong versions of the related Hess, Pulfrich, and flash-lag effects.

The Incredible Shrinking Peter Illusion

Stuart Anstis, UC San Diego
Reverse phi makes an image of Pete Thompson continually shrink while an image of SA continually expands, though neither changes in mean size. Pete Thompson will doubtless award this his ViperLib prize.

The Leaning Tower Illusion: a 2D illusion?

Aaron Johnson and Bruno Richard, Concordia University
The Leaning Tower Illusion occurs when an image of a tower appears lopsided when placed next to a copy of itself. In this demo, we show that the illusion does not exist when real towers are placed next to each other, but does exist when viewed on a 2D screen.

The Speed Illusion of Trains

James Lu and Anthony Chen, University High School
In this demo, we show that if you have two objects moving at the same speed, the closer one will appear to be moving faster than the one further away.

Transilience Induced Blindness and Selective Filling-in of Artificial Scotoma

Seiichiro Naito, Makoto Katsumura and Ryo Shohara, Human and Information Science, Tokai University, Japan
We devised novel inducing stimuli that make even a large MIB target figure disappear. The MIB target has been identified as a perceptual, or artificial, scotoma. We found that any uniform color fills in, whereas neither simple line segments passing under the targets nor fine textures ever fill in.

Unpredictable slopes

Elnaz Nouri, University of Southern California; Mouna Attarha, The University of Iowa
Careful with the slopes! Here, we will show you that surfaces arranged in particular ways trick the visual system into miscalculating the flow of water. Come over to learn why.

Vectorized LITE

Kenneth Brecher, Boston University
We will show fully vectorized images we have constructed based on visually striking art works, such as Isia Leviant’s “Enigma”, Bridget Riley’s “Fall” and Reginald Neal’s “Squares of Two”, where sharp, large-format printing enhances the psychophysical phenomena. The PDFs can be found at: http://lite.bu.edu.

What do deforming shapes teach us about 3-D structure-from-motion?

Anshul Jain and Qasim Zaidi, Graduate Center for Vision Research, SUNY College of Optometry
You will judge the aspect ratios of flexing and rigid 3-D cylinders to test your ability to extract structure from motion without rigidity assumptions. You will also see how rotating symmetric cylinders around oblique axes creates asymmetric percepts corresponding to asymmetries in the image velocity pattern.

What right angle bias?

Lydia Maniatis, American University
The impression of pictorial depth is often attributed to a bias for perceiving right angles and/or parallel lines. This demo was designed to show that a figure may produce depth effects despite the absence of both of these features in the percept.

ARVO@VSS 2011

What the retina tells us about central visual processing

Time/Room: Friday, May 6, 2011, 5:00 – 6:45 pm, Royal Ballroom 4-5
Chair: Tony Movshon
Presenters: Jonathan Demb, Greg Field, Jay Neitz

This symposium was designed in conjunction with David Williams and Maarten Kamermans as part of the continuing series of exchange symposia that highlight the historical and continuing shared areas of interest of VSS and ARVO. This year, the symposium is at VSS and is intended to bring us some of the latest advances presented at ARVO. There will be three talks, all showcasing aspects of retinal function that are crucial for understanding central visual processing. The speakers are all experts and experienced speakers who will give excellent accounts of their important work.

Explaining receptive field properties at the level of synapses: lessons from the retina

Speaker: Jonathan Demb, University of Michigan

A visual neuron’s receptive field is generated by the combination of its unique pattern of synaptic inputs and intrinsic membrane properties. These cellular mechanisms underlying the receptive field can be studied efficiently in retinal ganglion cells, in vitro.  In this talk, I will describe recent progress in understanding the mechanisms for visual computations and adaptation in retinal circuitry.

High-resolution receptive field measurements in primate retinal ganglion cells, and their implications for color vision

Speaker: Greg Field, Salk Institute

Identifying the connectivity of the myriad neurons within a circuit is key to understanding its function. We developed a novel technique to map the functional connectivity between thousands of cone photoreceptors and hundreds of ganglion cells in the primate retina. These measurements reveal the nature of cone sampling by midget ganglion cells, providing insight into the origins of red-green color opponency.

The effect of genetic manipulation of the photopigments on vision and the implications for the central processing of color

Speaker: Jay Neitz, University of Washington

The processes responsible for color perception are accessible experimentally because of a wealth of genetic variations and because some components lend themselves to genetic manipulation. The addition of an opsin gene, as occurred in the evolution of color vision and as has been done experimentally, produces expanded capacities, providing insight into the underlying neural circuitry.

2011 Public Lecture – Jeremy Wolfe

Jeremy Wolfe

Harvard Medical School

Jeremy Wolfe became interested in visual perception during the course of a summer job at Bell Labs in New Jersey after his senior year in high school. He graduated summa cum laude from Princeton in 1977 with a degree in Psychology and went on to obtain his PhD in 1981 from MIT, studying with Richard Held. His PhD thesis was entitled “On Binocular Single Vision”. Wolfe remained at MIT as a lecturer, assistant professor, and associate professor until 1991. During that period, he published papers on binocular rivalry, visual aftereffects, and accommodation. In the late 1980s, the focus of the lab shifted to visual attention. Since that time, he has published numerous articles on visual search and visual attention. He is, perhaps, best known for the development of the Guided Search theory of visual search. In 1991, Wolfe moved to Brigham and Women’s Hospital where he is Director of the Visual Attention Lab and of the Radiology Department’s Center for Advanced Medical Imaging. He is Professor of Ophthalmology and Radiology at Harvard Medical School.

At present, the Visual Attention Lab works on basic problems in visual attention and their application to such problems as airport security and medical screening. The lab is funded by the US National Institutes of Health, Office of Naval Research, and Department of Homeland Security. The Center for Advanced Medical Imaging is devoted to understanding and improving the consumption of images in clinical radiology.

Wolfe has taught Introductory Psychology, Psychology and Literature, and Sensation and Perception at MIT & Harvard and other universities. He is the Editor of the journal Attention, Perception, & Psychophysics (AP&P, formerly P&P). Wolfe is Past-President of the Eastern Psychological Association and President of Division 3 of the American Psychological Association. He is chair of the Soldier Systems Panel of the Army Research Lab Technical Assessment Board (NRC). He won the Baker Memorial Prize for teaching at MIT in 1989. He is a fellow of the American Assoc. for the Advancement of Science, the American Psychological Association (Div. 3 & 6), and the American Psychological Society, and a member of the Society of Experimental Psychologists. He lives in Newton, Mass., with his wife, Julie Sandell (Professor of Neuroanatomy and Assoc. Provost at Boston U.), three sons (Benjamin, 24; Philip, 22; and Simon, 15), a cat, two snakes, and occasional mice.

The Salami at the Airport: Visual Search Gets Real

Saturday, May 7, 2011, 10:00 – 11:30 am, Renaissance Academy of Florida Gulf Coast University

We are built to search. Our ancestors foraged for food. We search for pens, keys, and cars in parking lots. Some searches are hard and important: think about the search for cancer in x-rays or security threats in luggage. We are remarkably good at search. How do you manage to find the cornstarch in the cupboard? However, we are not as good as we would like to be. How could you miss something (like a gun or a tumor) that is, literally, right in front of your eyes? How might we reduce errors in socially important search tasks?

About the VSS Public Lecture

The annual public lecture represents the mission and commitment of the Vision Sciences Society to promote progress in understanding vision, and its relation to cognition, action and the brain. Education is basic to our science, and as scientists we are obliged to communicate the results of our work, not only to our professional colleagues but to the broader public. This lecture is part of our effort to give back to the community that supports us.

Jointly sponsored by VSS and the Renaissance Academy of Florida Gulf Coast University.

2011 Student Workshops

Student Career Development Workshop

Chair: Andrew Welchman, Birmingham University

Sunday, May 8, 12:45 – 1:30 pm, Room TBD

After a brief presentation by Dr. Welchman, the floor will be open for questions and discussion. Dr. Welchman will cover topics related to making career choices during the transition from Ph.D. student to postdoc and how to plan your postdoc period. Several other senior scientists will participate: Alex Huk, University of Texas at Austin; Anya Hurlbert, University of Newcastle upon Tyne; and Cathleen Moore, University of Iowa.

Student Publishing Workshop

Chair: Andrew B. Watson, Editor-in-Chief of the Journal of Vision

Sunday, May 8, 12:45 – 1:30 pm, Room TBD

This workshop will start with a brief overview. Andrew Watson will present some advice on how to select the right journal for your publication, how to visually present your data most effectively, and how to efficiently manage the reviewing process. Several other leading scientists will be available for questions and discussion: Marty Banks, University of California, Berkeley; Concetta Morrone, University of Pisa; and Cong Yu, Beijing Normal University.

2011 Young Investigator – Alexander C. Huk

Alexander C. Huk

Neurobiology & Center for Perceptual Systems
The University of Texas at Austin

Dr. Alexander C. Huk has been chosen as the 2011 winner of the Elsevier/VSS Young Investigator Award. Dr. Huk is an Associate Professor of Neurobiology in the Center for Perceptual Systems at the University of Texas at Austin. Dr. Huk impressed the committee with the broad range of techniques he has brought to bear on fundamental questions of visual processing and decision making. Studying both human and non-human primates with psychophysical, electrophysiological and fMRI approaches, Dr. Huk has made significant, influential and ground-breaking contributions to our understanding of the neural mechanisms involved in motion processing and the use of sensory information as a basis for perceptual decisions. His contributions are outstanding in their breadth as well as their impact on the field, and they exemplify the unique strength of the VSS community in integrating behavioral and neural approaches to vision science.

Elsevier/Vision Research Article

Some new perspectives in the primate motion pathway

Sunday, May 8, 7:00 pm, Royal Palm Ballroom

The dorsal (“where”) stream of visual processing in primates stands as one of the most fruitful domains for bridging neural activity with perception and behavior. In early stages of cortical processing, neurophysiology and psychophysics have elucidated the transformations from dynamic patterns of light falling upon the retinae, to simple 1D motion signals in primary visual cortex, and then to the disambiguated 2D motions of complex patterns and objects in the middle temporal area (MT). In later stages, the motion signals coming from MT have been shown to be accumulated over time in parietal areas such as LIP, and this decision-related activity has been quantitatively linked to behavioral outputs (i.e., the speed and accuracy of perceptual decisions). In this talk, I’ll revisit this pathway and suggest new functions in both the visual and decision stages. In the first part, I’ll describe new results revealing how 3D motion is computed in the classic V1-MT circuit. In the second part, I’ll address whether LIP responses are really a “neural correlate” of perceptual decision-making, or instead reflect a more general type of sensorimotor integration. These lines of work suggest that by building on the already well-studied primate dorsal stream, both psychophysics and physiology can investigate richer perceptual functions and entertain more complex underlying mechanisms.

 

2011 Keynote – Daniel M. Wolpert

Daniel M. Wolpert

Professor of Engineering, University of Cambridge

Audio and slides from the 2011 Keynote Address are available on the Cambridge Research Systems website.

Probabilistic models of human sensorimotor control

Saturday, May 7, 2011, 7:00 – 8:15 pm, Royal Palm Ballroom 4-5

The effortless ease with which we move our arms, our eyes, even our lips when we speak masks the true complexity of the control processes involved. This is evident when we try to build machines to perform human control tasks. While computers can now beat grandmasters at chess, no computer can yet control a robot to manipulate a chess piece with the dexterity of a six-year-old child. I will review our recent work on how humans learn to make skilled movements, covering probabilistic models of learning, including Bayesian and structural learning, how the brain makes and uses motor predictions, and the interaction between decision making and sensorimotor control.
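As a concrete illustration of the probabilistic machinery referred to above, the textbook Gaussian prior-likelihood combination below gives the minimum-variance estimate of a state from a noisy observation; it illustrates the general Bayesian idea rather than any specific model from the talk:

```latex
% Combining a Gaussian prior N(\mu_p, \sigma_p^2) with a Gaussian sensory likelihood N(x, \sigma_s^2)
\hat{\mu}_{\mathrm{post}} = \frac{\sigma_s^{2}\,\mu_p + \sigma_p^{2}\,x}{\sigma_p^{2} + \sigma_s^{2}},
\qquad
\sigma_{\mathrm{post}}^{2} = \frac{\sigma_p^{2}\,\sigma_s^{2}}{\sigma_p^{2} + \sigma_s^{2}}
```

The estimate is drawn toward the prior when sensory noise is high and toward the observation when it is low, the signature pattern reported in much of the sensorimotor learning literature.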

Biography

Daniel Wolpert is Professor of Engineering at the University of Cambridge and a Fellow of Trinity College. Daniel’s research focuses on computational and experimental approaches to human sensorimotor control. Daniel read medical sciences at Cambridge and clinical medicine at Oxford. After working as a medical doctor for a year he completed a D. Phil. in the Physiology Department in Oxford. He then worked as a postdoctoral fellow and Fulbright Scholar at MIT, before moving to the Institute of Neurology, UCL. In 2005 he took up his current post in Cambridge. He was elected a Fellow of the Academy of Medical Sciences in 2004 and was awarded the Royal Society Francis Crick Prize Lecture (2005) and has given the Fred Kavli Distinguished International Scientist Lecture at the Society for Neuroscience (2009). Further details can be found on www.wolpertlab.com.

S6 – Integrating local motion information

Friday, May 6, 2:30 – 4:30 pm, Royal Ballroom 6-8

Organizer: Duje Tadin, University of Rochester, Center for Visual Science

Presenters: Xin Huang, Department of Physiology, University of Wisconsin; Duje Tadin, University of Rochester, Center for Visual Science; David R. Badcock, School of Psychology, The University of Western Australia; Christopher C Pack, Montreal Neurological Institute, McGill University; Shin’ya Nishida, NTT Communication Science Laboratories; Alan Johnston, Cognitive, Perceptual and Brain Sciences, University College London

Symposium Description

Since Adelson and Movshon’s seminal 1982 paper on the phenomenal coherence of moving patterns, a large literature has accumulated on how the visual system integrates local motion estimates to represent true object motion. Although this research topic can be traced back to the early 20th century, a number of key questions remain unanswered. Specifically, we still have an incomplete understanding of how ambiguous and unambiguous motions are integrated and how local motion estimates are grouped and segmented to represent global object motions. A key problem for motion perception involves establishing the appropriate balance between integration and segmentation of local motions. Local ambiguities require motion integration, while perception of moving objects requires motion segregation. These questions form the core theme for this workshop that includes both psychophysical (Tadin, Nishida, Badcock and Johnston) and neurophysiological research (Pack and Huang).

Presentations by Huang and Tadin will show that center-surround mechanisms play an important role in adaptively adjusting the balance between integration and segmentation. Huang reached this conclusion by studying area MT and the effects of unambiguous motion presented to the receptive field surround on the neural response to an ambiguous motion in the receptive field. Tadin reports that the degree of center-surround suppression increases with stimulus visibility, promoting motion segregation at high contrast and spatial summation at low contrast. More recently, Tadin investigated the neural correlates of center-surround interactions and their role in figure-ground segregation.

Understanding how we perceive natural motion stimuli requires an understanding of how the brain solves the aperture problem. Badcock showed that spatial vision plays an important role in solving this motion processing problem. Specifically, he showed that oriented motion streaks and textural cues play a role in early motion processing. Pack approached this question by recording single-cell responses at various stages along the dorsal pathway. Results with plaid stimuli show a tendency for increased motion integration that does not necessarily correlate with the perception of the stimulus. Data from local field potentials recorded simultaneously suggest that the visual system solves the aperture problem multiple times at different hierarchical stages, rather than serially.

Finally, Nishida and Johnston will report new insights into the integration of local motion estimates over space. Nishida developed a global Gabor array stimulus, which appears to cohere when the local speeds and orientations of the Gabors are consistent with a single global translation. He found that the visual system adopts different strategies for spatial pooling over ambiguous (Gabor) and unambiguous (plaid) array elements. Johnston investigated new strategies for combining local estimates, including the harmonic vector average, and has demonstrated coherence in expanding and rotating Gabor array displays – implying that only a few local interactions may be all that is required to solve the aperture problem in complex arrays.

The symposium will be of interest to faculty and students working on motion, who will benefit from an integrated survey of new approaches to the current central question in motion processing, and a general audience interested in linking local and global processing in perceptual organization.

Presentations

Stimulus-dependent integration of motion signals via surround modulation

Xin Huang, Department of Physiology, University of Wisconsin; Thomas D. Albright, Vision Center Laboratory, Salk Institute for Biological Studies; Gene R. Stoner, Vision Center Laboratory, Salk Institute for Biological Studies

The perception of visual motion plays a pivotal role in interpreting the world around us. To interpret visual scenes, local motion features need to be selectively integrated and segmented into distinct objects. Integration helps to overcome motion ambiguity in the visual image by spatial pooling, whereas segmentation identifies differences between adjacent moving objects. In this talk we will summarize our recent findings regarding how motion integration and segmentation may be achieved via ”surround modulation” in visual cortex and will discuss the remaining challenges. Neuronal responses to stimuli within the classical receptive field (CRF) of neurons in area MT (V5) can be modulated by stimuli in the CRF surround. Previous investigations have reported that the directional tuning of surround modulation in area MT is mainly antagonistic and hence consistent with segmentation. We have found that surround modulation in area MT can be either antagonistic or integrative depending upon the visual stimulus. Furthermore, we have found that the direction tuning of the surround modulation is related to the response magnitude: stimuli eliciting the largest responses yield the strongest antagonism and those eliciting the smallest responses yield the strongest integration. We speculate that input strength is, in turn, linked with the ambiguity of the motion present within the CRF – unambiguously moving features usually evoke stronger neuronal response than do ambiguously moving features. Our modeling study suggests that changes in MT surround modulation result from shifts in the balance between directionally tuned excitation and inhibition mediated by changes in input strength.

Center-surround interactions in visual motion perception

Duje Tadin, University of Rochester, Center for Visual Science

Visual processing faces two conflicting demands: integration and segmentation (Braddick, 1993). In motion, spatial integration is required by the noisy inputs and local velocity ambiguities. Local velocity differences, however, provide key segregation information. We demonstrated that the balance between integrating and differentiating processes is not fixed, but depends on visual conditions: At low-contrast, direction discriminations improve with increasing size – a result indicating spatial summation of motion signals. At high-contrast, however, motion discriminations worsen as the stimulus size increases – a result we describe as spatial suppression (Tadin et al., 2003). This adaptive integration of motion signals over space might be vision’s way of dealing with the contrasting requirements of integration and segmentation, where suppressive mechanisms operate only when the sensory input is sufficiently strong to guarantee visibility. In subsequent studies, we have replicated and expanded these results using a range of methods, including TMS, temporal reverse correlation, reaction times, motion-aftereffect, binocular rivalry and modeling. Based on the converging evidence, we show that these psychophysical results could be linked to suppressive center-surround receptive fields, such as those in area MT.

What are functional roles of spatial suppression? Special population studies revealed that spatial suppression is weaker in elderly and schizophrenic patients – a result responsible for their paradoxically better-than-normal performance in some conditions. Moreover, these subjects also exhibit deficits in figure-ground segregation, suggesting a possible functional connection. In a recent study, we directly addressed this possibility and report experimental evidence for a functional link between surround suppression and motion segregation.

The role of form cues in motion processing

David R. Badcock, School of Psychology, The University of Western Australia; Edwin Dickinson, University of Western Australia; Allison McKendrick, University of Melbourne; Anna Ma-Wyatt, University of Adelaide; Simon Cropper, University of Melbourne

The visual system initially collects spatially localised estimates of motion and then needs to interpret these local estimates to generate descriptions of 2D object motion and self-motion. Sinusoidal gratings have commonly been employed to study the perception of motion, and while these stimuli are useful for investigating the properties of spatial- and temporal-frequency tuned detectors, they are limited. They remove textural and shape cues that are usually present in natural images, which has led to models of motion processing that ignore those cues. However, the addition of texture and shape information can dramatically alter perceived motion direction.

Recent work has shown that orientation-tuned simple cells are stimulated by moving patterns because of their extended temporal integration. This response (sometimes called motion streaks) allows orientation-tuned detectors to contribute to motion perception by signalling the axis of motion. The orientation cue is influential even if second-order streaks which could not have been produced by image smear are employed. This suggests that any orientation cue may be used to determine local direction estimates: a view that is extended to argue that aperture shape itself may have an impact by providing orientation cues which are incorporated into the direction estimation process. Oriented textural cues will also be shown to distort direction estimates, even though current models suggest they should not. The conclusion is that pattern information has a critical role in early motion processing and should be incorporated more systematically into models of human direction perception.

Pattern motion selectivity in macaque visual cortex

Christopher C Pack, Montreal Neurological Institute, McGill University

The dorsal visual pathway in primates has a hierarchical organization, with neurons in V1 coding local velocities and neurons in the later stages of the extrastriate cortex encoding complex motion patterns. In order to understand the computations that occur along each stage of the hierarchy, we have recorded from single neurons in areas V1, MT, and MST of the alert macaque monkey. Results with standard plaid stimuli show that pattern motion selectivity is, not surprisingly, more common in area MST than in MT or V1. However, similar results were found with plaids that were made perceptually transparent, suggesting that neurons at more advanced stages of the hierarchy tend to integrate motion signals obligatorily, even when the composition of the stimulus is more consistent with the motion of multiple objects. Thus neurons in area MST in particular show a tendency for increased motion integration that does not necessarily correlate with the (presumptive) perception of the stimulus. Data from local field potentials recorded simultaneously show a strong bias toward component selectivity, even in brain regions in which the spiking activity is overwhelmingly pattern selective. This suggests that neurons with greater pattern selectivity are not overrepresented in the outputs of areas like V1 and MT, but rather that the visual system computes pattern motion multiple times at different hierarchical stages. Moreover, our results are consistent with the idea that LFPs can be used to estimate different anatomical contributions to processing at each visual cortical stage.

Intelligent motion integration across multiple stimulus dimensions

Shin’ya Nishida, NTT Communication Science Laboratories; Kaoru Amano, The University of Tokyo; Kazushi Maruya, NTT; Mark Edwards, Australian National University; David R. Badcock, University of Western Australia

In human visual motion processing, image motion is first detected by one-dimensional (1D), spatially local, direction-selective neural sensors. Each sensor is tuned to a given combination of position, orientation, spatial frequency and feature type (e.g., first-order and second-order). To recover the true 2-dimensional (2D) and global direction of moving objects (i.e., to solve the aperture problem), the visual system integrates motion signals across orientation, across space and possibly across the other dimensions. We investigated this multi-dimensional motion integration process, using global motion stimuli comprised of numerous randomly-oriented Gabor (1D) or Plaid (2D) elements (for the purpose of examining integration across space, orientation and spatial frequency), as well as diamond-shape Gabor quartets that underwent rigid global circular translation (for the purpose of examining integration across spatial frequency and signal type). We found that the visual system adaptively switches between two spatial integration strategies — spatial pooling of 1D motion signals and spatial pooling of 2D motion signals — depending on the ambiguity of local motion signals. MEG showed correlated neural activities in hMT+ for both 1D pooling and 2D pooling. Our data also suggest that the visual system can integrate 1D motion signals of different spatial frequencies and different feature types, but only when form conditions (e.g., contour continuity) support grouping of local motions. These findings indicate that motion integration is a complex and smart computation, and presumably this is why we can properly estimate motion flows in a wide variety of natural scenes.

Emergent global motion

Alan Johnston, Cognitive, Perceptual and Brain Sciences, University College, London; Andrew Rider, Cognitive, Perceptual and Brain Sciences, University College, London; Peter Scarfe, Cognitive, Perceptual and Brain Sciences, University College, London

The perception of object motion requires the integration of local estimates of image motion across space. The two general computational strategies that have been offered to explain spatial integration can be classified as hierarchical or lateral-interactive. The hierarchical model assumes local motion estimates at a lower point in the hierarchy are integrated by neurons with large receptive fields. These neurons could make use of the fact that, due to the aperture problem, the 2D distribution of local velocities for a rigid translation falls on a circle through the origin in velocity space. However, the challenge for this approach is how to segment and represent the motion of different objects or textures falling within the receptive field, including how to represent object boundaries. Apparent global rotations and dilations can be instantiated in randomly oriented global Gabor arrays, suggesting that the aperture problem can be resolved through local interactions. The challenge for this approach is to discover local rules that will allow global organizations to emerge. These rules need to incorporate the status of ambiguous and unambiguous motion signals to explain how unambiguous 2D motion cues (e.g. at corners) influence the computed global motion field. Here we will describe a simple least squares approach to local integration, demonstrate its effectiveness in dealing with the dual problems of integration and segmentation, and consider its limitations.
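As a minimal sketch of least-squares pooling of aperture-limited measurements (a generic illustration under the rigid-translation assumption, not necessarily the authors' algorithm), each element i contributes only the velocity component along its normal n_i, giving the constraint n_i · v = s_i; stacking all constraints and solving in the least-squares sense recovers the global velocity:

```python
import numpy as np

def global_velocity_least_squares(normals, normal_speeds):
    """Estimate a single global 2D velocity from local 1D (aperture-limited) motions.

    normals:       (N, 2) unit vectors normal to each element's orientation
    normal_speeds: (N,) measured speed of each element along its normal

    Each element constrains only the component of the global velocity v along
    its normal (normals[i] . v = normal_speeds[i]); least squares pools them all.
    """
    N = np.asarray(normals, dtype=float)
    s = np.asarray(normal_speeds, dtype=float)
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# Toy example: 50 randomly oriented elements consistent with a rigid translation v = (3, 1)
rng = np.random.default_rng(0)
angles = rng.uniform(0, np.pi, 50)                      # random element orientations
normals = np.column_stack([np.cos(angles), np.sin(angles)])
speeds = normals @ np.array([3.0, 1.0]) + rng.normal(0, 0.05, 50)
print(global_velocity_least_squares(normals, speeds))   # approximately [3.0, 1.0]
```

Segmentation is exactly where such a single-velocity fit breaks down: large residuals flag elements that belong to a different object or motion field.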

 

S5 – Prediction in Visual Processing

Friday, May 6, 2:30 – 4:30 pm, Royal Ballroom 4-5

Organizers: Jacqueline M. Fulvio, Paul R. Schrater; University of Minnesota

Presenters: Jacqueline M. Fulvio, University of Minnesota; Antonio Torralba, Massachusetts Institute of Technology; Lars Muckli, University of Glasgow, UK; Eileen Kowler, Rutgers University; Doug Crawford, York University; Robert A. Jacobs, University of Rochester

Symposium Description

In a world constantly in flux, we are faced with uncertainty about the future and must make predictions about what lies ahead. However, research on visual processing is dominated by understanding information processing rather than future prediction – it lives in the present (and sometimes the past) without considering what lies ahead.

Yet prediction is commonplace in natural vision. In walking across a busy street in New York City, for example, successful prediction can mean the difference between life and death for the pedestrian and determines the employment status of the cab driver.

In fact, prediction plays an important role in almost all aspects of vision with a dynamic component, including object interception, eye-movement planning, visually-guided reaching, visual search, and rapid decision-making under risk, and is implicit in “top-down” processing in the interpretation of static images (e.g. object recognition, shape from shading, etc.). Prediction entails combining current sensory information with an internal model (“beliefs”) of the world to fill informational gaps and derive estimates of the world’s future “hidden” state. Naturally, the success of the prediction is limited by the quality of the information and the internal model. This has been demonstrated by a variety of behaviors described above.

The symposium will focus on the importance of analyzing the predictive components of human behavior to understand visual processing in the brain. The prevalence of prediction suggests there may be a commonality in both computational and neural structures supporting it. We believe that many problems in vision can be profitably recast in terms of models of prediction, providing new theoretical insights and potential transfer of knowledge.

Speakers representing a variety of research areas will lead a discussion under the umbrella of prediction that (i) identifies characteristics and limitations of predictive behavior; (ii) re-frames outstanding questions in terms of predictive modeling; & (iii) outlines experimental manipulations of predictive task components for future work. The symposium is expected to spark interest among all areas represented at the conference with the goal of group discovery of a common set of predictive principles used by the brain as the discussion unfolds.

Presentations

Predictive processing through occlusion

Jacqueline M. Fulvio, University of Minnesota; Paul R. Schrater, University of Minnesota

Missing information is a challenge for sensory motor processing. Missing information is ubiquitous – portions of sensory data may be occluded due to conditions like scene clutter and camouflage; or missing at the present time – task demands may require anticipation of future states, such as when we negotiate a busy intersection. Rather than being immobilized by missing information, predictive processing fills in the gaps so we may continue to act in the world. While much of perceptual-motor research implicitly studies predictive processing, a specific set of predictive principles used by the brain has not been adequately formalized. I will draw upon our recent work on visual extrapolation, which requires observers to predict an object’s location behind an occluder as well as its reemergence point. Through the results, I will demonstrate that these predictions are derived from model-based forward look ahead—current sensory data is applied to an internal model of the world. I will also show that predictions are subject to performance trade-offs, such that the choice of internal model may be a flexible one that appropriately weights the quality (i.e. uncertainty) of the sensory measurements and the quality (i.e. complexity) of the internal model. Finally, having established the role of internal models in prediction, I will conclude with a discussion about how prediction may be used as a tool in the experimental context to encourage general model learning, with evidence from our recent work on perceptual learning.
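One minimal way to make "model-based forward look-ahead" concrete is a constant-velocity internal model fit to the visible samples and propagated through the occlusion; this is a generic sketch under that assumption, not the specific model tested in the talk:

```python
import numpy as np

def extrapolate_position(positions, times, t_future):
    """Predict where an occluded object will reappear, assuming a constant-velocity
    internal model fit (least squares) to the samples seen before occlusion.

    positions: (N, 2) observed x, y positions before occlusion
    times:     (N,) corresponding time stamps
    t_future:  time at which a prediction is required
    """
    positions = np.asarray(positions, dtype=float)
    times = np.asarray(times, dtype=float)
    coeffs = np.polyfit(times, positions, deg=1)   # row 0: velocity, row 1: intercept (per axis)
    velocity, intercept = coeffs[0], coeffs[1]
    return intercept + velocity * t_future

# Toy example: noisy samples of an object moving at (2, -1) units/s, occluded after t = 1 s
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 20)
observed = np.column_stack([2 * t, -1 * t]) + rng.normal(0, 0.01, (20, 2))
print(extrapolate_position(observed, t, t_future=1.5))   # roughly [3.0, -1.5]
```

Richer internal models (for example, ones that also estimate acceleration) trade lower bias for higher variance, which is the quality-versus-complexity trade-off described above.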

Predicting the future

Antonio Torralba, Massachusetts Institute of Technology; Jenny Yuen, Massachusetts Institute of Technology

In this talk I will make a link with computer vision and recent techniques for addressing the problem of predicting the future. Some of the representations used to address this problem in computer vision are reminiscent of current views on scene understanding in humans. When given a single static picture, humans can not only interpret the instantaneous content captured by the image, but can also infer the chain of dynamic events that are likely to happen in the near future. Similarly, when a human observes a short video, it is easy to decide if the event taking place in the video is normal or unexpected, even if the video depicts an unfamiliar place for the viewer. This is in contrast with work in computer vision, where current systems rely on thousands of hours of video recorded at a single place in order to identify what constitutes an unusual event. In this talk I will discuss techniques for predicting the future based on a large collection of stored memories. We show how, relying on large collections of videos and using global image features, such as the ones used to model fast scene recognition, we can index events stored in memory that are similar to the query, and how we can build a simple model of the distribution of expected motions. Consequently, the model can make predictions of what is likely to happen in the future, as well as evaluate how unusual a particular event is.
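A schematic of the retrieval-based approach described above might look like the sketch below; it assumes a precomputed database of global scene descriptors paired with the motion observed after each stored frame, the feature extraction itself is stubbed out, and all names are illustrative rather than taken from the authors' system:

```python
import numpy as np

def predict_motion(query_feature, db_features, db_motions, k=10):
    """Predict an expected motion field for a new image by retrieving the k stored
    scenes with the most similar global descriptors and averaging their motion.

    query_feature: (D,) global descriptor of the query image (e.g., a GIST-like feature)
    db_features:   (M, D) descriptors of stored video frames
    db_motions:    (M, H, W, 2) motion fields observed after each stored frame
    """
    dists = np.linalg.norm(db_features - query_feature, axis=1)
    nearest = np.argsort(dists)[:k]
    # Averaging is a crude stand-in for modeling the full distribution of expected motions.
    return db_motions[nearest].mean(axis=0)

# Toy usage with random arrays standing in for a real video database
rng = np.random.default_rng(1)
db_features = rng.normal(size=(500, 512))
db_motions = rng.normal(size=(500, 8, 8, 2))
expected = predict_motion(rng.normal(size=512), db_features, db_motions)
print(expected.shape)   # (8, 8, 2)
```

Comparing an observed motion against the retrieved set also gives a natural score for how unusual an event is.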

Predictive coding – contextual processing in primary visual cortex V1

Lars Muckli, University of Glasgow, UK; Petra Vetter, University of Glasgow, UK; Fraser Smith, University of Glasgow, UK

Primary visual cortex (V1) is often characterized by the receptive field properties of its feed-forward input. Direct thalamo-fugal input to any V1 cell, however, is less than 5% (Douglas and Martin 2007), and much of V1 response variance remains unexplained. We propose that one of the core functions of cortical processing is to predict upcoming events based on contextual processing. To gain a better understanding of contextual processing in the cortex we focused our fMRI studies on non-stimulated retinotopic regions of early visual cortex (2). We investigated activation along the non-stimulated long-range apparent motion path (1), occluded a visual quarterfield of a natural visual scene (3), or blindfolded our subjects and presented environmental sounds (4). We were able to demonstrate predictive activity along the illusory apparent motion path (1), to use decoding to classify natural scenes from non-stimulated regions in V1 (3), and to decode environmental sounds from V2 and V3, but not from V1 (4). Is this contextual processing useful for predicting upcoming visual events? To investigate predictability we used our contextual stimuli (apparent motion) as the prime stimuli and tested with a probe stimulus along the apparent motion path, finding that predicted stimuli are processed more efficiently – leading to less fMRI signal and better detectability (1). In summary, we have found brain imaging evidence that is consistent with the hypothesis of predictive coding in early visual areas.

Prediction in oculomotor control

Eileen Kowler, Rutgers University; Cordelia Aitkin, Rutgers University; Elio Santos, Rutgers University; John Wilder, Rutgers University

Eye movements are crucial for vision. Saccadic eye movements bring the line of sight to selected objects, and smooth pursuit maintains the line of sight on moving objects. A major potential obstacle to achieving accurate and precise saccadic or pursuit performance is the inevitable sensorimotor delay that accompanies the processing of the position or motion of visual signals.  To overcome the deleterious effects of such delays, eye movements display a remarkable capacity to respond on the basis of predicted sensory signals. Behavioral and neurophysiological studies over the past several years have addressed the mechanisms responsible for predictive eye movements. This talk will review key developments, focusing on anticipatory smooth eye movements (smooth eye movements in the direction of the expected future motion of a target).  Anticipatory smooth eye movements (a) can be triggered by high-level, symbolic cues that signal the future path of a target, and (b) are generated by neural pathways distinct from those responsible for maintained smooth pursuit. When the predictability of the target motion decreases, anticipatory smooth eye movements are not suppressed, but rather reflect expectations about the likely future path of the target estimated on the basis of the recent past history of motions.  Comparable effects of expectations have been shown to apply to the temporal pattern of saccades. The pervasive influence of prediction on oculomotor control suggests that one of the more important benefits of the ability to generate predictions from either explicit cues or statistical estimates is to ensure accurate and timely oculomotor performance.

Calculation of accurate 3-D reach commands from initial retinal and extra-retinal conditions

Doug Crawford, York University; Gunnar Blohm, Queen’s University

Reach movements can be guided in ‘closed loop’ fashion, using visual feedback, but in biological systems such feedback is relatively slow. Thus rapid movements require ‘open loop’ transformations based on initial retinal and extra-retinal conditions. This is complicated, because the retina is attached to the interior surface of a sphere (the eye) that rotates three-dimensionally with respect to the world, the other eye, and effectors such as the reach system. Further, head movement causes the eyes to translate with respect to both the visual world and the shoulder. Optimism continues to abound that linear approximations will capture the main properties of this system (i.e., most visuomotor studies implicitly treat the retina as a flat, shifting plane), but unfortunately this ignores several fundamentals that the real brain must deal with. Amongst these is the need for eye and head orientation signals to solve the spatial relationships between patterns of stimulation on the two retinas (for depth vision) and between the external world and motor effectors. Here we will describe recent efforts to 1) understand the geometric problems that the brain encounters in planning reach, 2) determine if the brain actually solves these problems, and 3) model how the brain might solve these problems.

Are People Successful at Learning Sequences of Actions on a Perceptual Matching Task?

Robert A. Jacobs, University of Rochester; Reiko Yakushijin, Aoyama Gakuin University

Human subjects were trained to perform a perceptual matching task requiring them to manipulate comparison objects until they matched target objects using the fewest manipulations possible. Efficient performance of this task requires an understanding of the hidden or latent causal structure governing the relationships between actions and perceptual outcomes. We use two benchmarks to evaluate the quality of subjects’ learning. One benchmark is based on optimal performance as calculated by a dynamic programming procedure. The other is based on an adaptive computational agent that uses a reinforcement learning method known as Q-learning to learn to perform the task. Our analyses suggest that subjects were indeed successful learners. In particular, they learned to perform the perceptual matching task in a near-optimal manner (i.e., using a small number of manipulations) at the end of training. Subjects were able to achieve near-optimal performance because they learned, at least partially, the causal structure underlying the task. In addition, subjects’ performances were broadly consistent with those of model-based reinforcement learning agents that built and used internal models of how their actions influenced the external environment. On the basis of these results, we hypothesize that people will achieve near-optimal performances on tasks requiring sequences of actions — especially sensorimotor tasks with underlying latent causal structures — when they can detect the effect of their actions on the environment, and when they can represent and reason about these effects using an internal mental model.
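For readers unfamiliar with the Q-learning benchmark mentioned above, the sketch below shows the standard tabular update rule in minimal form; the toy environment is a placeholder, not the authors' perceptual matching task or their actual agent:

```python
import numpy as np

def q_learning(step, n_states, n_actions, n_episodes=500,
               alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Minimal tabular Q-learning. `step(state, action)` must return
    (next_state, reward, done) and stands in for the task environment."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:                 # explore
                action = int(rng.integers(n_actions))
            else:                                      # exploit
                action = int(np.argmax(Q[state]))
            next_state, reward, done = step(state, action)
            # Move Q(state, action) toward the bootstrapped one-step target
            target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q

# Toy chain environment: repeatedly choosing action 1 ("right") reaches the rewarded state 4
def toy_step(state, action):
    nxt = min(state + 1, 4) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == 4 else -0.01                # small step cost encourages progress
    return nxt, reward, nxt == 4

Q = q_learning(toy_step, n_states=5, n_actions=2)
print(np.argmax(Q, axis=1))   # policy for states 0-3 is action 1 ("move right"); state 4 is terminal
```

Comparing such a model-free learner with a dynamic-programming optimum is one way to bracket how efficiently human subjects learn a task of this kind.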

 

S4 – Ongoing fluctuation of neural activity and its relationship to visual perception

Friday, May 6, 2:30 – 4:30 pm, Royal Ballroom 1-3

Organizer: Hakwan Lau, Columbia University, Donders Institute, Netherlands

Presenters: Biyu Jade He, National Institutes of Health; Charles Schroeder, Nathan S. Kline Institute for Psychiatric Research, Columbia University; Andreas Kleinschmidt, INSERM-CEA, NeuroSpin, Gif/Yvette, France; Hakwan Lau, Columbia University, Donders Institute, Netherlands; Tony Ro, City University of New York

Symposium Description

Even in the absence of external stimulation, the visual system shows ongoing fluctuations of neural activity. While some early theoretical analyses suggest that the impact of such fluctuations in activity on visual perception may be minimal, recent empirical results have given new insights on this issue. We will review this evidence and the new theoretical perspectives in this symposium. Below are a few key themes:

– Coverage of multiple experimental methods and fluctuations in activity at different time scales:

The 5 speakers will discuss experiments that employ different methods to measure ongoing fluctuations in neural activity, such as human fMRI (functional magnetic resonance imaging) in patients and healthy subjects, intracranial cortical EEG (electroencephalography) in presurgical epileptics, combined use of TMS (transcranial magnetic stimulation) and optical imaging, and electrophysiological studies in non-human primates. These methods investigate fluctuations in neural activity at different time scales, from 10-20 seconds per cycle to the sub-second oscillatory range. The relationship between these different activities will be discussed.

– What do ongoing activities tell us about the mechanisms of attention?

In addition to discussing the nature of ongoing activity and its impact on perception, several speakers will also use ongoing activity as a tool to understand the basic mechanisms of attention and awareness.

– Implications for clinical studies of perception:

Several speakers will discuss data collected from presurgical epilepsy patients, in whom intracranial cortical EEG data were recorded. The nature of ongoing fMRI activity in stroke patients will also be discussed.

– Debate over theoretical perspectives and interpretations of the data:

The different speakers will present competing theoretical perspectives on the nature of ongoing activity, as well as alternative interpretations of the same results. This will promote an exchange of ideas and hopefully lead to consensus on and illumination of the issues.

The nature of ongoing neural activity and its relationship to perception should be relevant to all VSS attendees. We aim to reach a broad audience, as we will be covering different experimental paradigms and empirical methods. We expect the symposium to be especially interesting for researchers specializing in attention and awareness. Also, although the topic is primarily neural activity, one focus of the symposium is its relationship to behavior. Hence some speakers will also present behavioral studies inspired by the investigation of ongoing neural activity, which will be of interest to many. Specifically, some talks will discuss the implications of our understanding of ongoing neural activity for experimental design.

Presentations

Spontaneous fMRI signals and slow cortical potentials in perception

Biyu Jade He, National Institutes of Health

The brain is not a silent, complex input/output system waiting to be driven by external stimuli; instead, it is a closed, self-referential system operating on its own, with sensory information modulating rather than determining its activity. Ongoing spontaneous brain activity consumes the majority of the brain's energy budget, maintains the brain's functional architecture, and makes predictions about the environment and the future. I will discuss some recent research on the functional significance and the organization of spontaneous brain activity, with implications for perception research. The past decade has seen rapid development in the field of resting-state fMRI networks. In one of the first studies that established the functional significance of these networks, we showed that strokes disrupted large-scale networks in the spontaneous fMRI signals, and that the degree of such disruption predicted the patients' behavioral impairment (spatial neglect). Next, we identified the neurophysiological signal underlying the coherent patterns in the spontaneous fMRI signal, the slow cortical potential (SCP). The SCP is a novel neural correlate of the fMRI signal; existing evidence suggests that it most likely underlies both spontaneous fMRI signals and task-evoked fMRI responses. I will further discuss some existing data suggesting a potential involvement of the SCP in conscious awareness, including the influence of spontaneous SCP fluctuations on visual perception. Lastly, given that both the SCP and the fMRI signal display a power-law distribution in their temporal power spectra, I will argue that the role of scale-free brain activity in perception and consciousness warrants future investigation.
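As a point of reference for the scale-free terminology above, a power-law temporal power spectrum is conventionally written as

P(f) \propto f^{-\beta}, \qquad \beta > 0,

where P(f) is spectral power at temporal frequency f. The exponent \beta here is generic textbook notation rather than a value reported in the talk; the defining property is that rescaling the frequency axis changes the power only by a constant factor, so the signal has no characteristic temporal scale.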

Tuning of the neocortex to the temporal dynamics of attended event streams

Charles Schroeder, Nathan S. Kline Institute for Psychiatric Research, Columbia University

When events occur in rhythmic streams, attention may use the entrainment of neocortical excitability fluctuations (oscillations) to the tempo of a task-relevant stream to promote its perceptual selection and its representation in working memory. To test this idea, we studied humans and monkeys using an auditory-visual stream selection paradigm. Electrocorticographic (ECoG) activity sampled from subdural electrodes in epilepsy patients showed that: 1) attentional modulation of oscillatory entrainment operates in a network of areas including auditory, visual, posterior parietal, inferior motor, inferior frontal, cingulate and superior midline frontal cortex, 2) the strength of oscillatory entrainment depends on the predictability of the stimulus stream, and 3) these effects are dissociable from attentional enhancement of evoked activity. Fine-grained intracortical analysis of laminar current source density profiles and concomitant neuronal firing patterns in monkeys showed that: 1) along with responses "driven" by preferred-modality stimuli (e.g., visual stimuli in V1), attended non-preferred-modality stimuli (e.g., auditory stimuli in V1) could "modulate" local cortical excitability by entraining ongoing oscillatory activity, and 2) while this "heteromodal" entrainment occurred in the extragranular layers, the granular layers remained phase-locked to the stimulus stream in the preferred modality. Thus, attention may use phase modulation (coherence vs opposition) to control the projection of information from input to output layers of cortex. On a regional scale, oscillatory entrainment across a network of brain regions may provide a mechanism for a sustained and distributed neural representation of attended event patterns, and for their availability to working memory.

Probing Perceptual Consequences of Ongoing Activity Variations

Andreas Kleinschmidt, INSERM-CEA, NeuroSpin, Gif/Yvette, France

Recordings of ongoing brain activity show remarkable spontaneous fluctuations, such that detecting stimulus-driven responses usually requires multiple repetitions and averaging. We have assessed the functional impact of such fluctuations on evoked neural responses and human perceptual performance. We studied human participants using functional neuroimaging and sparse event-related paradigms with sensory probes that could be either ambiguous with respect to perceptual categories (faces) or peri-liminal for a given feature (visual motion coherence). In both instances, fluctuations in the ongoing signal of accordingly specialized brain regions (FFA, hMT+) biased how upcoming stimuli were perceived. Moreover, the relation between evoked and ongoing activity was not simply additive, as previously described in other settings, but showed an interaction with perceptual outcome. This latter observation questions the logic of event-related averaging, where responses are thought to be unrelated to the level of pre-stimulus activity. We have further analyzed the functional connotation of the imaging signal by analyzing false alarm trials. Counter to the notion of this signal being a proxy of sensory evidence, false alarms were preceded by especially low signal. A theoretical framework that is compatible with our observations comes from the family of predictive coding models, the 'free energy' principle proposed by Karl Friston. Together, our findings illustrate the functional consequences of ongoing activity fluctuations and underline that they should not be left unaccounted for, as they are in mainstream approaches to data analysis.

The paradoxical negative relationship between attention-related spontaneous neural activity and perceptual decisions

Hakwan Lau, Columbia University, Donders Institute, Netherlands; Dobromir Rahnev, Columbia University

One recent study reported that when ongoing pre-stimulus fMRI activity in the dorsal attention network was high, the hit rate in an auditory detection task was surprisingly low. This result is puzzling because pre-stimulus activity in the dorsal attention network presumably reflects the subjects' attentional state, and high attention is supposed to improve perception, not impair it. However, it is important to distinguish between the capacity and decision/criterion aspects of perception. Using signal detection theoretic analysis, we provide empirical evidence that spatial attention can lead to a conservative bias in detection, even though it boosts detection capacity. In behavioral experiments we confirmed the prediction, derived from signal detection theory, that this conservative bias in detection is coupled with lowered confidence ratings in a discrimination task. Based on these results, we then used fMRI to test the hypothesis that low pre-stimulus ongoing activity in the dorsal attention network predicts high confidence ratings in a visual motion discrimination task. We confirmed this counter-intuitive hypothesis, and also found that functional connectivity (i.e., correlation) between areas within the dorsal attention network negatively predicts confidence ratings.

Taken together, these results support the notion that attention may have a negative impact on the decision/criterion aspects of perception. This negative relationship may explain why, when attention is lacking, we may have an inflated sense of subjective experience: for example, the vividness of peripheral vision, and the overconfidence of naïve subjects in inattentional blindness and change blindness experiments despite their poor performance capacity.
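For reference, the standard signal detection theoretic definitions of sensitivity (capacity) and criterion (bias) invoked above are given below; this is textbook notation rather than a formula specific to this study. With H the hit rate, F the false-alarm rate, and \Phi^{-1} the inverse of the standard normal cumulative distribution function,

d' = \Phi^{-1}(H) - \Phi^{-1}(F), \qquad c = -\tfrac{1}{2}\left[\Phi^{-1}(H) + \Phi^{-1}(F)\right].

A positive criterion c corresponds to the conservative bias described above (fewer "yes" responses overall), and can coexist with a high sensitivity d'.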

Oscillatory and Feedback Activity Mediate Conscious Visual Perception

Tony Ro, City University of New York

Under identical physical stimulus conditions, visual events are sometimes detected, whereas at other times these same visual events can go unnoticed. Using both metacontrast masking and transcranial magnetic stimulation (TMS) of the primary visual cortex to induce visual suppression, we have been examining the neural mechanisms underlying this variability in perception. Our results indicate that the timing of arrival of visual events in primary visual cortex with respect to ongoing oscillatory activity and feedback signals plays an important role in dictating whether a visual event is detected or not. Furthermore, experiments manipulating visual stimulus salience suggest that only the strength of feedforward signals, and not of feedback signals, in primary visual cortex is affected by manipulations of salience. Taken together, our studies provide some insight into the nature and variability of the neural signals that support conscious visual perception.

 

Perception of Emotion from Body Expression: Neural basis and computational mechanisms

S3 – Perception of Emotion from Body Expression: Neural basis and computational mechanisms

Friday, May 6, 12:00 – 2:00 pm, Royal Ballroom 6-8

Organizer: Martin A. Giese, Hertie Institute for Clinical Brain Research, CIN, Tübingen, Germany

Presenters: Maggie Shiffrar, Dept. of Psychology, Rutgers University, Newark, NJ; Beatrice de Gelder, Dept. of Psychology, University of Tilburg, NL; Martin Giese, Hertie Inst. f. Clinical Brain Research, CIN, Tübingen, Germany; Tamar Flash, Weizmann Institute of Science, Rehovot, IL

Symposium Description

The expression of emotion by body postures and movements is highly relevant in social communication. However, only recently has this topic attracted substantial interest in visual neuroscience. The combination of modern approaches for stimulus generation by computer graphics, psychophysics, brain imaging, research on patients with brain damage, and novel computational methods has yielded interesting new insights into the processing of these complex visual stimuli. In particular, combining experimental techniques with different computational approaches, including ones from computational vision, has revealed novel insights into the critical visual features for the perception of emotions from bodily expressions. Likewise, such approaches have provided novel insights into the relationship between visual perception and action generation, and into the influence of attention on the processing of such stimuli. The symposium brings together specialists from different fields who have studied the perception of emotional body expressions with complementary methodologies. This work has revealed the importance of affective signals conveyed by the whole body, in addition to and beyond the well-studied channel of static facial expressions. The first talk by M. Shiffrar presents work that investigates the perception of threats from body stimuli. The second contribution by B. de Gelder will discuss experiments showing that the perception of emotion from bodies is still possible without visual awareness, potentially involving subcortical visual structures. These experiments include functional imaging studies and studies in patients. The contribution by M. Giese presents several examples of how a combination of psychophysical experiments and statistical techniques from machine learning can identify the critical visual features that are essential for the recognition of emotions in interactive and non-interactive body movements. Finally, the contribution of T. Flash presents evidence from psychophysical and imaging experiments supporting the hypothesis that the visual system is tuned to spatio-temporal invariants that are common, specifically, to emotional body movements. In summary, the symposium will present examples of a novel approach to the study of complex visual mechanisms, one that provides a basis for the quantitative and well-controlled study of the visual processing of complex social signals. Such work will be of interest to a broad spectrum of VSS visitors, including faculty, researchers, and students. The topic should be of particular interest to visitors working on high-level vision, face and body perception, and motion perception.

Presentations

The perception of bodily threats

Maggie Shiffrar, Dept. of Psychology, Rutgers University, Newark, NJ

Numerous results indicate that observers are particularly sensitive to angry and fearful faces. Such heightened sensitivity supports the hypothesis that observers are best able to detect potentially harmful information. Because bodily cues to threat can be seen from farther away, the goal of our work is to determine whether observers demonstrate enhanced visual sensitivity to bodies signaling different types of threat. One set of studies used a modified "face in a crowd" paradigm in which observers viewed arrays of body postures depicting various emotional states. All emotional expressions were applied to the same generic male body with a neutral facial expression. Body postures were normed for perceived emotional content. Participants sequentially viewed circular arrays of 6 emotional body postures and reported with a key press whether or not each array contained a different or oddball body posture. Consistent with the threat advantage hypothesis, observers demonstrated speeded detection of threatening body postures. Another series of studies investigated a more subtle type of threat detection. Previous work has shown that women preferentially attend to thin bodies. We investigated whether this effect is specific to women looking at other women's bodies. In a dot probe paradigm, the strongest attentional bias was found when women looked at other women's bodies. Bias magnitude correlated positively with each observer's level of dissatisfaction with her own body. To the extent that women compare their own bodies with observed bodies, this effect also conforms to the threat advantage hypothesis. This research was supported by NSF grant EXP-SA 0730985 and the Simons Foundation (grant 94915).

Perceiving bodily expressions with or without visual awareness

Beatrice de Gelder, Dept. of Psychology, University of Tilburg, NL

Bodily expressions of emotion are powerful signals regulating communicative exchanges. For better or worse, we spend our lives surrounded by other people. Nothing is less surprising than to assume that we are trained and over-trained to read their body language. When we see someone running with his hands protecting his face, we perceive at once the fear and the action of running for cover. We rarely hesitate to assign meaning to such behaviors, and we do not wait to recognize fight behavior until we are close enough to see the person's facial expression. Here we report new findings concerning the roles of attention and of visual awareness in the perception, and in the neurofunctional basis, of our ability to recognize bodily expressions. Our experiments show that briefly seen, but also consciously unseen, bodily stimuli may induce an emotional state and trigger adaptive actions in the observer. Exposure to unseen emotional stimuli triggers activity in the cortical and subcortical visual system and is associated with somatic changes typical of emotions. Specifically, unattended but also non-consciously perceived emotional body expressions elicit spontaneous facial expressions and psychophysiological changes that reflect the affective valence and arousal components of the stimuli. Similar results are also obtained in neurologically intact subjects in whom blindsight-like effects are induced by visual masking. Moreover, participants' facial reactions are faster and autonomic arousal is higher for unseen than for seen stimuli. We will discuss the implications of these findings for current debates in human emotion theories.

Features in the perception of interactive and non-interactive bodily movements

Martin Giese, Hertie Inst. f. Clinical Brain Research, CIN, Tübingen, Germany

Body postures and movements provide important information about affective states. A variety of existing work has focused on characterizing the perception of emotions from bodies and point-light motion, often using rather qualitative or heuristic methods. Recent advances in computational learning and computer animation have opened novel possibilities for the well-controlled study of emotional signals conveyed by the human body and their visual perception. In addition, almost no quantitative work exists on the features that underlie the perception of emotions conveyed by the body during interactive behavior. Using motion capture combined with a mood induction paradigm, we systematically studied the expression and perception of emotion in interactive and non-interactive movements. Combining methods from machine learning with psychophysical experiments, we identify the kinematic features that characterize emotional movements and investigate how they drive the visual perception of emotions from the human body.

Invariants common to perception and action in bodily movements

Tamar Flash, Weizmann Institute of Science, Rehovot, IL

Behavioral and theoretical studies have focused on identifying the kinematic and temporal characteristics of various movements, ranging from simple reaching to complex drawing and curved motions. These kinematic and temporal features have been quite instrumental in investigating the organizing principles that underlie trajectory formation. Similar kinematic constraints also play a critical role in the visual perception of abstract and biological motion stimuli, and in visual action recognition. To account for these observations in the visual perception and production of body motion, we present a new model of trajectory formation inspired by geometrical invariance. The model proposes that movement duration, timing, and compositionality arise from cooperation among several geometries. Different geometries possess different measures of distance. Hence, depending on the selected geometry, movement duration is proportional to the corresponding distance parameter. Expressing these ideas mathematically, the model has led to concrete predictions concerning the kinematic and temporal features of both drawing and locomotion trajectories. The model has several important implications with respect to action observation and recognition and the underlying brain representations. Some of these implications were examined in a series of fMRI studies, which point to the importance of geometrical invariances and kinematic laws in visual motion processing.
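As one concrete illustration of the kinematic laws referred to above, offered here only as background rather than as part of the model presented in the talk, the widely studied two-thirds power law relates instantaneous speed to path curvature:

v(t) = \gamma \, \kappa(t)^{-1/3},

where v is tangential speed, \kappa is path curvature, and \gamma is a velocity gain factor; equivalently, angular velocity grows with curvature raised to the power 2/3, so that trajectories obeying the law slow down in highly curved segments.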

 

 
