Beyond the FFA: The role of the ventral anterior temporal lobes in face processing

Organizers: Jessica Collins & Ingrid Olson; Temple University
Presenters: Winrich Freiwald, Stefano Anzellotti, Jessica Collins, Galia Avidan, Ed O’Neil


Symposium Description

Extensive research supports the existence of a specialized face-processing network that is distinct from the visual processing areas used for general object recognition. The majority of this work has aimed to characterize the response properties of the fusiform face area (FFA) and the occipital face area (OFA), which are thought to constitute the core network of brain regions responsible for facial identification. Recent findings of face-selective cortical regions in more anterior parts of the macaque brain, in the ventral anterior temporal lobe (vATL) and in the orbitofrontal cortex, cast doubt on this simple characterization of the face network. The macaque work is supported by fMRI studies showing functionally homologous face-processing areas in the human vATLs, and by intracranial EEG and neuropsychological research pointing to a critical role for the vATL in some aspect of face processing. The function of the vATL face patches nonetheless remains relatively unexplored, and the goal of this symposium is to bring together researchers from a variety of disciplines to address the following question: What is the functional role of the vATLs in face perception and memory, and how do they interact with the greater face network? Speakers will present recent findings organized around the following topics: 1) the response properties of the vATL face areas in humans; 2) the response properties of the vATL face area in non-human primates; 3) the connectivity of the vATL face areas with the rest of the face-processing network; 4) the role of the vATLs in the face-specific visual processing deficits of prosopagnosia; 5) the sensitivity of the vATLs to conceptual information; and 6) the representational demands that modulate the involvement of the perirhinal cortex in facial recognition. The implications of these findings for theories of face processing, and of object processing more generally, will be discussed.

Presentations

Face-processing hierarchies in primates

Speaker: Winrich Freiwald; The Rockefeller University

The neural mechanisms of face recognition have been extensively studied in both humans and macaque monkeys. Results obtained with similar technologies, chiefly functional brain imaging, now allow for detailed cross-species comparisons of face-processing circuitry. A crucial node in this circuit, at the interface of face perception and individual recognition, lies in the ventral anterior temporal lobe. In macaque monkeys, face-selective cells have been found in this region through electrophysiological recordings, a face-selective patch has been identified with functional magnetic resonance imaging (fMRI), and the unique functional properties of cells within these fMRI-identified regions have been characterized, suggesting a role in invariant face identification. Furthermore, activity in this patch has been causally linked, through combinations of electrical microstimulation and psychophysics, to different kinds of face recognition behavior. Not far from this face-selective region, experience-dependent specializations for complex object shapes and their associations have been located, and the mechanisms of these processes studied extensively. In my talk I will present this work on face processing in the ventral anterior temporal lobe of the macaque brain, its relationship to face processing in other face regions and to processes in neighboring regions, its implications for object recognition in general, and its impact on our understanding of the mechanisms of human face recognition.

Invariant representations of face identity in the ATL

Speaker: Stefano Anzellotti; Harvard University
Authors: Alfonso Caramazza, Harvard University

A large body of evidence has documented the involvement of occipitotemporal regions in face recognition. Neuropsychological studies found that damage to the anterior temporal lobes (ATL) can lead to face recognition deficits, and recent neuroimaging research has shown that the ATLs contain regions that respond more strongly to faces than to other categories of objects. What are the different contributions of anterior temporal and occipitotemporal regions to face recognition? In a series of fMRI studies, we investigated representations of individual faces in the human ATL using computer-generated face stimuli for which participants did not have individual-specific associated knowledge. Recognition of face identity from different viewpoints and from images of part of the face was tested, using an approach in which pattern classifiers are trained and tested on the responses to different stimuli depicting the same identities. The anterior temporal lobes were found to encode identity information about faces that generalizes across changes in the stimuli. Invariant information about face identity was found to be lateralized to the right hemisphere. Some tolerance across image transformations was also detected in occipitotemporal regions, but it was limited to changes in viewpoint, suggesting a process of increasing generalization from posterior to anterior temporal areas. Consistent with this interpretation, information about identity-irrelevant details of the images was found to decline moving along the posterior-anterior axis, and was not detected in the ATL.
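
To make the cross-generalization logic concrete, here is a minimal sketch of the train-on-one-view, test-on-another classification scheme; the simulated patterns, the classifier choice (a linear SVM), and all variable names are our illustrative assumptions, not the authors’ pipeline:

```python
# Minimal sketch of cross-classification ("cross-decoding") of face identity.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 200                      # hypothetical trial/ROI sizes
identities = rng.integers(0, 2, n_trials)         # two face identities (0/1)

# Simulated multivoxel patterns to the same identities seen from two
# viewpoints (in a real study these come from the fMRI data).
patterns_view_a = rng.standard_normal((n_trials, n_voxels)) + identities[:, None]
patterns_view_b = rng.standard_normal((n_trials, n_voxels)) + identities[:, None]

# Train on viewpoint A, test on viewpoint B: above-chance accuracy implies
# an identity code that generalizes across the image change.
clf = LinearSVC().fit(patterns_view_a, identities)
print(f"cross-viewpoint decoding accuracy: {clf.score(patterns_view_b, identities):.2f}")
```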

The role of the human vATL face patches in familiar face processing

Speaker: Jessica Collins; Temple University
Authors: Ingrid Olson, Temple University

Studies of nonhuman primates have reported the existence of face-sensitive patches in the ventral anterior temporal lobes. Using optimized imaging parameters, recent fMRI studies have identified a functionally homologous brain region in the ventral anterior temporal lobes (vATLs) of humans. The human vATL shows sensitivity to both perceptual and conceptual features of faces, suggesting that it is involved in some aspects of both face perception and face memory. Supporting a role of the vATLs in face perception, activity patterns in the human vATL face patches discriminate between unfamiliar facial identities, and unilateral damage to the vATLs impairs the ability to make fine-grained discriminations between simultaneously displayed faces when morphed stimuli are used. Supporting a role of the vATLs in face memory, activity in the vATLs is up-regulated for famous faces and for novel faces paired with semantic content. The left ATL appears to be relatively more sensitive to the verbal or semantic aspects of faces, while the right ATL appears to be relatively more sensitive to their visual aspects, consistent with lateralized processing of language. We will discuss the implications of these findings and propose a revised model of face processing in which the vATLs serve a centralized role in linking face identity to face memory as part of the core visual face-processing network.

Structural and functional impairment of the face processing network in congenital prosopagnosia

Speaker: Galia Avidan; Ben Gurion University
Authors: Michal Tanzer, Ben Gurion University; Marlene Behrmann, Carnegie Mellon University

There is growing consensus that accurate and efficient face recognition is mediated by a neural circuit comprising a posterior “core” and an anterior “extended” set of regions. In a series of functional and structural imaging studies, we characterize the distributed face network in individuals with congenital prosopagnosia (CP), a lifelong impairment in face processing, relative to that of matched controls. Interestingly, our results uncover largely normal activation patterns in the posterior core face patches in CP. More recently, we also documented normal activity of the amygdala (emotion processing), as well as normal, or even enhanced, functional connectivity between the amygdala and the core regions. Critically, in the same individuals, activation of the anterior temporal cortex, which is thought to mediate identity processing, was reduced, and connectivity between this region and the posterior core regions was disrupted. The dissociation between the neural profiles of the anterior temporal lobe and the amygdala was evident both during a task-related face scan and during a resting-state scan, in the absence of visual stimulation. Taken together, these findings elucidate selective disruptions in the neural circuitry of CP, and are consistent with the impaired white-matter connectivity to anterior temporal and prefrontal cortex documented in these individuals. These results implicate CP as a disconnection syndrome, rather than an alteration localized to a particular brain region. Furthermore, they offer an account of the differential difficulty with identity versus emotional expression recognition observed in many individuals with CP.

Functional role and connectivity of perirhinal cortex in face processing

Speaker: Ed O’Neil; University of Western Ontario
Authors: Stefan Köhler, University of Western Ontario

The prevailing view of medial temporal lobe (MTL) functioning holds that its structures are dedicated to declarative long-term memory. Recent evidence challenges this view, suggesting that perirhinal cortex (PrC), which interfaces the MTL with the ventral visual pathway, supports highly integrated object representations that are critical for perceptual as well as for memory-based discriminations. Here, we review research conducted with fMRI in healthy individuals that addresses the role of PrC, and its functional connectivity, in the context of face processing. Our research shows that (i) PrC exhibits a performance-related involvement in recognition memory as well as in perceptual oddball judgments for faces; (ii) PrC involvement in perceptual tasks is related to demands for face individuation; (iii) PrC exhibits resting-state connectivity with the FFA and the amygdala that has behavioural relevance for the face-inversion effect; and (iv) task demands that distinguish recognition-memory from perceptual-discrimination tasks are reflected in distinct patterns of functional connectivity between PrC and other cortical regions, rather than in differential PrC activity. Together, our findings challenge the view that mnemonic demands are the sole determinant of PrC involvement in face processing, and that its response to such demands uniquely distinguishes its role from that of more posterior ventral visual pathway regions. Instead, our findings point to the importance of considering the nature of representations and functional connectivity in efforts to elucidate the contributions of PrC and other cortical structures to face processing.


Vision and eye movements in natural environments

Organizers: Brian J. White & Douglas P. Munoz; Centre for Neuroscience Studies, Queen’s University, Kingston, ON, Canada
Presenters: Jared Abrams, Wolfgang Einhäuser, Brian J. White, Michael Dorr, Neil Mennie


Symposium Description

Understanding how we perceive and act upon complex natural environments is one of the most pressing challenges in visual neuroscience, one with the potential to revolutionize our understanding of the brain and with applications ranging from machine vision and artificial intelligence to clinical uses such as the detection of visual or mental disorders and neuro-rehabilitation. Until recently, the study of active vision (how visual stimuli give rise to eye movements and, conversely, how eye movements influence vision) has largely been restricted to simple stimuli in artificial laboratory settings. Historically, much work on the visual system has been accomplished in this way, but to fully understand vision it is essential to measure behavior under the conditions in which visual systems naturally evolved. This symposium covers some of the latest research on vision and eye movements in natural environments. The talks will explore methods of quantifying natural vision, and will compare and contrast behavior across levels of stimulus complexity and task constraint: visual search in natural scenes (Abrams, Bradley & Geisler); unconstrained viewing of natural dynamic video in humans (Dorr, Wallis & Bex) and in non-human primates during single-cell recording (White, Itti & Munoz); and real-world gaze behavior measured with portable eye tracking (Einhäuser & ‘t Hart; Mennie, Zulkifli, Mahadzir, Miflah & Babcock). The symposium should thus be of interest to a wide audience of visual psychophysicists, oculomotor neurophysiologists, and cognitive/computational scientists.

Presentations

Fixation search in natural scenes: a new role for contrast normalization

Speaker: Jared Abrams; Center for Perceptual Systems, University of Texas, Austin, USA
Authors: Chris Bradley, Center for Perceptual Systems, University of Texas, Austin; Wilson S. Geisler, Center for Perceptual Systems, University of Texas, Austin.

Visual search is a fundamental behavior, yet little is known about search in natural scenes. Previously, we introduced the ELM (entropy limit minimization) fixation-selection rule, which selects fixations that maximally reduce uncertainty about the location of the target. This rule closely approximates the Bayesian optimal decision rule but is computationally simpler, making the ELM rule a useful benchmark for characterizing human performance. We found that the ELM rule predicts several aspects of fixation selection in naturalistic (1/f) noise, including the distributions of fixation location, saccade magnitude, and saccade direction. However, the ELM rule is only optimal when the detectability of the target (the visibility map) falls off from the point of fixation in the same way for all potential fixation locations, which holds for backgrounds with relatively constant spatial structure, like statistically stationary 1/f noise. Most natural scenes do not satisfy this assumption; they are highly non-stationary. By combining empirical measurements of target detectability in natural backgrounds with a straightforward mathematical analysis, we arrive at a generalized ELM rule (nELM rule) that is optimal for non-stationary backgrounds. The nELM searcher divides (normalizes) the current target probability map (posterior-probability map) by the estimated local contrast at each location in the map. It then blurs (convolves) this normalized map with the visibility map for a uniform background. The peak of the blurred map is the optimal location for the next fixation. We will describe the predictions and performance of the nELM searcher.
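
A minimal sketch of the nELM steps just described, assuming a simple local RMS contrast estimator and illustrative inputs (the authors’ actual implementation may differ):

```python
# Sketch of the nELM fixation-selection step: normalize, blur, take the peak.
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import fftconvolve

def next_fixation(posterior, scene, visibility_map, window=15):
    """Return the (row, col) of the nELM-selected next fixation.
    posterior: current target probability map; scene: grayscale image;
    visibility_map: detectability falloff measured on a uniform background."""
    # Estimate local RMS contrast at each location (one simple estimator).
    local_mean = uniform_filter(scene, size=window)
    local_var = np.maximum(uniform_filter((scene - local_mean) ** 2, size=window), 0.0)
    local_contrast = np.sqrt(local_var)
    # 1) Normalize the posterior-probability map by local contrast.
    normalized = posterior / (local_contrast + 1e-6)
    # 2) Blur the normalized map with the uniform-background visibility map.
    blurred = fftconvolve(normalized, visibility_map, mode="same")
    # 3) The peak of the blurred map is the next fixation.
    return np.unravel_index(np.argmax(blurred), blurred.shape)

# Example with random stand-ins for the three maps.
rng = np.random.default_rng(1)
scene = rng.standard_normal((128, 128))
posterior = rng.random((128, 128)); posterior /= posterior.sum()
visibility = np.outer(np.hanning(31), np.hanning(31))   # stand-in falloff kernel
print(next_fixation(posterior, scene, visibility))
```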

Eye movements in natural scenes and gaze in the real world

Speaker: Wolfgang Einhäuser; Philipps-University Marburg, Department of Neurophysics, Marburg, Germany
Authors: Bernard Marius ‘t Hart, Philipps-University Marburg, Department of Neurophysics, Marburg, Germany.

Gaze is widely considered a good proxy for spatial attention. We address whether such “overt attention” is related to other attention measures in natural scenes, and to what extent laboratory results on eye movements transfer to real-world gaze orienting. We find that the probability that a target is detected in a rapid serial visual presentation task correlates with the probability that it is fixated during prolonged viewing, and that both measures are similarly affected by modifications to the target’s contrast. This shows a direct link between covert attention in time and overt attention in space for natural stimuli. Especially in the context of computational vision, the probability that an item is fixated (“salience”) is frequently equated with its “importance”, the probability that it is recalled during scene description. While we confirm a relation between salience and importance, we dissociate these measures by changing an item’s contrast: whereas salience is affected by the actual features, importance is driven by the observer’s expectations about these features based on scene statistics. Using a mobile eye-tracking device, we demonstrate that eye-tracking experiments in typical laboratory conditions have limited predictive power for real-world gaze orienting. Laboratory data fail to measure the substantial effects of implicit tasks imposed on the participant by the environment to avoid severe costs (e.g., tripping), and typically fail to capture the distinct contributions of eye, head, and body movements to orienting gaze. Finally, we provide some examples of applications of mobile gaze tracking to ergonomic workplace design and medical diagnostics.

Visual coding in the superior colliculus during unconstrained viewing of natural dynamic video

Speaker: Brian J. White; Centre for Neuroscience Studies, Queen’s University, Kingston, ON, Canada
Authors: Laurent Itti, Dept of Computer Science, University of Southern California, USA; Douglas P. Munoz, Centre for Neuroscience Studies, Queen’s University, Kingston, ON, Canada

The superior colliculus (SC) is a multilayered midbrain structure with visual representations in the superficial layers (SCs) and sensorimotor representations linked to the control of eye movements and attention in the intermediate layers (SCi). Although we have extensive knowledge of the SC from studies using simple stimuli, we know little about how it behaves during active vision of complex natural stimuli. We recorded single units in the monkey SC during unconstrained viewing of natural dynamic video. We used a computational model to predict visual saliency at any retinal location and any point in time. We parsed fixations into tertiles according to the average model-predicted saliency value (low, medium, high) in the response field (RF) around the time of fixation (50-400 ms post-fixation). The results showed a systematic increase in post-fixation discharge with increasing saliency. We then examined a subset of the total fixations based on the direction of the next saccade (into versus opposite the RF), under the assumption that saccade direction coarsely indicates the top-down goal of the animal (the “value” of the goal-directed stimulus). SCs neurons showed the same enhanced response for greater saliency irrespective of the direction of the next saccade, whereas SCi neurons showed an enhanced response for greater saliency only when the stimulus that evoked it was the goal of the next saccade (i.e., was of interest or value). This implies that saliency is controlled closer to the output of the saccade circuit, where priority (the combined representation of saliency and relevancy) is presumably signaled and the saccade command is generated. The results support functionally distinct roles of SCs and SCi, whereby the former fits the role of a visual saliency map and the latter that of a priority map.
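
As a rough illustration of the binning analysis described above (not the authors’ code; the column names and random data are placeholders), the saliency-tertile split can be written in a few lines:

```python
# Bin fixations by model-predicted saliency in the RF, then average the
# post-fixation discharge per bin.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
fixations = pd.DataFrame({
    "rf_saliency": rng.random(300),              # model saliency in the RF
    "rate_50_400ms": rng.random(300) * 40,       # spikes/s, 50-400 ms post-fixation
})
fixations["saliency_tertile"] = pd.qcut(fixations["rf_saliency"], 3,
                                        labels=["low", "medium", "high"])
print(fixations.groupby("saliency_tertile", observed=True)["rate_50_400ms"].mean())
```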

Visual sensitivity under naturalistic viewing conditions

Speaker: Michael Dorr; Schepens Eye Research Institute, Dept of Ophthalmology, Harvard Medical School, and Institute for Neuro- and Bioinformatics, University of Lübeck, Germany
Authors: Thomas S Wallis, Schepens Eye Research Institute, Dept of Ophthalmology, Harvard Medical School, and Centre for Integrative Neuroscience and Department of Computer Science, The University of Tübingen, Tübingen, Germany; Peter J Bex, Schepens Eye Research Institute, Dept of Ophthalmology, Harvard Medical School.

Psychophysical experiments typically use very simple stimuli, such as isolated dots and gratings on uniform backgrounds, and allow either no eye movements or only highly stereotyped ones. While these viewing conditions are highly controllable, they are not representative of real-world vision, which is characterized by complex, broadband input and several eye movements per second. We performed a series of experiments in which subjects freely watched high-resolution nature documentaries and TV shows on a gaze-contingent display. Eye tracking at 1000 Hz and fast video-processing routines allowed us to precisely modulate the stimulus in real time and in retinal coordinates. The task was to locate either bandpass contrast changes or geometric distortions that briefly appeared in one of four locations relative to the fovea every few seconds. We confirm a well-known loss of sensitivity when video modulations took place around the time of eye movements, i.e., around episodes of high-speed retinal motion. However, we found that replicating the same retinal input in a passive condition, in which subjects maintained central fixation and the video was shifted on the screen, led to a comparable loss in sensitivity. We conclude that no process of active, extra-retinal suppression is needed to explain peri-saccadic visual sensitivity under naturalistic conditions. We further find that the detection of spatial modifications depends on the spatio-temporal structure of the underlying scene, such that distortions are harder to detect in areas that vary rapidly across space or time. These results highlight the importance of naturalistic assessment for understanding visual processing.
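
For concreteness, the core of such a gaze-contingent manipulation is re-anchoring the stimulus to the latest gaze sample on every frame; the following sketch is a hypothetical illustration of that geometry, not the authors’ software:

```python
# Keep a probe at a fixed retinal (fovea-relative) offset by recomputing its
# screen position from the current gaze sample on each video frame.
def probe_screen_position(gaze_px, retinal_offset_deg, px_per_deg):
    """Screen position (pixels) that places the probe at a fixed offset from
    the fovea, given the latest gaze sample; rerun every frame."""
    return (gaze_px[0] + retinal_offset_deg[0] * px_per_deg,
            gaze_px[1] + retinal_offset_deg[1] * px_per_deg)

# Example: gaze at the center of a 1920x1080 screen, probe 4 deg right of the
# fovea, with an assumed scale of 30 pixels per degree.
print(probe_screen_position((960, 540), (4.0, 0.0), px_per_deg=30.0))
```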

Spatio-temporal dynamics of the use of gaze in natural tasks by a Sumatran orangutan (Pongo abelii)

Speaker: Neil Mennie; University of Nottingham, Malaysia Campus, Malaysia
Authors: Nadia Amirah Zulkifli, University of Nottingham Malaysia Campus; Mazrul Mahadzir, University of Nottingham Malaysia Campus; Ahamed Miflah, University of Nottingham Malaysia Campus; Jason Babcock, Positive Science LLC, New York, USA.

Studies have shown that in natural tasks, where actions are often programmed sequentially, human vision is an active, task-specific process (Land et al., 1999; Hayhoe et al., 2003). Vision plays an important role in the supervision of these actions, and knowledge of our surroundings and of the spatial relationships within the immediate environment is vital for successful task scheduling and the coordination of complex action. However, little is known about the use of gaze in natural tasks by great apes. Orangutans usually live high in the canopy of the rainforests of Borneo and Sumatra, where good spatial knowledge of the immediate surroundings must be important to an animal that can accurately reach and grasp with four limbs and move along branches. We trained a 9-year-old, captive-born Sumatran orangutan to wear a portable eye tracker and recorded her use of gaze in a number of tasks, such as locomotion, visual search, and tool use, in an enclosure at the National Zoo of Malaysia. We found that her gaze was task-specific, with different eye movement metrics in different tasks. We also found that this animal made anticipatory, look-ahead eye movements to future targets (Mennie et al., 2007) when picking up sultanas from a board with her upper limbs. This semi-social animal is thus likely capable of high-level use of gaze similar to that of a social hominid species: humans.


2014 Symposia

Vision and eye movements in natural environments

Organizers: Brian J. White & Douglas P. Munoz, Centre for Neuroscience Studies, Queen’s University, Kingston, ON, Canada

Historically, the study of vision has largely been restricted to the use of simple stimuli in controlled tasks where observers are required to maintain stable gaze or make stereotyped eye movements. This symposium presents some of the latest research aimed at understanding how the visual system behaves during unconstrained viewing of natural scenes, dynamic video, and real-world environments. Understanding how we perceive and act upon complex natural environments has the potential to revolutionize our understanding of the brain, with applications ranging from machine vision and artificial intelligence to clinical uses such as the detection of visual or mental disorders and neuro-rehabilitation. More…

Beyond the FFA: The role of the ventral anterior temporal lobes in face processing

Organizers: Jessica Collins & Ingrid Olson, Temple University

Although accruing evidence has shown that face-selective patches in the ventral anterior temporal lobes (vATLs) are highly interconnected with the FFA and OFA, and that they play a necessary role in facial perception and identification, the contribution of these brain areas to the face-processing network remains elusive. The goal of this symposium is to bring together researchers from a variety of disciplines to address the following question: What is the functional role of the vATLs in face perception and memory, and how do they interact with the greater face network? More…

Mid-level representations in visual processing

Organizer: Jonathan Peirce, University of Nottingham

The majority of studies in vision science focus on the representation of low-level features, such as edges, color, or motion, or on the representation of high-level constructs such as objects, faces, and places. Surprisingly little work aims to understand the link between the two: the intermediate representations of “mid-level” vision. This symposium invites a series of speakers who have spent time working on mid-level vision to present their views on what those intermediate representations might be, on the problems that such processing must overcome, and on the methods that we might use to understand them better. More…

The visual white-matter matters! Innovation, data, methods and applications of diffusion MRI and fiber tractography

Organizers: Franco Pestilli & Ariel Rokem, Stanford University

Many regions of the cerebral cortex are involved in visual perception and cognition. In this symposium, we will focus on the neuroanatomical connections between them. To study the visual white-matter connections, speakers in this symposium use diffusion MRI (dMRI), an imaging method that probes the directional diffusion of water. The talks will present studies of connectivity between visual processing streams, the development of the visual white matter, and the role of white matter in visual disorders. We will also survey publicly available resources that the Vision Sciences community can use to extend the study of the visual white matter. More…

What are you doing? Recent advances in visual action recognition research

Organizers: Stephan de la Rosa & Heinrich Bülthoff, Max Planck Institute for Biological Cybernetics

Knowing what another person is doing by visually observing that person’s actions (action recognition) is critical for human survival. Although humans often have little difficulty recognizing the actions of others, the underlying psychological and neural processes are complex. Understanding these processes has implications not only for the scientific community but also for the development of man-machine interfaces, robots, and artificial intelligence. The current symposium summarizes recent scientific advances in the realm of action recognition by providing an integrative view of the processes underlying it. More…

Understanding representation in visual cortex: why are there so many approaches and which is best?

Organizers: Thomas Naselaris & Kendrick Kay, Department of Neurosciences, Medical University of South Carolina & Department of Psychology, Washington University in St. Louis

Central to visual neuroscience is the problem of representation: what features of the visual world drive activity in the visual system? In recent years a variety of new methods for characterizing visual representation have been proposed. These include multivariate pattern analysis, representational similarity analysis, the use of abstract semantic spaces, and models of stimulus statistics. In this symposium, invitees will present recent discoveries in visual representation, explaining the generality of their approach and how it might be applicable to future studies. Through this forum we hope to move towards an integrative approach that can be shared across experimental paradigms. More…

2014 Young Investigator – Duje Tadin

Duje Tadin

Associate Professor, Department of Brain and Cognitive Sciences, Center for Visual Science, Department of Ophthalmology, University of Rochester, NY, USA

Duje Tadin is the 2014 winner of the Elsevier/VSS Young Investigator Award. Trained at Vanderbilt University, Duje Tadin was awarded a Ph.D. in Psychology in 2004 under the supervision of Joe Lappin. After three years of post-doctoral work in Randolph Blake’s lab, he took up a position at the University of Rochester, where he is currently an associate professor. Duje’s broad research goal is to elucidate the neural mechanisms that lead to human visual experience. He seeks converging experimental evidence from a range of methods, including human psychophysics, computational modeling, transcranial magnetic stimulation (TMS), neuroimaging, research on special populations, collaborations on primate neurophysiology, and adaptive optics to control retinal images. Duje is probably best known for his elegant and illuminating research on the spatial mechanisms of visual motion perception, work that has had a lasting impact on the field. He developed a new method to quantify motion perception at brief, ecologically relevant time scales, and then used it to discover a functionally important phenomenon of spatial suppression: larger motion patterns are paradoxically more difficult to see. Duje’s results revealed joint influences of spatial integration and segmentation mechanisms, showing that the balance between these two competing mechanisms is not fixed but varies with visibility, with spatial summation giving way to spatial suppression as visibility increases. He has also made significant contributions to several high-profile papers dealing with binocular rivalry, rapid visual adaptation, multi-sensory interactions, and visual function in individuals with low vision and children with autism.

Elsevier/Vision Research Article

Dr. Tadin’s presentation:

Suppressive neural mechanisms: from perception to intelligence

Monday, May 19, 12:30 pm, Talk Room 2

Perception operates on an immense amount of incoming information that greatly exceeds the brain’s processing capacity. Because of this fundamental limitation, our perceptual efficiency is constrained by the ability to suppress irrelevant information. Here, I will present a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. We find that these suppressive mechanisms are adaptive, operating only when the sensory input is sufficiently strong to guarantee visibility. Utilizing a range of methods, we link these behavioral results with inhibitory center-surround receptive fields, such as those in cortical area MT.

What are the functional roles of spatial suppression? Spatial suppression is weaker in old age and in schizophrenia, as evidenced by paradoxically better-than-normal performance in some conditions. Moreover, these subjects also exhibit deficits in figure-ground segregation, suggesting a functional connection. In recent studies, we report direct experimental evidence for a functional link between spatial suppression and figure-ground segregation.

Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, we find that individual differences in spatial suppression of motion signals strongly predict individual variations in WAIS IQ scores (r = 0.71).

2014 Keynote – Mandyam V. Srinivasan

Mandyam V. Srinivasan, Ph.D.

Queensland Brain Institute and School of Information Technology and Electrical Engineering, University of Queensland

Audio and slides from the 2014 Keynote Address are available on the Cambridge Research Systems website.

MORE THAN A HONEY MACHINE: Vision and Navigation in Honeybees and Applications to Robotics

Saturday, May 17, 2014, 7:15 pm, Talk Room 1-2

Flying insects are remarkably adept at seeing and perceiving the world and navigating effectively in it, despite possessing a brain that weighs less than a milligram and carries fewer than 0.01% as many neurons as ours does. Although most insects lack stereo vision, they use a number of ingenious strategies for perceiving their world in three dimensions and navigating successfully in it.

The talk will describe how honeybees use their vision to stabilize and control their flight, and navigate to food sources. Bees and birds negotiate narrow gaps safely by balancing the apparent speeds of the images in the two eyes. Flight speed is regulated by holding constant the average image velocity as seen by both eyes. Visual cues based on motion are also used to compensate for crosswinds, and to avoid collisions with other flying insects. Bees landing on a surface hold constant the magnitude of the optic flow that they experience as they approach the surface, thus automatically ensuring that flight speed decreases to zero at touchdown. Foraging bees gauge distance flown by integrating optic flow: they possess a visually driven “odometer” that is robust to variations in wind, body weight, energy expenditure, and the properties of the visual environment. Mid-air collisions are avoided by sensing cues derived from visual parallax, and using appropriate flight control manoeuvres.
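
The constant-optic-flow landing strategy has a simple quantitative consequence, sketched below in our own notation (not taken from the talk): holding image velocity constant forces approach speed to decay in proportion to distance.

```latex
% Let d(t) be distance to the surface and v(t) = -\dot{d}(t) the approach speed.
% Holding the optic flow (angular image velocity) \omega constant gives:
\omega = \frac{v(t)}{d(t)} = \text{const}
\quad\Longrightarrow\quad
\dot{d}(t) = -\omega\, d(t)
\quad\Longrightarrow\quad
d(t) = d_0 e^{-\omega t}, \qquad v(t) = \omega d_0 e^{-\omega t},
% so speed falls exponentially, in proportion to distance, reaching zero at touchdown.
```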

Some of the insect-based strategies described above are being used to design, implement, and test biologically inspired algorithms for the guidance of autonomous terrestrial and aerial vehicles. Applications to manoeuvres such as attitude stabilization, terrain following, obstacle avoidance, automated landing, and the execution of extreme aerobatics will be described.

This research was supported by ARC Centre of Excellence in Vision Science Grant CE0561903, ARC Discovery Grant DP0559306, and by a Queensland Smart State Premier’s Fellowship.

Biography

Srinivasan’s research focuses on the principles of visual processing, perception and cognition in simple natural systems, and on the application of these principles to machine vision and robotics. He holds an undergraduate degree in Electrical Engineering from Bangalore University, a Master’s degree in Electronics from the Indian Institute of Science, a Ph.D. in Engineering and Applied Science from Yale University, a D.Sc. in Neuroethology from the Australian National University, and an Honorary Doctorate from the University of Zurich. Srinivasan is presently Professor of Visual Neuroscience at the Queensland Brain Institute and the School of Information Technology and Electrical Engineering of the University of Queensland. Among his awards are Fellowships of the Australian Academy of Science, the Royal Society of London, and the Academy of Sciences for the Developing World, the 2006 Australian Prime Minister’s Science Prize, the 2008 U.K. Rank Prize for Optoelectronics, the 2009 Distinguished Alumni Award of the Indian Institute of Science, and Membership of the Order of Australia (AM) in 2012.
