Common mechanisms in Time and Space perception

Friday, May 8, 1:00 – 3:00 pm
Royal Ballroom 1-3

Organizer: David Eagleman (Baylor College of Medicine)

Presenters: Concetta Morrone (Università di Pisa, Pisa, Italy), Alex Holcombe (University of Sydney), Jonathan Kennedy (Cardiff University), David Eagleman (Baylor College of Medicine)

Symposium Description

Most of the actions we carry out on a daily basis require timing on the scale of tens to hundreds of milliseconds. We must judge time to speak, to walk, to predict the interval between our actions and their effects, to determine causality, and to decode information from our sensory receptors. However, the neural bases of time perception are largely unknown. Scattered confederacies of investigators have been interested in time for decades, but only in the past few years have new techniques been applied to old problems. Experimental psychology is discovering how animals perceive and encode temporal intervals, while physiology, fMRI, and EEG are revealing how neurons and brain regions carry out these computations. This symposium will capitalize on new breakthroughs, outlining the emerging picture and highlighting the remaining confusions about time in the brain. How do we encode and decode temporal information? How is information arriving in different brain regions at different times synchronized? How plastic is time perception? How is it related to space perception? The experimental work of the speakers, who are engaged in complementary approaches to sub-second timing and its relation to space, will be brought together to ask how neural signals in different brain regions combine into a temporally unified picture of the world, and how this relates to the mechanisms of space perception.

Abstracts

A neural model for temporal order judgments and their active recalibration: a common mechanism for space and time?

David M. Eagleman, Mingbo Cai, Chess Stetson

Human temporal order judgments (TOJs) dynamically recalibrate when participants are exposed to a delay between their motor actions and sensory effects. Here we present a novel neural model that captures TOJs and their recalibration. The model employs two ubiquitous features of neural systems: synaptic scaling at the single-neuron level and opponent processing at the population level. Essentially, the model posits that different populations of neurons encode different delays between motor-sensory or sensory-sensory events, and that these populations feed into opponent-processing neurons that employ synaptic scaling. The system uses the difference in activity between the populations encoding 'before' and 'after' to reach a decision. As a consequence, if the network's 'motor acts' are consistently followed by sensory feedback at a delay, the network automatically recalibrates, shifting the perceived point of simultaneity between action and sensation. Our model suggests that temporal recalibration may be a temporal analogue of the motion aftereffect. We hypothesize that the same neural mechanisms are used to make perceptual determinations about both space and time, depending on the information available in the neural neighborhood in which the module operates.
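A minimal computational sketch may make the opponent-process scheme concrete. The code below is an illustrative reconstruction, not the authors' published implementation: the Gaussian tuning curves, the synaptic-scaling rule, and all names and parameter values (e.g. population_response, the 60 ms tuning width, the learning rate) are assumptions chosen only to exhibit the qualitative behavior the abstract describes, namely that repeated exposure to lagged feedback shifts the point of subjective simultaneity (PSS).

```python
import numpy as np

# Illustrative sketch of an opponent-process TOJ model with synaptic
# scaling (a reconstruction under assumed parameters, not the authors' code).

rng = np.random.default_rng(0)

# Population of units tuned to motor-sensory delays (ms).
preferred = np.linspace(-300, 300, 61)   # each unit's preferred delay
sigma = 60.0                             # tuning width (ms), assumed

def population_response(delay_ms):
    """Gaussian tuning: each unit fires most at its preferred delay."""
    return np.exp(-0.5 * ((delay_ms - preferred) / sigma) ** 2)

# Opponent readout: the 'before' channel pools units preferring
# sensation-before-action (negative delays); 'after' pools positive delays.
w_before = (preferred < 0).astype(float)
w_after = (preferred > 0).astype(float)

def judge(delay_ms):
    """+1 for 'sensation after action', -1 for 'before' (difference of pools)."""
    r = population_response(delay_ms)
    return np.sign(w_after @ r - w_before @ r)

def recalibrate(exposure_delay_ms, n_trials=200, rate=0.01):
    """Synaptic scaling: a unit's output weight is scaled down in
    proportion to its recent drive, so units that fire on every
    exposure trial lose gain and the opponent balance shifts."""
    global w_before, w_after
    for _ in range(n_trials):
        r = population_response(exposure_delay_ms + rng.normal(0, 20))
        w_after *= 1 - rate * r * (preferred > 0)
        w_before *= 1 - rate * r * (preferred < 0)

def pss():
    """Point of subjective simultaneity: delay at which the judgment flips."""
    delays = np.linspace(-200, 200, 401)
    signs = np.array([judge(d) for d in delays])
    return delays[np.argmax(signs > 0)]

print("PSS before adaptation:", pss())   # near 0 ms
recalibrate(exposure_delay_ms=100)       # feedback consistently lags by 100 ms
print("PSS after adaptation:", pss())    # shifted toward positive delays
```

The opponent readout is what gives the model its motion-aftereffect flavor: adaptation reduces the gain of the more strongly driven channel, so the null point of the 'before'/'after' comparison migrates toward the adapted delay.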

Space-time in the brain

Concetta Morrone, David Burr

The perception of space and the perception of time are generally studied separately and thought of as independent dimensions. However, recent research suggests that these attributes are tightly interlinked: event timing may be modality-specific and closely bound to space. During saccadic eye movements, perceived time becomes severely compressed and can even appear to run backwards. Adaptation experiments further suggest that visual events of sub-second duration are timed by neural visual mechanisms with spatially circumscribed receptive fields, anchored in real-world rather than retinal coordinates. All of these results sit well with recent evidence implicating parietal cortex in the coding of both space and sub-second interval timing.

Adaptation to space and to time

Jonathan Kennedy, M.J. Buehner, S.K. Rushton

Human behavioural adaptation to delayed visual-motor feedback has been investigated by Miall and Jackson (2006: Exp Brain Res) in a closed-loop manual tracking task with a semi-predictably moving visual target. In intersensory, open-loop, and predictable sensory-motor tasks, perceptual adaptation of the involved modalities has been demonstrated repeatedly in recent years, using temporal order judgments and perceptual illusions (e.g. Stetson, Cui, Montague, & Eagleman, 2006: Neuron; Fujisaki, Shimojo, Kashino, & Nishida, 2004: Nature Neuroscience).
Here we present results from two series of experiments: the first investigating perceptual adaptation in Miall and Jackson’s tracking task, by adding visual-motor temporal order judgments; and the second investigating the localization of perceptual adaptation across the involved modalities.
We will discuss these results in the light of recent developments in modeling adaptation to misalignment in the spatial (Witten, Knudsen, & Sompolinsky, 2008: J Neurophysiol) and temporal (Stetson et al., 2006) domains, and consider their implications for what, if any, common mechanisms and models may underlie all forms of adaptation to intersensory and sensory-motor misalignment.

A temporal limit on judgments of the position of a moving object

Alex Holcombe, Daniel Linares, Alex L. White

The mechanisms of time perception have consequences for perceived position when one attempts to determine the position of a moving object at a particular time. While viewing a luminance-defined blob orbiting fixation, our observers report the blob's perceived position at the moment the fixation point changes color. In addition to the error in the direction of motion (the flash-lag effect), we find that the standard deviation of position judgments grows over a five-fold range of speeds such that it corresponds to a constant 70-80 ms of the blob's trajectory (see also Murakami 2001). This result stands in sharp contrast to acuity tasks with two objects moving together, for which thresholds vary very little with velocity. If the ~70 ms of temporal variability depended on low-level factors, we would expect a different result when we triple the eccentricity, yet this had little effect. If the variability is due to uncertainty about the time of the color change, then we should be able to reduce it by using a sound as the time marker (as the auditory system may have better temporal resolution) or by using a predictable event, such as the moment a dot moving at constant velocity arrives at fixation. Although average error differs substantially across these conditions, in both the reported positions still spanned about 70-80 ms of the blob's trajectory. Finally, when observers attempt to press a button in time with the arrival of the blob at a landmark, the standard deviation of their errors is about 70 ms. We theorize that this temporal imprecision originates in the same mechanisms responsible for the poor temporal resolution of feature binding (e.g. Holcombe & Cavanagh 2001; Fujisaki & Nishida 2005).
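The constant-time account makes a simple quantitative prediction worth making explicit: if position judgments carry a fixed ~70-80 ms of temporal noise, positional scatter should grow linearly with speed while remaining constant when expressed as trajectory time. A minimal sketch of this arithmetic (the 75 ms figure comes from the abstract's 70-80 ms range; the specific speeds are assumed purely for illustration):

```python
# Back-of-the-envelope check of the constant temporal-noise account.
# The 75 ms figure is from the abstract; the speeds are assumed examples.
temporal_sd_ms = 75.0

for speed_deg_per_s in [2, 4, 6, 8, 10]:   # a five-fold range of speeds
    # Fixed temporal noise translates into speed-proportional spatial noise.
    positional_sd_deg = speed_deg_per_s * temporal_sd_ms / 1000.0
    print(f"{speed_deg_per_s:2d} deg/s -> positional SD "
          f"{positional_sd_deg:.2f} deg (= {temporal_sd_ms:.0f} ms of trajectory)")
```

Under this account, positional SD plotted against speed is a line through the origin; a fixed spatial-acuity limit would instead predict a flat line, which is the contrast the abstract draws with two-object acuity tasks.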