Attention! Features? Objects? How features, objects, and categories control visual selective attention

Time/Room: Friday, May 15, 2015, 12:00 – 2:00 pm, Talk Room 1
Organizer(s): Rebecca Nako; Birkbeck, University of London
Presenters: Kia Nobre, Stefan Treue, Martin Eimer, Daniel Baldauf, Greg Zelinsky, Johannes Fahrenfort

Symposium Description

The cognitive and neural processes of visual selective attention have been investigated intensively for more than a century. Traditionally, most of this research has focused on spatially selective processing in vision, to the extent that “visual attention” and “spatial attention” are sometimes treated as near-synonymous. More recent investigations, however, have demonstrated that non-spatial attributes of visual objects play a critical role in the top-down control of visual attention. We now know that feature-based attention, object-based attention, and category-based attention all affect how attention is allocated to specific objects in visual selection tasks. However, the functional and neural basis of these different types of non-spatial visual attention, and the ways in which they interact with each other and with space-based attention, remain poorly understood.

The aim of this symposium is to provide new and integrated perspectives on feature-, object-, and category-based visual attention. It brings together a group of leading researchers who have all made important recent contributions to these topics, using very different methodological approaches (single-unit electrophysiology, fMRI, EEG, MEG, and computational modelling). The symposium will start with an integrative overview of current views on spatial versus non-spatial attentional control. This is followed by two presentations on the neural basis and time course of feature-based and object-based attention, which address closely related questions with monkey single-cell recordings and human electrophysiology. The second part of the symposium will focus on top-down control mechanisms of object-based attention (with fMRI and MEG) and category-based attention (using modelling methods from computer vision to predict attentional performance). The final presentation re-assesses the links between selective attention, feature integration, and object categorization, and will challenge the widely held view that feature integration requires attention.

Much recent work on visual attention is characterized by a perspective that extends beyond purely space-based models, and this symposium aims to provide a timely state-of-the-art assessment of this “new look” at visual attention. Attention research is conducted with a wide range of methodological approaches. Our symposium celebrates this methodological diversity, and will demonstrate how these different methods converge and complement each other in highlighting different aspects (such as the time course, the neural implementation, and the functional organization) of visual attention and its top-down control. The symposium brings together the research and perspectives of three highly respected researchers in the field of visual attention (Kia Nobre, Stefan Treue, and Martin Eimer), who have literally written the (Oxford Hand)book on Attention, yet have never attended VSS before, and three researchers whose recent work has had great impact on the field, who have attracted large audiences at VSS previously, but who have not had the opportunity to present in a cohesive symposium with this array of speakers. The methodological breadth of this symposium, and the fact that it will integrate new perspectives and current views on attentional control, make it ideal for a broad and very large VSS audience, including students, postdocs, and senior scientists with different specialist research interests.

Presentations

Multiple sources of attentional biases on visual processing

Speaker: Kia Nobre; University of Oxford

Attention refers to the set of mechanisms that tune psychological and neural processing in order to identify and select relevant events against competing distractions. This type of definition casts attention as a function rather than as a representation or a state. This presentation will examine the various possible “sources” of biases that can prepare perceptual mechanisms to improve interactions with the environment. Whereas many studies in the literature have probed how biases can facilitate neural processing according to the receptive-field properties of neurons, it is also possible to anticipate stimulus properties that cannot be easily mapped onto receptive fields. Space-based, feature-based, object-based, category-based, and temporal attention can all affect visual information processing in systematic and adaptive ways. In addition to such goal-related factors, there may be other potent modulators of ongoing information processing, such as long-term memories and motivational factors associated with anticipated events.

Features and objects in the physiology of attention

Speaker: Stefan Treue; University of Göttingen

Recording from single neurons in the visual cortex of rhesus monkeys trained to perform complex attention tasks has been a highly successful approach for investigating the influence of spatial and feature-based attention on sensory information processing. For object-based attention, this has proved much more difficult. The presentation will explain this difference and give examples of studies of the neural correlates of object-based attention. Because object-based attention is characterized by the spread of attention across the multiple features of a given object, the presentation will also address studies of feature-based attention involving more than one feature. The latter demonstrate that feature-based attentional modulation in extrastriate cortex seems to be restricted to those features for which a given neuron is genuinely, rather than accidentally, tuned. The data show a system of attentional modulation that combines spatial, feature-based, and object-based attention, and that seems designed to create an integrated saliency map in which the perceptual strength of a given stimulus represents the combination of its sensory strength with the behavioral relevance the system attributes to it.
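
To make the combination idea concrete, here is a minimal, hypothetical Python sketch of how an integrated saliency value might scale sensory strength by multiplicative attentional gains; the gain structure and all parameter values are illustrative assumptions, not the speaker's actual model.

```python
# Illustrative sketch (an assumption, not the speaker's model): perceptual
# strength in an integrated saliency map as sensory drive scaled by
# multiplicative attentional gains.

def perceptual_strength(sensory_drive,
                        spatial_gain=1.0,   # >1 if the stimulus is at the attended location
                        feature_gain=1.0,   # >1 if it carries an attended feature
                        object_gain=1.0):   # >1 if it belongs to the attended object
    """Combine sensory strength with attributed behavioral relevance."""
    return sensory_drive * spatial_gain * feature_gain * object_gain

# A weaker stimulus that is at the attended location and carries the attended
# feature can outrank a stronger but unattended one:
print(perceptual_strength(0.4, spatial_gain=1.3, feature_gain=1.2))  # 0.624
print(perceptual_strength(0.6))                                      # 0.6
```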

The time course of feature-based and object-based control of visual attention

Speaker: Martin Eimer; Birkbeck, University of London

Many models of attentional control in vision assume that the allocation of attention is initially guided by independent representations of task-relevant visual features, and that the integration of these features into bound objects occurs at a later stage that follows their feature-based selection. This presentation will report results from recent event-related brain potential (ERP) experiments that measured on-line electrophysiological markers of attentional object selection to dissociate feature-based and object-based stages of selective attention in real time. These studies demonstrate the existence of an early stage of attentional object selection that is controlled by local feature-specific signals. During this stage, attention is allocated in parallel and independently to visual objects with target-matching features, irrespective of whether another target-matching object is simultaneously present elsewhere. From around 250 ms after stimulus onset, information is integrated across feature dimensions, and attentional processing becomes object-based. This transition from feature-based to object-based attentional control can be found not only in tasks where target objects are defined by a combination of simple features (such as colour and form), but also when one of the two target attributes is defined at the categorical level (letter versus digit). Overall, the results of these studies demonstrate that feature-based and object-based stages of attentional selectivity in vision can be dissociated in real time.

Top-down biasing signals of non-spatial, object-based attention

Speaker: Daniel Baldauf; Massachusetts Institute of Technology

In order to understand the neural mechanisms that control non-spatial attention, such as feature-based, object-based, or modality-based attention, we apply signal-processing tools to temporally high-resolution MEG signals to identify the inter-areal communication through which large-scale attentional networks orchestrate the enhanced neural processing of attended non-spatial properties. In particular, we investigate interactions by means of synchronous, coherent oscillations of neuronal activity. Applying these methods allowed us to identify a fronto-temporal network that biases neural processing at a high, object-class level of neuronal representation. In particular, an area in the inferior part of frontal cortex, the inferior frontal junction (IFJ), seems to be a key source of non-spatial attention signals. For example, when attending to one of two spatially overlapping objects that cannot be separated in space, the IFJ engages in coherent, high-frequency oscillations with the neuronal ensembles in IT cortex that represent the currently attended object class. A detailed analysis of the phase relationships in these coupled oscillations reveals a predominant top-down directionality, with the IFJ as the driver of these coherent interactions. We propose that selective synchronization with different object representations in IT cortex allows the IFJ to route top-down information about the attended object class and to flexibly set up perceptual biases. Our results also suggest that attention networks in frontal cortex may be subdivided into dorsal and ventral subnets that provide spatial and non-spatial attention biases, respectively.
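
As a rough illustration of the generic analysis idea (not the authors' actual MEG pipeline), the following Python sketch computes spectral coherence between two simulated source time series that share a lagged gamma-band component; the sampling rate, frequency band, lag, and noise levels are all assumptions. Establishing directionality, as in the study, would additionally require phase or causality analyses.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated data
rng = np.random.default_rng(0)

# Shared 70 Hz gamma-band component, present in both regions
shared = np.sin(2 * np.pi * 70 * t)
ifj = shared + rng.normal(scale=1.0, size=t.size)                # "driver" region
it = np.roll(shared, 5) + rng.normal(scale=1.0, size=t.size)     # receiver, 5 ms lag

# Magnitude-squared coherence across frequencies
f, cxy = coherence(ifj, it, fs=fs, nperseg=1024)
band = (f >= 60) & (f <= 80)
print(f"mean 60-80 Hz coherence: {cxy[band].mean():.2f}")        # elevated in the gamma band
```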

Combining behavioral and computational tools to study mid-level vision in a complex world

Speaker: Greg Zelinsky; Stony Brook University

As vision science marches steadily into the real world, a gap has opened between theories built on data from simple stimuli and theories needed to explain more naturalistic behaviors. Can “old” theories be modified to remain relevant, or are new theories needed, tailored to these new questions? It will be argued that existing theories are still valuable, but that they must be bolstered by new computational tools if they are to bear the weight of real-world contexts. Three lines of research that attempt to bridge this theoretical divide will be discussed. The first is categorical search: the search for a target that can be any member of an object category. Whereas the largely artificial task of searching for a specific target can be modeled using relatively simple appearance-based features, modeling more realistic categorical search tasks will require methods and features adapted from computer vision. Second, we can no longer simply assume knowledge of the objects occupying our visual world; techniques must be developed to segment these objects from complex backgrounds. It will be argued that one key step in this process is the creation of proto-objects, a mid-level visual representation between features and objects. The role of image-segmentation techniques in constructing proto-objects will be discussed. Lastly, the real world creates untold opportunities for prediction. Using Kalman filters, it will be shown how motion prediction might explain performance in multiple-object tracking tasks. Rather than tearing down our theoretical houses, we should first consider remodeling them with new computational tools.
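
Since the abstract names Kalman filtering for motion prediction, a minimal constant-velocity Kalman filter sketch in Python may help illustrate the idea; the state model, noise covariances, and example measurements below are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

dt = 1.0                                  # time step between frames (assumed)
F = np.array([[1, 0, dt, 0],              # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],               # we observe position (x, y) only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01                      # process noise covariance (assumed)
R = np.eye(2) * 0.5                       # measurement noise covariance (assumed)

class ConstantVelocityKF:
    """Track one object's [x, y, vx, vy] state and predict its next position."""
    def __init__(self):
        self.x = np.zeros(4)              # state estimate
        self.P = np.eye(4) * 10.0         # state covariance (initially uncertain)

    def predict(self):
        self.x = F @ self.x               # extrapolate position along velocity
        self.P = F @ self.P @ F.T + Q
        return self.x[:2]                 # predicted (x, y)

    def update(self, z):
        innovation = z - H @ self.x       # measurement minus prediction
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ innovation
        self.P = (np.eye(4) - K @ H) @ self.P

kf = ConstantVelocityKF()
for z in ([1.0, 1.0], [2.0, 2.1], [3.1, 2.9]):   # noisy per-frame positions
    kf.predict()
    kf.update(np.array(z))
print("predicted next position:", kf.predict())  # roughly (4, 4)
```

In a multiple-object tracking context, one such filter per tracked item would supply the predicted locations against which new observations are matched.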

Neural markers of perceptual integration without attention

Speaker: Johannes Fahrenfort; Vrije Universiteit, Amsterdam

A number of studies have shown that object detection and object categorisation can occur outside consciousness, and are largely mediated by feedforward connections in the brain. Conscious object perception, on the other hand, requires a process of neuronal integration mediated by recurrent connections. The question I will address in this talk is to what extent this process of integration requires attention. Traditionally, recurrent processing has been associated with top-down attention and control. However, going against a long tradition in which attention is thought to cause feature integration, a number of studies suggest that feature integration also takes place without attention. This would imply that neuronal integration does not require attentional control. In a recent EEG experiment, we tested whether neural markers of feature integration occur without attention. Employing a 2 × 2 factorial design crossing masking and the attentional blink, we show that, behaviourally, both masking and attention affect the degree to which subjects are able to report on integrated percepts (i.e. illusory surface perception in a Kanizsa figure). However, when using a multivariate classifier on the EEG, one can decode the presence of integrated percepts equally well for blinked and non-blinked trials, whereas masking selectively abolishes the ability to decode integrated percepts (but not features). This study uncovers a fundamental difference in the way attention and masking impact cortical processing. Together, these data suggest that feature integration does not require attention, whereas it is abolished by masking.
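
As a generic stand-in for the multivariate decoding analysis described above (not the authors' pipeline), this Python sketch cross-validates a linear classifier on simulated "EEG" channel patterns; the data, classifier choice, channel count, and effect size are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_channels = 200, 64

# 1 = integrated percept present (e.g., Kanizsa surface), 0 = control
y = rng.integers(0, 2, size=n_trials)
# Channel amplitudes at one post-stimulus time point (simulated)
X = rng.normal(size=(n_trials, n_channels))
X[y == 1, :8] += 0.5            # assumed class difference on a few channels

# Cross-validated decoding accuracy; chance is 0.5 for two balanced classes
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Run separately on blinked versus masked trials, above-chance accuracy in one condition but not the other would mirror the dissociation reported in the abstract.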
