Visual Search: From youth to old age, from the lab to the world

Time/Room: Friday, May 17, 2019, 2:30 – 4:30 pm, Talk Room 2
Organizer(s): Beatriz Gil-Gómez de Liaño, Brigham & Women’s Hospital-Harvard Medical School and Cambridge University
Presenters: Beatriz Gil-Gómez de Liaño, Iris Wiegand, Martin Eimer, Melissa L-H Võ, Lara García-Delgado, Todd Horowitz


Symposium Description

In all stages of life, visual search is a fundamental part of everyday tasks: a child looking for the right Lego blocks, her parent searching for lost keys in the living room, an expert hunting for signs of cancer in a lung CT, a grandmother finding the right tablets in the medicine cabinet. Many (perhaps most) cognitive processes interact with selective attention. Those processes change from childhood to older adulthood, and vary between the processing of simple elements and the processing of more complex objects in richer environments. This symposium uses visual search tasks to probe changes and consistencies in cognition over the lifespan and across different types of real-world environments. Basic research in visual search has revealed essential knowledge about human behavior in vision and cognitive science, but usually in repetitive and unrealistic settings that often lack ecological validity. This symposium aims to go one step further, toward more realistic situations, and to give insights into how humans from early childhood to old age perform visual search in the real world. Importantly, we will show how essential this knowledge is for developing systems that address global human challenges in today's society. The multidisciplinary and applied character of this proposal, spanning vision science, neuroscience, medicine, engineering, video game applications, and education, makes it of interest to students, postdocs, faculty, and even a general audience. This symposium is a prime example of how cognitive and vision science can be transferred to society in real products that improve human lives, involving adults as well as children and older adults. The first two talks will address age differences in visual search from childhood to younger and older adulthood in more realistic environments. The third talk will give us clues to understanding how the brain processes that support visual search change over the lifespan. This lifespan approach can give us insights to better understand visual search as a whole. The fourth and fifth talks turn to visual search in the real world, its applications, and new challenges. We will review what we know about visual search in real and virtual scenes (fourth talk), including applications of visual search to real-world tasks. The last talk will show how the engineering and video game fields have made it possible to develop a reliable diagnostic tool based on crowdsourced visual search, involving people of all ages, from youth to old age, to diagnose diseases such as malaria, tuberculosis, and breast cancer in the real world.


Visual Search in children: What we know so far, and new challenges in the real world.

Speaker: Beatriz Gil-Gómez de Liaño, Brigham & Women’s Hospital-Harvard Medical School and Cambridge University

While we have a very substantial body of research on visual search in adults, there is a much smaller literature on children, despite the importance of search in cognitive development. Visual search is a vital task in the everyday life of children: looking for friends in the park, choosing the appropriate word from a word list in a quiz at school, locating the numbers given in a math problem. For feature search (e.g., a red target "popping out" among green distractors), it is well established that infants and children generally perform similarly to adults, suggesting that exogenous attention is stable across the lifespan. However, for conjunction search tasks there is evidence of age-related performance differences through all stages of life, showing the typical inverted-U-shaped function from childhood to older age. In this talk I will review some recent work and present new data showing that different mechanisms of selective attention operate at different ages within childhood, not only quantitatively but also qualitatively. Target salience, reward history, child-friendly stimuli, and video-game-like tasks may also be important factors modulating attention in visual search in childhood, showing that children's attentional processes can be more effective than has been believed to date. We will also present new results from a visual search foraging task, highlighting it as a potentially useful task for a more complete study of cognitive and attentional development in the real world. This work leads to a better understanding of typical cognitive development and gives us insights into developmental attentional deficits.

Visual Search in older age: Understanding cognitive decline.

Speaker: Iris Wiegand, Max Planck UCL Center for Computational Psychiatry and Ageing Research

Did I miss that signpost? Where did I leave my glasses? Older adults increasingly report experiencing such cognitive failures. Consistent with this experience, age-related decline has been demonstrated in standard visual search experiments. These standard laboratory tasks typically use simple stimulus material and brief trial structures that are well designed to isolate a specific cognitive process component. Real-world tasks, however, while built from these smaller components, are complex and extend over longer periods of time. In this talk, I will compare findings on age differences in simple visual search experiments with our recent findings from extended hybrid (visual and memory) search and foraging tasks. The extended search tasks resemble complex real-world tasks more closely and enable us to examine age differences in attention, memory, and strategic process components within a single task. Surprisingly, after generalized age-related slowing of reaction times (RT) was controlled for, the extended search tasks did not reveal any age-specific deficits in attention and memory functions. However, we did find age-related decline in search efficiency, which was explained by differences between age groups in foraging strategies. I will discuss how these new results challenge current theories of cognitive aging and what impact they could have on the neuropsychological assessment of age-related cognitive change.

Component processes of Visual Search: Insights from neuroscience.

Speaker: Martin Eimer, Birkbeck, University of London

I will discuss cognitive and neural mechanisms that contribute to visual search (VS), how these mechanisms are organized in real time, and how they change across the lifespan. These component processes include the ability to activate representations of search targets (attentional templates), the guidance of attention towards target objects, and the subsequent attentional selection of these objects and their encoding into working memory. The efficiency of VS performance changes considerably across the lifespan. I will discuss findings from search experiments with children, adults, and older adults, in order to understand which component processes of VS show the most pronounced changes with age. I will focus on the time course of target template activation processes, differences between space-based and feature-based attentional guidance, and the speed with which attention is allocated to search targets.

Visual Search goes real: The challenges of going from the lab to (virtual) reality.

Speaker: Melissa L-H Võ, Goethe University Frankfurt

Searching for your keys can be easy if you know where you put them. But when your daughter loves playing with keys and has the bad habit of randomly placing them in the fridge or in a pot, your routine search might become a nuisance. What makes search in the real world usually so easy and sometimes so utterly hard? There has been a trend toward studying visual perception in increasingly naturalistic settings, driven by the legitimate concern that evidence gathered from simple, artificial laboratory experiments does not translate to the real world. For instance, how can one even attempt to measure set-size effects in real-world search? Does memory play a larger role when you have to move your body towards the search target? Do target features even matter when we have scene context to guide our way? In this talk, I will review some of my lab's latest efforts to study visual search in increasingly realistic environments, the great possibilities of virtual environments, and the new challenges that arise when moving away from highly controlled laboratory settings for the sake of getting real.

Crowdsourcing Visual Search in the real world: Applications to Collaborative Medical Image Diagnosis.

Speaker: Lara García-Delgado, Biomedical Image Technologies, Department of Electronic Engineering at Universidad Politécnica de Madrid, and member of Spotlab, Spain.
Additional Authors: Miguel Luengo-Oroz, Daniel Cuadrado, & María Postigo. Universidad Politécnica de Madrid & founders of Spotlab.

We will present a project that develops collective tele-diagnosis systems through visual search video games, empowering citizens of all ages to collaborate in solving global health challenges. It is based on a crowd-computing platform that analyses medical images taken by a 3D-printed microscope attached to an internet-connected smartphone, using image processing and human crowdsourcing through online visual search and foraging video games. It runs on the collective power of society in an engaging way, using vision and big data sciences to contribute to global health. So far, more than 150,000 citizens around the world have learned about and contributed to the diagnosis of malaria and tuberculosis. The multidisciplinary nature of this project, at the crossroads of medicine, vision science, video games, artificial intelligence, and education, involves a diverse range of stakeholders and requires tailoring the message to each discipline and cultural context. From educational activities to mainstream media and policy engagement, the digital collaboration concept behind the project has already had an impact on several dimensions of society.


Speaker: Todd Horowitz, Program Director at the National Cancer Institute, USA.

The discussant will summarize the five talks and open a general discussion with the audience about applications of visual search in real life across the lifespan.
