Fixated but Failed to See: Recognition Errors in Visual Search
Poster Presentation 26.315: Saturday, May 16, 2026, 2:45 – 6:45 pm, Banyan Breezeway
Session: Visual Search: Search strategies, clinical
Jonathan Nir1 (jonathan.nir@mail.huji.ac.il), Carmel Ruth Auerbach-Asch1, Leon Yona Deouell1,2; 1Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, 2Department of Psychology, The Hebrew University of Jerusalem
People routinely perform visual search – from everyday actions like locating their keys to high-stakes tasks such as airport security screening – yet they sometimes fail to notice and respond to clearly visible items. Authors miss typos, drivers fail to stop for pedestrians, and radiologists overlook tumors. These “looked but failed to see” (LBFTS) errors (Wolfe et al., 2022) offer a useful window into the mechanisms linking perception, awareness, and action. In a subset of LBFTS errors, corresponding to Kundel’s (1978) recognition or decision errors, observers fixate on a target but still fail to report it. The probability of this phenomenon varies markedly across studies, from 2% to 34%, depending on target salience, prevalence, and specificity. By incorporating eye-tracking within a visual search task that uses easily recognizable objects and allows observers to freely refixate the target examples, we minimized memory- and decision-based errors, thereby isolating errors in which observers fixate a target but fail to recognize it. A hierarchical Bayesian logistic regression modeled the probability that a fixated target was not recognized as a function of array type (color items, grayscale items, color items with noise background), target category (human, animal, inanimate), and rotation angle (±20°). Across observers, 26.2% of target visits were classified as recognition errors. Posterior predictive estimates showed similar error rates for color items with and without a noise background (23.9% and 25.0%), but substantially higher rates for grayscale arrays (31.8%). Target category also modulated recognition: animate targets increased error probability by 20.5 percentage points (95% HDI = [12.5%, 27.5%]). Rotation angle did not meaningfully affect error probability. Our systematic characterization of recognition-based LBFTS errors in visual search reveals that both low-level visual features and high-level semantic categories shape whether observers fail to detect a target they have fixated.
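For readers unfamiliar with the analysis, the sketch below shows what a hierarchical Bayesian logistic regression of this form might look like. It is not the authors’ code: the variable names, the number of observers, the simulated placeholder data, and the priors are all assumptions added for illustration, and the model is written in PyMC as one common choice of tooling. It captures the stated structure only: per-observer intercepts plus population-level effects of array type, target category (animacy), and rotation angle on the log-odds that a fixated target goes unrecognized.

```python
import numpy as np
import pymc as pm

# Placeholder data, one row per target visit (all values simulated, not the study's data).
rng = np.random.default_rng(0)
n_visits, n_subjects = 500, 20
subject = rng.integers(0, n_subjects, n_visits)   # observer index
array_type = rng.integers(0, 3, n_visits)         # 0=color, 1=grayscale, 2=color+noise
animate = rng.integers(0, 2, n_visits)            # 1=human/animal target, 0=inanimate
rotation = rng.choice([-20.0, 20.0], n_visits)    # rotation angle in degrees
miss = rng.integers(0, 2, n_visits)               # 1 = fixated target not reported

with pm.Model() as model:
    # Population-level effects (weakly informative priors, an assumption here)
    b_array = pm.Normal("b_array", 0.0, 1.0, shape=3)
    b_animate = pm.Normal("b_animate", 0.0, 1.0)
    b_rot = pm.Normal("b_rot", 0.0, 1.0)

    # Observer-level intercepts: the hierarchical component, partially pooled
    sigma_subj = pm.HalfNormal("sigma_subj", 1.0)
    a_subj = pm.Normal("a_subj", 0.0, sigma_subj, shape=n_subjects)

    # Linear predictor on the log-odds scale (rotation scaled to +/-1)
    logit_p = (a_subj[subject] + b_array[array_type]
               + b_animate * animate + b_rot * (rotation / 20.0))

    pm.Bernoulli("miss", logit_p=logit_p, observed=miss)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

Posterior predictive error rates per array type, like the 23.9% / 25.0% / 31.8% figures reported above, would then be obtained by averaging the model’s predicted miss probabilities over draws; the 95% HDI for the animacy effect corresponds to the posterior of a contrast on b_animate transformed to the percentage-point scale.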