A serial attentional bottleneck in texture processing: Evidence from a dual-task continuous-report paradigm
Poster Presentation 26.408: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Attention: Features, objects
Mary Catington1, Michael Pratte1; 1Mississippi State University
Recent work suggests that simple visual features, such as color, can be attended in parallel across items, whereas more complex information is subject to a serial processing limitation: for example, observers can read only one word at a time, and have difficulty perceiving more than one object simultaneously. Here we explore the degree to which mid-level visual features such as texture can be processed in parallel across objects. Our results suggest that attempting to perceive two textures simultaneously incurs a significant performance cost relative to perceiving only one. The critical question is how exactly performance suffers when two visual stimuli must be attended: do observers build less precise representations of both items, or is the trade-off discrete, such that on some trials both stimuli are perceived with high precision, but on other trials only one is perceived? To address this question we developed a task in which participants reported the orientation of a single texture, or of two textures presented simultaneously. Performance was worse when two textures had to be perceived, and the pattern of report errors implies a discrete model: on some trials performance was high for both stimuli, whereas on other trials only one stimulus was perceived accurately while the other failed to be seen at all. These results support the proposal of Popovkina, Palmer, Moore, and Boynton (2021) that the attentional bottleneck is serial, but that in some cases attention can shift rapidly across items such that both are perceived with high resolution. Our findings provide evidence that this serial bottleneck arises as early as mid-level visual texture segmentation, rather than only in higher-order object recognition.
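The discrete-versus-graded distinction described in the abstract is commonly tested by examining the shape of continuous-report error distributions. The following Python sketch is an illustration of that general logic, not the authors' actual analysis: it simulates report errors under a graded account (both items represented, at lower precision) and a discrete account (a mixture of precise reports and uniform guesses). All parameter values (`kappa`, `p_attend`) are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Graded account: attending two items yields noisier but always-present
# representations, so report errors stay unimodal, just wider.
graded = rng.vonmises(0.0, kappa=4.0, size=n)

# Discrete account: on a fraction of trials an item is perceived precisely;
# on the remaining trials it is missed and the report is a random guess.
p_attend = 0.6                                    # assumed mixture weight
attended = rng.vonmises(0.0, kappa=16.0, size=n)  # high precision when attended
guesses = rng.uniform(-np.pi, np.pi, size=n)      # uniform guessing when missed
discrete = np.where(rng.random(n) < p_attend, attended, guesses)

def tail_fraction(errors, cutoff=np.pi / 2):
    """Fraction of reports far from the true value (the 'guessing' tail)."""
    return np.mean(np.abs(errors) > cutoff)

# The discrete mixture has a sharp peak plus a flat guessing floor, so its
# far-from-target tail is much heavier than the graded account's.
print(f"graded tail:   {tail_fraction(graded):.3f}")
print(f"discrete tail: {tail_fraction(discrete):.3f}")
```

Fitting such a mixture model to observed errors, and asking whether a guessing component is needed at all, is one standard way to adjudicate between the two accounts.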
Acknowledgements: This work was supported by National Institutes of Health Grant R15MH113075.