Dynamics of Spatial and Object-Based Attention
Poster Presentation 26.411: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Attention: Features, objects
John Gonzalez-Amoretti1,2, Adam Snyder1,2,3; 1Neuroscience Graduate Program, University of Rochester, 2Brain and Cognitive Sciences, University of Rochester, 3Center for Visual Science, University of Rochester
Selective attention facilitates behavior by filtering out irrelevant visual information and prioritizing goal-relevant stimuli. These processes depend on internal templates that specify the identity of selection targets, guiding attention to their possible locations. However, the neurocomputational mechanisms by which these templates are formed and then accessed to guide attention remain unclear. We propose that attentional templates are generated by transforming a sensory code into a task-ready latent code held in higher-level areas such as frontoparietal attention networks. A latent template representation provides a flexible code that can be integrated with other processes, such as spatial orienting and saccade planning, or retransformed into a sensory code for evaluating sensory input. To study this, we designed a task that selectively engages spatial attention, object-based attention, or a combined process depending on the cueing condition. Unlike typical paradigms that rely on exact template matching or low-level feature cues such as colors or shapes, our design promotes higher-order search strategies by requiring categorical template matching with grayscale images. Latent factor analysis of human EEG data revealed distinct spatiotemporal dynamics across conditions, suggestive of flexible coordination between visual and frontoparietal networks that changes with task demands. Image cues evoked activation of occipitotemporal regions that spread to frontoparietal regions during delay periods, suggesting that identity information is projected into higher-order regions. Spatial cues showed the reverse pattern, with centroparietal activation preceding occipital activation after cue onset, likely leveraging retinotopic maps for spatial anticipation. The combined spatial-image cue recruited both pathways, revealing shared dynamics and suggesting that both processes can unfold in parallel.
Moving forward, we will apply multivariate pattern classification and representational similarity analysis to determine whether these dynamics reflect the transformation of sensory codes into a latent template representation.