The representational dynamics of visual expectations in the brain

Poster Presentation 26.418: Saturday, May 18, 2024, 2:45 – 6:45 pm, Pavilion
Session: Object Recognition: High-level features


Laurent Caplette1, Tetsu Kurumisawa1, Helen Borges1, Jose Cortes-Briones1,2,3, Nicholas B. Turk-Browne1; 1Yale University, 2Veterans Affairs Connecticut Healthcare System, 3Connecticut Mental Health Center

Visual perception is modulated by expectations arising from prior knowledge. Despite significant progress in recent decades, the neural mechanisms underlying this phenomenon remain unclear. Notably, the features in which expectations of real-world objects are represented in the brain are largely unknown: Are expected objects represented as detailed images, with both low- and high-level features, or only in terms of some subset of features? And which features play a part in the modulation of sensory processing once an object is seen? In this study, participants were shown cues followed by object images. Eight cues were each associated with one of eight object images (58% cue validity); these associations were not explicitly learned. Participants categorized objects as animate or inanimate while their brain activity was recorded using magnetoencephalography (MEG). We used representational similarity analysis (RSA) with a convolutional neural network (CNN) to assess the features in which expected and perceived objects were represented during the task. Perceived objects were represented first in low-level features on posterior sensors, and then in high-level features on anterior sensors. During this window, expected objects were represented in high-level features on anterior sensors. Interestingly, a low-level representation of expected objects was also observed during cue presentation, prior to object onset (starting around 300 ms after cue onset). These results suggest that expected objects are represented with both low- and high-level features, but that only high-level features play a role in the integration of expectations with sensory information. The fact that this high-level representation was visible only on anterior sensors, throughout all of object processing, suggests that this integration happens in high-level brain areas. The precise loci of these phenomena will be further investigated using source-level analyses.
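
For readers unfamiliar with the approach, the sketch below illustrates time-resolved RSA in the general form described in the abstract: a representational dissimilarity matrix (RDM) is computed from MEG sensor patterns at each time point and correlated with an RDM built from the activations of a CNN layer (early layers serving as a proxy for low-level features, late layers for high-level features). This is a minimal illustration under stated assumptions, not the authors' actual pipeline; the data shapes, variable names, and choice of correlation distance are all illustrative.

```python
# Minimal time-resolved RSA sketch (illustrative, not the authors' pipeline).
# Assumes condition-averaged MEG data of shape (n_conditions, n_sensors, n_times)
# and CNN activations of shape (n_conditions, n_units) for one layer.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix: 1 - Pearson r
    between every pair of condition patterns."""
    return pdist(patterns, metric="correlation")

def time_resolved_rsa(meg, cnn_features):
    """Correlate the MEG RDM at each time point with a fixed CNN-layer RDM.

    meg          : (n_conditions, n_sensors, n_times) sensor data
    cnn_features : (n_conditions, n_units) activations from one CNN layer
    Returns      : (n_times,) Spearman correlation time course
    """
    model_rdm = rdm(cnn_features)
    n_times = meg.shape[2]
    rsa_timecourse = np.empty(n_times)
    for t in range(n_times):
        neural_rdm = rdm(meg[:, :, t])  # sensor-pattern RDM at time t
        rsa_timecourse[t], _ = spearmanr(neural_rdm, model_rdm)
    return rsa_timecourse
```

In this scheme, the posterior/anterior contrasts reported in the abstract would correspond to running the analysis on subsets of sensors, and the expected-object analyses to building condition labels from the cues rather than the presented images.
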

Acknowledgements: Funding from NSF CCF 1839308 and an NSERC Postdoctoral Fellowship