Representational organization of dynamic and static visual features in the human brain

Poster Presentation: Tuesday, May 21, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Action: Representation

Hamed Karimi1, Jeff Wang1, Nicholas Arangio1, Stefano Anzellotti1; 1Boston College

Visual information comprises both static and dynamic properties. How is the representation of static and dynamic information organized in the brain? Previous work investigated this question using point-light displays and kinematograms, stimuli designed to isolate motion information. However, such stimuli might not capture the rich dynamics of realistic videos. Here, we separated static and dynamic visual features in realistic videos using two-stream deep convolutional neural networks and used these features to investigate how the representation of static and dynamic information is organized in the human visual system. Using fMRI and representational similarity analysis, we found that static and dynamic features 1) are encoded across all three visual streams (ventral, lateral, and dorsal) and 2) give rise to a parallel posterior-to-anterior topography spanning both the ventral and dorsal streams. Clustering brain regions by the features they encode revealed a common cluster for the posterior portions of both the ventral and dorsal streams and a separate cluster for their anterior portions. In contrast to the view that the ventral stream is exclusively dedicated to processing static features (and consistent with recent evidence using kinematograms), we find that representations in ventral temporal regions also correlate with dynamic features, even after regressing out the contribution of static features to rule out potential confounds.
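The analysis described above can be illustrated with a minimal sketch. This is not the authors' code: the stimulus counts, feature dimensionalities, and random data below are placeholder assumptions, and the final step shows one common way to regress a static-feature RDM out of a neural RDM before correlating the residuals with a dynamic-feature RDM.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 20 video stimuli, each with static and dynamic
# feature vectors (e.g., from the two CNN streams) and a voxel
# response pattern from some region of interest.
n_stim = 20
static_feats = rng.normal(size=(n_stim, 50))
dynamic_feats = rng.normal(size=(n_stim, 50))
neural_patterns = rng.normal(size=(n_stim, 100))

# Representational dissimilarity matrices (condensed vectors):
# pairwise correlation distances between stimulus representations.
rdm_static = pdist(static_feats, metric="correlation")
rdm_dynamic = pdist(dynamic_feats, metric="correlation")
rdm_neural = pdist(neural_patterns, metric="correlation")

# Standard RSA: Spearman correlation between model and neural RDMs.
rho_dyn, _ = spearmanr(rdm_dynamic, rdm_neural)

# Control analysis: regress the static RDM out of the neural RDM,
# then correlate the residuals with the dynamic RDM. A reliable
# correlation here cannot be explained by static-feature confounds.
X = np.column_stack([np.ones_like(rdm_static), rdm_static])
beta, *_ = np.linalg.lstsq(X, rdm_neural, rcond=None)
residual = rdm_neural - X @ beta
rho_dyn_partial, _ = spearmanr(rdm_dynamic, residual)
```

With 20 stimuli, each RDM is a condensed vector of 20·19/2 = 190 pairwise distances; the regression is an ordinary least-squares fit across those entries.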

Acknowledgements: This work was supported by the National Science Foundation CAREER Grant 1943862 to S.A.