Context as a scaffold and details as bricks: Narrative understanding and updating information

Poster Presentation 43.316: Monday, May 20, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Memory: Encoding, retrieval

Jayoon Choi1, Seongyun Kim2, Minjae Jo1, Min-Suk Kang1,2; 1Sungkyunkwan University, 2Center for Neuroscience and Imaging Research, Institute for Basic Science

When individuals perceive the real world, they actively maintain a representation of the current event as an event model and update it as they take in new information. We investigated how the brain supports the maintenance and modification of the event model while participants comprehended narratives of four short audiovisual clips in an fMRI scanner. In the first session, participants watched the four clips with the sound removed (visual encoding). In the second session, they listened only to the sound extracted from the same original clips (auditory encoding) and were instructed to integrate the new auditory information with the visual stimuli from the previous session. After completing the narrative comprehension task, participants were surveyed outside the scanner about their experience with the tasks. The survey indicated that, across all stories, the second encoding and recall were easier than the first. To identify brain regions sharing a common neural response among participants, we computed the inter-subject correlation of BOLD responses for the visual and auditory encoding conditions separately. Across all stories, the neural responses of the TPJ were similar across participants. More importantly, to identify regions maintaining information of the event model, we calculated intra-subject correlations between the BOLD responses of the visual and auditory encoding conditions within each participant. We found a positive correlation for most stories in the TPJ and PCC, indicating that these regions within the DMN play a key role not only in story integration but also in updating event models. In summary, participants constructed a robust event model during auditory encoding, aided by the event model formed during visual encoding. Together, the neural results suggest that maintaining the necessary information in the TPJ is instrumental in forming a richer event model.
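
To make the two analyses concrete, the sketch below shows how inter-subject and intra-subject correlations of this kind are commonly computed. It is a minimal illustration, not the authors' pipeline: the array shape, ROI averaging, and leave-one-out averaging scheme are assumptions, and all variable names are hypothetical.

```python
import numpy as np

# Assumed (hypothetical) data layout: `bold` has shape
# (n_subjects, n_conditions, n_timepoints), holding an ROI-averaged
# BOLD time course per subject and encoding condition
# (0 = visual encoding, 1 = auditory encoding).

def inter_subject_correlation(bold, condition):
    """Leave-one-out ISC: correlate each subject's time course in one
    condition with the mean time course of the remaining subjects."""
    n_subjects = bold.shape[0]
    iscs = []
    for s in range(n_subjects):
        others = np.delete(bold[:, condition], s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(bold[s, condition], others)[0, 1])
    return np.array(iscs)

def intra_subject_correlation(bold):
    """Correlate the visual- and auditory-encoding time courses
    within each subject."""
    return np.array([
        np.corrcoef(bold[s, 0], bold[s, 1])[0, 1]
        for s in range(bold.shape[0])
    ])

# Synthetic example: 10 subjects, 2 conditions, 300 timepoints,
# with a shared stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
bold = shared + 0.8 * rng.standard_normal((10, 2, 300))
print(inter_subject_correlation(bold, condition=0).mean())
print(intra_subject_correlation(bold).mean())
```

In this framing, a high inter-subject correlation indicates a stimulus-locked response shared across viewers, whereas a positive intra-subject correlation between the two encoding conditions indicates that a region carries condition-spanning information of the kind attributed to the event model above.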

Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2022R1A2C2007363).