Similarity-dependent memory integration of scene images

Poster Presentation 43.336: Monday, May 20, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Memory: Capacity, long-term memory


Simeng Guo1,2,3,4,5, Sheng Li1,2,3,4,5; 1School of Psychological and Cognitive Sciences, Peking University, Beijing, China, 2PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China, 3Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China, 4Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China, 5National Key Laboratory of General Artificial Intelligence

People often encounter novel events that resemble their previous experiences. An intriguing question is how similar representations interact during learning. Previous studies suggested that feature-based similarity produces systematic memory distortions. The present study examined memory integration induced by learning similar scenes. We used generative adversarial networks (GANs) to generate scene wheels from which the to-be-remembered scenes were selected. In an online experiment (n = 59), we evaluated the perceptual similarity of images along the scene wheels and selected scene pairmates (A1 and A2) with varying perceptual similarity. In three main experiments (n = 27, 27, 28), A1 and A2 were paired with different images (B1 and B2) to form competitive associations. Subjects learned these associations with the explicit knowledge that scenes paired with different images were always different (even if they looked similar). Importantly, learning of the competitive associations was temporally separated: “A1-B1” preceded its competitor, “A2-B2”. Across the three experiments, we found robust attractive memory distortion of A2 toward its highly similar competitor (A1). In Experiments 1 and 2 (Experiment 2 with increased training on A2-B2), the attraction effects were asymmetric: memory of A1 was not biased toward A2. Interestingly, in Experiment 3, in which training on A1-B1 was increased, the asymmetry disappeared: memories of A1 and A2 were biased toward each other. We also examined the consequences of these distortions: as expected, attractive distortion decreased the discriminability of highly similar associative memories. We unified these findings within a Hebbian learning framework, suggesting that (1) greater coactivation between B2 and A1, compared with the coactivation between B1 and A2, caused asymmetric integration, and (2) balancing the coactivations eliminated the asymmetry. Collectively, we showed that similarity-dependent integration of complex visual experiences can cause asymmetric memory distortion, with the degree of asymmetry depending on the level of coactivation during integration.
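To make the coactivation account concrete, the following is a minimal Python sketch, not the authors' implementation: the 24-degree pairmate separation, the coactivation values, and the learning rate eta are illustrative assumptions. Each stored scene position on the circular wheel is shifted toward its competitor in proportion to the coactivation between its own associate and that competitor, so unequal coactivations reproduce the one-sided attraction of Experiments 1 and 2, while balanced coactivations reproduce the mutual attraction of Experiment 3.

    def circ_diff(a, b):
        """Signed angular difference a - b on a 360-deg scene wheel, in (-180, 180]."""
        return (a - b + 180.0) % 360.0 - 180.0

    def hebbian_pull(A1, A2, coact_B2_A1, coact_B1_A2, eta=0.5):
        """Shift each stored scene toward its competitor in proportion to the
        coactivation between its own associate and that competitor."""
        mem_A2 = A2 + eta * coact_B2_A1 * circ_diff(A1, A2)  # A2 drawn toward A1
        mem_A1 = A1 + eta * coact_B1_A2 * circ_diff(A2, A1)  # A1 drawn toward A2
        return mem_A1, mem_A2

    # Hypothetical pairmates 24 degrees apart on the wheel (illustrative values).
    A1, A2 = 150.0, 174.0

    # Experiments 1-2: A1-B1 is learned first, so studying A2 strongly
    # reactivates A1 and B2 coactivates with A1; the reverse coactivation is weak.
    print(hebbian_pull(A1, A2, coact_B2_A1=0.8, coact_B1_A2=0.1))
    # -> (151.2, 164.4): A2 shifts ~10 deg toward A1, while A1 barely moves.

    # Experiment 3: extra A1-B1 training strengthens B1, so B1 is reactivated
    # alongside A2 and the two coactivations become balanced.
    print(hebbian_pull(A1, A2, coact_B2_A1=0.8, coact_B1_A2=0.8))
    # -> (159.6, 164.4): A1 and A2 are mutually attracted.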