FalseResMem: A Neural Network to Predict False Alarms in Image Memory

Poster Presentation 36.311: Sunday, May 17, 2026, 2:45 – 6:45 pm, Banyan Breezeway
Session: Visual Memory: Long-term memory

Anastasiia Mikhailova1, Wilma A. Bainbridge1; 1Department of Psychology, University of Chicago

Predicting false alarms, i.e., incorrectly identifying novel images as previously seen, is critical for understanding how visual memory fails and for quantifying memory bias and the reliability of memory decisions in both basic research and applied settings. Prior work has shown that people are similar in their propensity to false alarm to a given image, suggesting some images are consistently more ‘false alarmable’ than others. This finding has received little attention, but it allows us to develop models that explicitly capture these error patterns, providing a more complete and ecologically valid account of visual memory. Modeling false alarms separately is particularly important because hit rate and false alarm rate capture distinct components of recognition performance and can vary independently. In this work, we introduce a neural network, FalseResMem, that predicts image-level false alarm rates. The model extracts features from an ImageNet-pretrained ResNet50 backbone, concatenates them with the output of additional retrained AlexNet-like convolutional layers, and passes the combined features through retrained fully connected layers to learn the image properties that drive false recognitions. After training on MemCat, a large-scale image memorability dataset of objects, the model demonstrates consistent performance (test Spearman’s rank correlation on 10-fold cross-validation: 0.48 ± 0.13; max. 0.58) in anticipating which images are more likely to elicit false recognitions, complementing existing hit-rate-based memorability predictions (ResMem). Furthermore, the model successfully makes out-of-sample predictions for other types of images, such as scenes, art, symbols, and even images that elicit the visual Mandela Effect, demonstrating robust generalization across a diverse set of images. This contribution highlights the potential of integrating error-based metrics into visual memory modeling and opens new pathways for studying bias, confusion, and reliability in computational measures of image memory.
To facilitate broader adoption and experimental use, we provide an easy-to-use guide that supports researchers in using FalseResMem to obtain predicted false alarm rates for their own images.

Acknowledgements: The present work was supported by the National Science Foundation under Grant No. CAREER 2441710. The authors declare no competing interests.