Forgetting in long-term memory: Recognition does not induce the forgetting of similar objects

Poster Presentation 43.314: Monday, May 20, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Visual Memory: Encoding, retrieval

Jamal Williams1, Timothy Brady1; 1University of California, San Diego

Recent work has proposed that testing items in memory (e.g., a specific mug) causes forgetting of related items (other mugs) relative to a baseline of items from untested categories. Using behavioral studies and computational modeling, we challenge the view that active inhibition or suppression is responsible for the accuracy difference observed in recognition-induced forgetting (RIF) studies. Across six experiments, participants encoded items from 12 categories; half of the items from half of the categories were then restudied in a ‘practice’ session, where participants discriminated an item from the initial encoding session from a new foil of the same category in a two-alternative forced-choice (2AFC) task. Across all experiments we replicate the RIF effect, although we find that the standard analyses in this literature inflate its effect size by conflating response bias with the true effect. Critically, we also find that participants form strong memories for the foils used in the practice tests, and that memory for these items drives the purported RIF effect. We demonstrate that the classic REM model of memory predicts RIF without any modification and with no notion of inhibition or suppression: encoded foils increase the set size of studied categories, inducing cue overload. Importantly, this insight lets us reverse the RIF effect, producing worse memory for baseline items than for non-studied items from studied categories. Our results suggest that the differences observed in RIF studies stem not from inhibition or suppression but from an inaccurate baseline: practice foils are encoded and inadvertently increase the item count in studied categories, making them larger than the baseline, non-studied categories and impairing performance for all items in those categories, studied or not.
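
The cue-overload prediction can be illustrated with a short simulation. The sketch below is a minimal, hypothetical REM-style model, not the authors' actual implementation: items are feature vectors drawn from a geometric distribution, encoding stores noisy incomplete traces, recognition computes the mean likelihood ratio ("odds") of a probe across traces (following Shiffrin & Steyvers, 1997), and a category's members share a subset of features. All parameter values, the shared-feature category structure, and the condition sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# REM-style parameters in the range commonly used in the literature;
# values are illustrative, not fit to the experiments in this poster.
W = 20        # features per item vector
G = 0.4       # geometric parameter generating feature values
U = 0.7       # probability a feature is stored in an episodic trace
C = 0.7       # probability a stored feature is copied correctly
N_SHARED = 8  # features shared by members of the same category (assumption)
N_SIMS = 2000

def make_category(n_items):
    """Items in a category share their first N_SHARED feature values."""
    shared = rng.geometric(G, size=N_SHARED)
    items = rng.geometric(G, size=(n_items, W))
    items[:, :N_SHARED] = shared
    return items

def encode(item):
    """Store a noisy, incomplete copy of an item (0 = feature not stored)."""
    trace = np.zeros(W, dtype=int)
    stored = rng.random(W) < U
    correct = rng.random(W) < C
    trace[stored & correct] = item[stored & correct]
    lures = rng.geometric(G, size=W)          # erroneously copied feature values
    trace[stored & ~correct] = lures[stored & ~correct]
    return trace

def odds(probe, traces):
    """REM familiarity: mean likelihood ratio of the probe over all traces."""
    lrs = []
    for t in traces:
        match = (t > 0) & (t == probe)
        mismatch = (t > 0) & (t != probe)
        g_v = G * (1 - G) ** (probe[match] - 1)   # base rate of matched values
        lr = np.prod((C + (1 - C) * g_v) / g_v) * (1 - C) ** mismatch.sum()
        lrs.append(lr)
    return np.mean(lrs)

def run_condition(n_studied=6, n_encoded_foils=0):
    """2AFC accuracy for a studied item vs. a novel same-category foil."""
    correct = 0
    for _ in range(N_SIMS):
        items = make_category(n_studied + n_encoded_foils + 1)
        # Traces: the studied items plus any encoded practice foils.
        traces = [encode(it) for it in items[:-1]]
        target, new_foil = items[0], items[-1]
        correct += odds(target, traces) > odds(new_foil, traces)
    return correct / N_SIMS

# Encoded practice foils enlarge the category's trace set and dilute the
# match signal (cue overload), hurting memory for every item in the category.
print("baseline category:          ", run_condition(n_encoded_foils=0))
print("category with encoded foils:", run_condition(n_encoded_foils=3))
```

Under these assumptions, simulated 2AFC accuracy comes out lower in the category that gained encoded foils: each extra trace dilutes the average likelihood ratio for a studied probe while leaving a novel foil's odds roughly unchanged, so performance suffers for all items in the enlarged category, with no inhibition or suppression mechanism anywhere in the model.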