Gist-first Learning Facilitates Category Abstraction Across GAN-generated Scene Space

Poster Presentation 43.472: Monday, May 18, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Perceptual Training, Learning and Plasticity: Category learning

Hala Rahman1, Maya R. Newton1, Jeffrey D. Wammes1,2; 1Queen's University, Psychology Department, 2Queen's University, Centre for Neuroscience Studies

To organize information in our world, we constantly use shared features to separate individual exemplars into distinct categories. A hallmark of human intelligence is later generalizing this category learning to novel, relevant contexts. While current methods effectively capture this learning using simple stimuli, less is known about how we parse continuous stimuli that approach the complexity of real-world learning. Here, we use learning conditions of varying composition to test whether learning the most difficult aspects of a category promotes improved generalization. Following prior work (Son et al., 2024), category learning stimuli were developed by sampling points along a plane in the latent space learned by a generative adversarial network (GAN), describing the continuously varying features of indoor apartment scenes (e.g., kitchens). Participants were tasked with categorizing, with feedback, a subset of scenes from the innermost region of the plane. A “hard” group received disproportionately more exemplars close to the decision boundary than the “easy” group. At test, we assessed generalization in three ways: interpolation (untrained scenes in the innermost region of the plane); extrapolation (scenes from outside the innermost region); and parallel-plane (scenes from a different area of the latent space). Unsurprisingly, the “easy” group performed better, but unexpectedly this was especially true for the hardest exemplars, where the “hard” group had triple the experience. Even more strikingly, the “easy” group generalized better than the “hard” group on all test types but interpolation. This implies that the “easy” group extended their enhanced knowledge of coarse category distinctions to guide their judgments about even the hardest cases. Our work uses GAN-generated scenes to fill theoretical gaps in our understanding of visual category learning, revealing distinct patterns of learning and generalization depending on the learning regimen.