Learning Relational Categories through Guided Comparisons

Poster Presentation 53.349: Tuesday, May 21, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Plasticity and Learning: Properties

Andrew Jun Lee1, Hongjing Lu1, Keith Holyoak1; 1University of California, Los Angeles

Visual scenes are not perceived as simple constellations of objects, but rather as objects in relation to one another. Humans can efficiently learn visual categories based on relational knowledge from just a handful of examples; however, the underlying learning mechanisms remain unclear. Here, we hypothesize that analogical comparison can facilitate the learning of visual categories defined by relations.

Method: We examined learning using the Synthetic Visual Reasoning Test (SVRT), a collection of 23 relational category-learning problems (Fleuret et al., 2011). Each problem consists of images of artificial, island-like shapes; positive exemplars instantiate a rule based on spatial relations, and negative exemplars do not. Participants categorized each successive test image into the correct set, with feedback on each trial, until an accuracy criterion was met. We conducted two experiments that varied the display format and coloring scheme of the SVRT images. In both experiments, images from previous trials remained on the screen as a visual record. In Experiment 1, these record images were either spatially segregated or intermixed by category membership. In Experiment 2, the record images were colored so as to differentiate objects according to their relational roles, thereby guiding analogical comparisons between images.

Results: Learning was more efficient when the record images were spatially segregated by category membership, yielding an average 53% reduction in the proportion of failed SVRT problems. Furthermore, when corresponding objects were assigned matching colors to facilitate their alignment across images, learning was more efficient than in the uncolored condition (a 33% reduction in failure proportion).

Conclusion: Human learning of visual relational categories depends on the ability to efficiently extract relational knowledge from visual inputs. Visual displays that facilitate relation extraction promote learning through analogical comparison.
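Note: The reported reductions can be read as relative changes in failure proportion. As a minimal formulation (the symbols below are our own labels, not taken from the abstract), let $p_{\text{base}}$ and $p_{\text{guided}}$ denote the proportions of SVRT problems on which participants failed to reach criterion in the baseline and guided-display conditions, respectively:

\[
\text{relative reduction} = \frac{p_{\text{base}} - p_{\text{guided}}}{p_{\text{base}}}
\]

On this reading, the 53% figure indicates that the segregated display roughly halved the failure proportion relative to the intermixed display.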

Acknowledgements: NSF Grant IIS-1956441