State-space modeling reveals separable cognitive color and shape representations in macaques despite lifelong salience of color-shape conjunctions

Poster Presentation 26.453: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Color, Light and Materials: Affect, cognition

Spencer Loggia1,2, Bevil Conway1,3,4; 1National Eye Institute, 2Brown University, 3National Institute of Mental Health, 4University of Maryland, College Park

A longstanding debate is whether the brain employs separate representations for color and shape. People can contemplate an object’s color independently of its shape, but this ability may be learned, shaped by language and by a lifetime of experience in which the colors of objects can be manipulated independently of their shapes. Neurophysiological work has been invoked to support both conjoint “multiplexing” and separate, parallel processing of color and shape, yet neural tuning cannot be mapped directly onto cognitive representations. Here we asked whether macaque monkeys reared interacting with 2-D digital objects acquired separable or conjoined cognitive codes. Over four years, four animals performed a touchscreen foraging task. On each trial they viewed four objects drawn at random from 1,296 possibilities and were rewarded according to the one they touched. Objects were defined in a perceptually uniform 4-D space (color: u*, v*; shape: spikiness, animacy), with every shape rendered in every color. Each color–shape combination was assigned a reward value that varied smoothly over this space. We analyzed choices with a two-stage cognitive state-space model (cSSM) that (i) learns a diffeomorphic warp of color–shape space to capture perceptual non-uniformities and (ii) discovers category basis functions whose dynamics implement an interpretable learning rule. The cSSM shows that behavior is explained not by memorization of individual color–shape pairs but by a low-dimensional category geometry that evolves with experience. This geometry is initially dominated by shape and progressively incorporates color (avg. negative log-likelihood of behavior: category model, 0.43 ± 0.04; memorization model, 0.58 ± 0.05). At plateau, the category geometry shows separable color and shape bases (avg. covariance magnitude 0.08 ± 0.06). By contrast, models trained to maximize reward without regard to behavior learn conjunctive color–shape categories (avg. covariance magnitude 0.34 ± 0.11).
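A minimal sketch of what stage (i) of such a model can look like: a smooth, invertible (diffeomorphic) warp of a 2-D perceptual plane built as a contractive residual map, x → x + ε·tanh(Wx), which is guaranteed invertible when ε‖W‖ < 1. This is our illustrative construction, not the authors' cSSM; W, ε, and the tanh nonlinearity are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 2))
eps = 0.4 / np.linalg.norm(W, 2)   # keep the residual contractive: eps * ||W||_2 = 0.4 < 1

def warp(x):
    # Smooth perturbation of the plane; invertible because the residual is contractive.
    return x + eps * np.tanh(x @ W.T)

def unwarp(y, n_iter=50):
    # Fixed-point inversion x = y - eps * tanh(W x); contraction guarantees convergence.
    x = y.copy()
    for _ in range(n_iter):
        x = y - eps * np.tanh(x @ W.T)
    return x

pts = rng.uniform(-1, 1, size=(100, 2))   # e.g. points in the (u*, v*) color plane
recon = unwarp(warp(pts))
print(np.max(np.abs(recon - pts)))        # ~0: the warp round-trips, i.e. it is invertible
```

Such a warp can absorb perceptual non-uniformities (regions of the stimulus space that are compressed or expanded relative to discriminability) before category bases are fit in the warped coordinates.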
These results indicate that the primate brain computes separate color and shape representations that underlie flexible cognition.
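To make the separable-versus-conjunctive distinction concrete, here is a toy way to quantify it: an ANOVA-style variance decomposition of a value function over a color × shape grid, where the interaction fraction is near zero for an additive (separable) code and large for a code that depends on color–shape conjunctions. This index is our construction for illustration, not necessarily the covariance measure used in the cSSM; the grid and value functions are invented.

```python
import numpy as np

# Toy grid: 6 colors x 6 shapes (the real space is 4-D and continuous).
colors = np.linspace(-1, 1, 6)
shapes = np.linspace(-1, 1, 6)
C, S = np.meshgrid(colors, shapes, indexing='ij')

def interaction_fraction(V):
    """Fraction of variance in V not explained by additive color and shape effects."""
    grand = V.mean()
    color_eff = V.mean(axis=1, keepdims=True) - grand   # main effect of color
    shape_eff = V.mean(axis=0, keepdims=True) - grand   # main effect of shape
    resid = V - grand - color_eff - shape_eff           # color-shape interaction
    return resid.var() / V.var()

V_sep = np.sin(2 * C) + 0.5 * S**2    # separable: additive in color and shape
V_conj = np.sin(2 * C * S)            # conjunctive: depends on the color-shape product

print(interaction_fraction(V_sep))    # ~0: fully separable
print(interaction_fraction(V_conj))   # ~1: dominated by the conjunction
```

The same logic scales to learned category bases: a basis whose value surface has negligible interaction variance can be read out for color and for shape independently, which is the computational signature of separable codes.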

Acknowledgements: Supported by NIH IRP (1ZIAEY000558 to BRC), NSF (0918064 to BRC) and NIH (R01 EY023322 to BRC). Contributions of NIH authors are considered Works of the United States Government. The findings and conclusions are those of the authors and do not necessarily reflect the views of the NIH or the U.S. Department of Health and Human Services.