Conjunctive representation of colors and shapes in human occipitotemporal and posterior parietal cortices

Poster Presentation 63.305: Wednesday, May 22, 2024, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Perceptual Organization: Segmentation, shapes, objects

Benjamin Swinchoski1, JohnMark Taylor2, Yaoda Xu1; 1Yale University, 2Columbia University

How does the human brain jointly represent color and shape? In contrast to the traditional view that color and form are represented in separate visual areas and bound together via selective attention, a recent study using simple artificial shape stimuli and an orthogonal luminance-change task found that color and form were largely jointly encoded in the same brain regions (including regions defined by their univariate responses to color or shape), albeit in an independent manner, such that a classifier trained to discriminate shapes in one color could cross-decode the same shapes in a different color. The present study examines how attention affects feature representation when complex, real-world object shapes are encoded. We used three shapes (generated from side-view silhouettes of cars, helicopters, and ships) and three colors (red, green, and blue, equated in luminance and saturation). We obtained fMRI response patterns from 12 human participants as they viewed blocks of images, with each block containing exemplars of the same object and color that varied slightly in shape and hue. In different fMRI runs, participants attended to shape, to color, or to both features, and responded to repetitions in the attended feature dimension(s). Unlike the earlier study that examined simple shape features with an orthogonal task, we found, regardless of the attended feature, a drop in cross-color shape decoding relative to within-color shape decoding across occipitotemporal and posterior parietal cortices. These results indicate that nonlinear conjunctive coding of shape and color exists across human ventral and dorsal visual regions when attention is directed toward real-world object features.
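To illustrate the within- versus cross-color decoding comparison described above, here is a minimal sketch in Python, assuming a linear SVM applied to ROI voxel patterns via scikit-learn; the variable names, data shapes, and train/test split are hypothetical and do not reflect the authors' actual analysis code.

```python
# Sketch of within-color vs. cross-color shape decoding (MVPA).
# Assumption: a linear SVM on region-of-interest voxel patterns; all data here
# are simulated placeholders, not the study's fMRI data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def decode(train_X, train_y, test_X, test_y):
    """Train a shape classifier on one set of patterns and return test accuracy."""
    clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
    clf.fit(train_X, train_y)
    return clf.score(test_X, test_y)

# Hypothetical inputs: patterns[color] is an (n_blocks x n_voxels) array of
# ROI responses for blocks shown in that color; shape_labels[color] gives the
# shape (car / helicopter / ship) of each block.
rng = np.random.default_rng(0)
patterns = {c: rng.standard_normal((30, 200)) for c in ("red", "green", "blue")}
shape_labels = {c: np.repeat(["car", "helicopter", "ship"], 10) for c in patterns}

# Within-color decoding: train and test on blocks of the same color
# (a simple split here; a real analysis would cross-validate across runs).
within = decode(patterns["red"][::2], shape_labels["red"][::2],
                patterns["red"][1::2], shape_labels["red"][1::2])

# Cross-color decoding: train on shapes in one color, test on the same shapes
# in a different color.
cross = decode(patterns["red"], shape_labels["red"],
               patterns["green"], shape_labels["green"])

print(f"within-color accuracy: {within:.2f}, cross-color accuracy: {cross:.2f}")
```

Under independent (color-invariant) shape coding, cross-color accuracy should approach within-color accuracy; a reliable drop in cross-color decoding, as reported in this study, is the signature of conjunctive shape-color coding.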

Acknowledgements: This project is supported by NIH grant 1R01EY030854 to Y.X. J.T. is supported by NIH grant 1F32EY033654.