Temporal signatures of color are sufficient to decode color from a single MEG sensor
Poster Presentation 53.426: Tuesday, May 19, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Temporal Processing: Neural mechanisms, models
Quinn Battagliese1, Sabrina Ripsman1, Bevil Conway1; 1National Institutes of Health
Color can be decoded using multi-sensor magnetoencephalography (MEG), with average peak decoding accuracy of 51% (chance 12.5%) 120 ms after stimulus onset (Rosenthal et al., Current Biology, 2020; Hermann et al., Nature Communications, 2022). To what extent does decoding depend on the temporal versus the spatial pattern of activity? If temporal information is sufficient, decoding should be possible using data from a single sensor; if spatial information is exploited, then temporal structure should differ across sensors. We addressed these questions by analyzing the MEGco data set (8 colors, 375 trials/color/subject, 18 subjects) from Rosenthal et al. and Hermann et al., which describe the details of the color-viewing task and decoding strategy. In the original analysis, for each subject, the 25 sensors whose activity co-varied most significantly with the training labels were analyzed. Here we evaluate the extent to which decoding succeeds using data from each of these top 25 sensors by itself. Single-sensor decoding was successful in at least one sensor in all 18 participants, over 47 consecutive time bins (SD=21), with average peak decoding of 32% [30-38%, 95% CI] at 95 ms after stimulus onset [range: 85-270 ms]. Across subjects, 22 (SD=3) sensors yielded significant decoding accuracy lasting more than 15 consecutive time bins. These results show that the brain represents color in the temporal structure of the response. To determine the underlying temporal structure that enables single-sensor decoding, we analyzed the univariate responses measured by each sensor. A multivariate repeated-measures ANOVA showed significant differences (p<0.05) in the peak latencies of the colors across sensors in 14/18 subjects (F(7,168) range: 3.32-18.51) and significant differences in the peak magnitudes of the colors in 11/18 subjects (F(7,168) range: 2.77-7.51). These results show that, in most participants, multi-sensor MEG can detect differences in the spatial representation of colors across sensors.
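The single-sensor, time-resolved analysis described above can be sketched in code. Everything here is an illustrative assumption, not the MEGco pipeline: the synthetic color-specific response templates, the data shapes, and the nearest-centroid classifier stand in for the actual stimuli, recordings, and decoding strategy of Rosenthal et al. The sketch shows the core idea that, when different colors evoke responses with different temporal structure (e.g. peak latencies) at one sensor, a classifier applied bin by bin to that sensor alone can decode color above chance.

```python
# Hypothetical sketch of single-sensor, time-resolved color decoding.
# Synthetic data and the nearest-centroid classifier are illustrative
# assumptions, not the authors' actual pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_colors, n_trials, n_bins = 8, 40, 60  # illustrative sizes

# Simulate one sensor: each color evokes a response with a distinct
# peak latency, plus noise -> array of (trials x colors x time bins).
t = np.arange(n_bins)
latencies = 20 + 2 * np.arange(n_colors)          # peak bin, per color
templates = np.exp(-0.5 * ((t[None, :] - latencies[:, None]) / 4) ** 2)
data = templates[None, :, :] + 0.5 * rng.standard_normal(
    (n_trials, n_colors, n_bins))

def decode_per_bin(data, half=3):
    """Split-half nearest-centroid decoding in a sliding time window."""
    n_trials, n_colors, n_bins = data.shape
    train, test = data[: n_trials // 2], data[n_trials // 2 :]
    acc = np.zeros(n_bins)
    for b in range(n_bins):
        lo, hi = max(0, b - half), min(n_bins, b + half + 1)
        centroids = train[:, :, lo:hi].mean(axis=0)    # colors x window
        feats = test[:, :, lo:hi]                      # trials x colors x window
        # Predict the color whose training centroid is nearest to each
        # held-out trial's windowed response.
        d = ((feats[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1)
        pred = d.argmin(axis=2)                        # trials x colors
        truth = np.arange(n_colors)[None, :]
        acc[b] = (pred == truth).mean()
    return acc

acc = decode_per_bin(data)
print(f"peak accuracy {acc.max():.2f} at bin {acc.argmax()} (chance 0.125)")
```

In this toy setting, accuracy rises well above the 1/8 chance level only in the time bins where the color-specific templates diverge, mirroring the abstract's finding that decoding is confined to a window of consecutive significant time bins.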
Acknowledgements: This research was supported by the NIH IRP (NIH, 1ZIAEY000558 to BRC). Contributions of NIH authors are Works of the United States Government. Conclusions in this paper are those of the authors and do not necessarily reflect the views of the NIH or the U.S. Department of Health and Human Services.