Reconstructing 2D images from brain activity evoked by random-dot stereograms
Poster Presentation 26.430: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Perceptual Organization: Features, parts, wholes, objects
Taiga Kurosawa1, Shuntaro C Aoki1,2, Misato Tanaka1,2, Yukiyasu Kamitani1,2; 1Kyoto University, 2Advanced Telecommunications Research Institute International
Random-dot stereograms (RDS) induce the perception of 3D structure through binocular disparity, despite containing no explicit shape information in the monocular 2D views. In this study, we investigated whether the neural representations evoked by these stereoscopic inputs can be translated into 2D visual reconstructions. We employed a cross-modal decoding approach: we first trained a decoder on fMRI signals collected while participants viewed natural 2D images, then applied this decoder, trained solely on standard 2D visual features, to fMRI data recorded while participants viewed shapes defined exclusively by RDS disparity cues. The decoder successfully reconstructed recognizable 2D images of the object shapes from the RDS-evoked brain activity. These reconstructions filtered out the high-frequency dot-pattern texture of the stimuli and instead visualized the holistic shape, closely resembling reconstructions obtained from standard 2D luminance-defined stimuli. These results demonstrate that brain activity evoked by binocular depth cues can be decoded into 2D images, suggesting that the visual system transforms disparity information into a representation compatible with the neural coding of 2D pictorial features.
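The cross-modal decoding logic described above can be illustrated with a minimal sketch. This is not the study's actual pipeline (which decodes deep visual features from fMRI and reconstructs images from them); it is a toy example on synthetic data, assuming a shared linear feature code between 2D-image trials and hypothetical "RDS" trials, with a ridge-regression decoder trained on the former and applied unchanged to the latter.

```python
# Toy sketch of cross-modal decoder transfer on synthetic data.
# Assumption: 2D-image and RDS trials share the same latent feature code,
# so a decoder fit on 2D-image trials generalizes to RDS trials.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels, n_feats = 200, 500, 50

# Hypothetical ground-truth encoding: features -> voxel responses
W_true = rng.standard_normal((n_feats, n_voxels))
feats_2d = rng.standard_normal((n_trials, n_feats))
fmri_2d = feats_2d @ W_true + 0.1 * rng.standard_normal((n_trials, n_voxels))

# Train a ridge decoder (voxels -> features) on 2D-image trials only
lam = 1.0
X, Y = fmri_2d, feats_2d
W_dec = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Apply the same decoder, unmodified, to simulated "RDS" trials
feats_rds = rng.standard_normal((20, n_feats))
fmri_rds = feats_rds @ W_true + 0.1 * rng.standard_normal((20, n_voxels))
pred = fmri_rds @ W_dec

# Per-trial correlation between decoded and true features
r = [np.corrcoef(p, f)[0, 1] for p, f in zip(pred, feats_rds)]
print(f"mean decoding correlation: {np.mean(r):.2f}")
```

In this toy setting the transferred decoder recovers the feature vectors of the unseen trial type, mirroring the logic of the abstract: if disparity-defined shapes engage the same pictorial feature code as luminance-defined images, a decoder trained only on the latter should succeed on the former.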
Acknowledgements: Supported by JSPS KAKENHI (JP25H00450, JP20H05705, JP20H05954), NEDO (JPNP20006), and JST CREST (JPMJCR22P3).