How does your brain know what’s real? Domain adaptation and fMRI approaches to revealing the neural mechanisms of reality monitoring

Poster Presentation 26.466: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Theory

Ali Ekhlasi1, Emil Olsson1, Michaela Klimova2, Xueyi Huang2, Angela Shen1, Nadine Dijkstra3, Jorge Morales2, Megan A. K. Peters1; 1University of California, Irvine, 2Northeastern University, 3University College London

The fundamental question of how the brain distinguishes internally generated experiences (imagination) from external reality (perception) is central to vision science. Higher-Order (HO) theories of consciousness propose that this distinction relies on a Reality Monitoring (RM) signal that evaluates first-order visual content. However, leading HO models disagree on the nature of this signal. Perceptual Reality Monitoring (PRM; Lau, 2019, 2022) posits a dedicated RM signal. The Higher-Order State Space (HOSS) framework (Fleming, 2020; Dijkstra & Fleming, 2023) proposes that the vividness of the internal representation serves as the primary RM cue. Rich HO theories, such as Higher-Order Representation of a Representation (HOROR; Brown, 2015), suggest that HO representations themselves carry content dimensions. To arbitrate among these models, and as part of an adversarial collaboration (TWCF-2021-22032), we introduce a novel analytical framework using Domain Adaptation (DA), a machine-learning technique that statistically aligns fMRI activation patterns across perception and imagination. Using two publicly available fMRI datasets in which participants perceived and imagined objects (Margolles et al., 2024; Horikawa & Kamitani, 2017), we applied DA-enhanced searchlights to discover brain regions that encode content similarly across perception and imagination. Critically, we also identified regions whose activity patterns diverged between the two states yet benefited strongly from DA's statistical alignment: their activity distributions became better aligned, but content decoding did not improve. This analysis allows us to identify regions that may specifically differentiate reality from imagination regardless of the content being represented. Our method suggests that these "pure RM" regions lie in the dorsomedial prefrontal cortex (dmPFC), dorsal parietal cortex, and left ventral occipital cortex.
These findings provide the basis for directly testing our core hypotheses: whether a dedicated, source-specific RM dimension (as predicted by PRM) exists independently of the visual content's vividness and features, and which brain regions may support such representations.
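The abstract does not specify which DA algorithm was used, but the core logic can be illustrated with a CORAL-style (correlation alignment) sketch: transform "imagination" patterns so their second-order statistics match "perception" patterns, then check the two diagnostics separately, distribution alignment versus content decoding. All variable names and the synthetic data below are hypothetical, for illustration only.

```python
import numpy as np

def coral_align(source, target, eps=1e-5):
    """CORAL-style alignment: whiten source covariance, re-color with target's.

    After the transform, the source patterns' covariance approximately
    matches the target patterns' covariance (second-order alignment).
    """
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def mat_pow(m, p):
        # matrix power via eigendecomposition (m is symmetric PSD)
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        return vecs @ np.diag(vals ** p) @ vecs.T

    return source @ mat_pow(cs, -0.5) @ mat_pow(ct, 0.5)

def cov_gap(a, b):
    """Frobenius distance between the covariances of two pattern sets."""
    return np.linalg.norm(np.cov(a, rowvar=False) - np.cov(b, rowvar=False))

# Synthetic stand-ins for voxel patterns in one searchlight (hypothetical):
# "imagination" patterns share structure with "perception" but are
# scaled and shifted, i.e. their distributions diverge across states.
rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 10
perception = rng.normal(size=(n_trials, n_voxels))
imagination = 2.5 * rng.normal(size=(n_trials, n_voxels)) + 1.0

# Center each state, then align imagination to perception.
perception_c = perception - perception.mean(axis=0)
imagination_c = imagination - imagination.mean(axis=0)
imagination_aligned = coral_align(imagination_c, perception_c)

gap_before = cov_gap(imagination_c, perception_c)
gap_after = cov_gap(imagination_aligned, perception_c)
print(f"distribution gap before DA: {gap_before:.2f}, after DA: {gap_after:.2f}")
```

In a region carrying a source-specific RM signal of the kind the abstract describes, this alignment step would shrink the distribution gap substantially while a cross-state content decoder trained on the aligned patterns would still fail to improve; the searchlight analysis repeats this comparison across the brain.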

Acknowledgements: This work was supported by the Templeton World Charity Foundation, Inc. (funder DOI 501100011730) under grant https://doi.org/10.54224/22032 (to JM, MAKP, & ND), and in part by the Canadian Institute for Advanced Research (CIFAR Fellowship in Brain, Mind, & Consciousness, to MAKP).