Tracking Trust in Artificial Intelligence with EEG Measures of Visual Attention (N2pc) and Working Memory (CDA)
Poster Presentation 26.460: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Theory
Tobias Feldmann-Wüstefeld1, Eva Wiese1; 1Technische Universität Berlin
Visual attention and working memory are limited resources, making cognitive offloading to AI a potential means to enhance human performance. However, such benefits depend on the degree to which users trust the system. Here, we introduce a novel approach to measure trust in AI using two EEG components: the N2pc, indexing attentional deployment, and the CDA, indexing working-memory load. In Experiment 1, we recorded the N2pc in a visual search task in which participants either performed the task themselves (solo trials) or monitored a simple algorithm, framed as AI, performing it (coop trials). In solo trials, participants reported the orientation of a target line. In coop trials, the AI suggested a response of varying accuracy, and participants confirmed or overruled it. Because N2pc amplitude reflects the amount of attention allocated to a target, offloading should decrease its amplitude. Indeed, the N2pc was smaller when participants worked with a highly reliable AI than with an unreliable one, indicating greater willingness to offload. N2pc amplitudes were highest in solo trials, suggesting that participants never fully relinquished attentional control to the AI. In Experiment 2, we recorded the CDA in a change detection task. Solo trials required monitoring one hemifield only. In coop trials, participants memorized items from one hemifield while the algorithm (again framed as AI) was assigned the other. The CDA reflects the asymmetry of encoded items; reduced asymmetry therefore indicates less offloading. CDA amplitudes were generally smaller in coop than solo trials, showing that participants continued to encode items from both sides rather than relying fully on the AI. When AI reliability was low, CDA amplitude was even further reduced, indicating increased monitoring of the AI’s side due to distrust. Together, these findings demonstrate that N2pc and CDA amplitudes provide sensitive, implicit neural markers of trust in AI.
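Both components discussed above are conventionally quantified as contralateral-minus-ipsilateral difference waves at lateral posterior electrodes (e.g., PO7/PO8), averaged over a component-specific time window. The following is a minimal sketch of that computation on synthetic data; the sampling rate, time windows, and all variable names are illustrative assumptions, not the authors' analysis pipeline:

```python
import numpy as np

# Hypothetical example: quantify N2pc/CDA as the contralateral-minus-
# ipsilateral difference wave, averaged over a component time window.
# All shapes and voltage values below are synthetic placeholders.

srate = 250                                  # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / srate)      # epoch: -200 to 800 ms

rng = np.random.default_rng(0)
n_trials = 100
# Simulated single-trial voltages (µV) at a posterior electrode pair,
# sorted by whether the electrode was contralateral or ipsilateral to
# the memorized/attended hemifield on each trial.
contra = rng.normal(-1.0, 2.0, (n_trials, times.size))
ipsi = rng.normal(0.0, 2.0, (n_trials, times.size))

# Difference wave: contralateral minus ipsilateral, averaged over trials.
diff_wave = (contra - ipsi).mean(axis=0)

def mean_amplitude(wave, t, t_start, t_end):
    """Mean amplitude of a waveform within [t_start, t_end] seconds."""
    mask = (t >= t_start) & (t <= t_end)
    return wave[mask].mean()

# Typical windows: N2pc ~200-300 ms post-stimulus; CDA spans the
# retention interval (here, illustratively, 400-800 ms).
n2pc = mean_amplitude(diff_wave, times, 0.200, 0.300)
cda = mean_amplitude(diff_wave, times, 0.400, 0.800)
print(f"N2pc: {n2pc:.2f} µV, CDA: {cda:.2f} µV")
```

In this scheme, a smaller (less negative) difference-wave amplitude corresponds to the reduced attentional deployment or reduced encoding asymmetry that the abstract interprets as offloading.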
Acknowledgements: This research was supported by the Alexander von Humboldt Foundation and the Deutsche Forschungsgemeinschaft (DFG).