Eye metrics during fixation not modulated by spoken language comprehension

Undergraduate Just-In-Time Abstract

Poster Presentation 23.351: Saturday, May 16, 2026, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Undergraduate Just-In-Time 1

Will Epstein1, Grace Edwards1, Sophia Lipetzky2, Anna Seydell-Greenwald3, Elisha Merriam1, Laurentius Huber4, Ella Striem-Amit3, Christopher Baker1; 1National Institute of Mental Health, 2University of New Mexico, 3Georgetown University Medical Center, 4Massachusetts General Hospital

When listening to spoken language, congenitally blind individuals show increased activity in primary visual cortex (V1), potentially indicative of cross-modal plasticity. However, recent evidence shows that V1 is also modulated by spoken language in sighted individuals (Seydell-Greenwald et al., 2023), suggesting innate connections between V1 and areas associated with language processing. We replicated the activation of V1 during spoken language comprehension at 7 Tesla (7T) MRI to examine the depth-dependent profile of feedback to V1 from non-visual stimuli. To isolate non-visual activity in V1, we examined fixational eye movements to ensure that differential V1 activation could not be explained by condition-dependent eye movements. Following the methods of Seydell-Greenwald and colleagues, we presented sighted participants (n=10) with three runs of audio recordings of forward and reversed speech in 30-second counterbalanced blocks, both in the 7T scanner and in a purely behavioral setting. In the forward speech condition, participants heard short sentences (e.g., “Something that keeps food cold is a refrigerator”, “A three-sided shape is a globe”) and were instructed to report semantically incorrect statements. In the reversed speech condition, participants heard the same sentences played backwards and were asked to report a beep at the end of some statements. Participants were asked to maintain fixation at the center of the screen while listening, and their eye movements were recorded with an EyeLink 1000 in both the 7T scanner and the behavioral suite. Between the forward and reversed conditions, we found no difference in pupil size, gaze position density, or fixation drift over time. These results suggest that the V1 activation difference is not driven by differences in eye metrics across conditions, and they strengthen the interpretation that the depth-dependent profiles for forward and reversed spoken language in V1 can be attributed to feedback modulation from non-visual areas.
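To make the eye-metric comparison concrete, the following is a minimal sketch of a per-condition analysis like the one described above: summarizing each 30-second block by mean pupil size, gaze position spread (a proxy for gaze position density), and a linear drift rate of gaze distance from fixation, then comparing conditions with a paired test. All function names, the data layout, and the placeholder values are assumptions for illustration, not the authors' actual pipeline; a real analysis would parse EyeLink EDF/ASC recordings and handle blinks and artifacts before summarizing.

```python
# Illustrative sketch only: hypothetical data layout standing in for parsed
# EyeLink samples; not the analysis code used in the study.
import numpy as np
from scipy import stats

def block_metrics(gaze_x, gaze_y, pupil, fix_x=0.0, fix_y=0.0):
    """Summarize one 30-s block: mean pupil size, gaze spread, drift rate.

    gaze_x/gaze_y: gaze position per sample (deg from screen center)
    pupil: pupil size per sample (arbitrary EyeLink units)
    """
    dist = np.hypot(gaze_x - fix_x, gaze_y - fix_y)  # distance from fixation
    t = np.arange(dist.size)                         # sample index as time
    drift_slope = np.polyfit(t, dist, 1)[0]          # drift rate (deg/sample)
    return {
        "pupil": np.nanmean(pupil),
        "gaze_spread": np.nanstd(dist),              # proxy for position density
        "drift": drift_slope,
    }

rng = np.random.default_rng(0)
n_participants, n_samples = 10, 30 * 1000           # 30 s at 1000 Hz

# Placeholder samples per participant and condition (hypothetical values).
results = {"forward": [], "reversed": []}
for _ in range(n_participants):
    for cond in results:
        x, y = rng.normal(0.0, 0.5, (2, n_samples))
        pupil = rng.normal(1000.0, 50.0, n_samples)
        results[cond].append(block_metrics(x, y, pupil))

# Paired comparison of each metric between forward and reversed speech.
for metric in ("pupil", "gaze_spread", "drift"):
    fwd = [r[metric] for r in results["forward"]]
    rev = [r[metric] for r in results["reversed"]]
    t_val, p_val = stats.ttest_rel(fwd, rev)
    print(f"{metric}: t({n_participants - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```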