Gaze patterns modeled with an LLM can be used to classify autistic vs. non-autistic viewers

Poster Presentation 63.458: Wednesday, May 22, 2024, 8:30 am – 12:30 pm, Pavilion
Session: Attention: Exogenous, endogenous, gaze

Amanda J Haskins1, Thomas L Botch1, Brenda D Garcia1, Jennifer McLaren2, Caroline E Robertson1; 1Dartmouth College, 2Dartmouth Hitchcock Medical Center

Atypical visual attention is a promising marker of autism spectrum conditions (ASC). Yet it remains unclear what mental processes guide individual- and group-level gaze differences in autism. This is in part because eye-tracking analyses have focused on properties of external visual stimuli (e.g., object categories) and failed to investigate a key influence on gaze: the viewer’s own internal conceptual priorities. Disambiguating these influences is crucial for advancing gaze as an endophenotype for autism. Here, we tested the hypothesis that gaze differences in autism stem from abstract, conceptual-level information rather than from object-level categorical information. Adult participants (N = 40; 20 ASC) viewed real-world photospheres (N = 60) in VR. We characterized conceptual-level scene information using human captions, which we transformed into sentence-level embeddings using a large language model (BERT). For each participant, we obtained a “conceptual gaze model”: the linear relationship between that participant’s gaze and the conceptual features (BERT embeddings, dimensionality-reduced using PCA). To compare the influence of internal, conceptual-level information (“for sale”, “sports fan”) with that of external, image-based properties (“hat”), we also modeled gaze patterns using a vision model with a comparable transformer architecture (ViT). Using a support vector machine (SVM) iteratively trained to classify participant pairs from their conceptual gaze models, we found that individual classification of both autistic and non-autistic participants significantly exceeded chance (62% overall, p < 0.001); moreover, individual classification was higher for conceptual gaze models than for visual categorical models (t(39) = 4.9, p < 0.001). Next, using a binary SVM to evaluate group-level differences in autistic gaze patterns, we found higher group classification accuracy for left-out participants when training the SVM on conceptual rather than categorical gaze models (t(399) = 3.88, p < 0.001). These results suggest that gaze patterns are reliable within autistic individuals and that group-level gaze differences are driven especially by conceptual-level informational priorities.
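
For readers who want the pipeline concretely, the following is a minimal sketch of fitting one participant’s conceptual gaze model. It assumes gaze has been summarized as a fixation-density value per captioned scene region; the model checkpoint (bert-base-uncased), the mean-pooling step, the captions, the gaze values, and the PCA dimensionality are all illustrative assumptions, not the authors’ exact pipeline.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(captions):
    # Mean-pooled BERT token embeddings as sentence-level features.
    enc = tokenizer(captions, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc).last_hidden_state        # (n_captions, n_tokens, 768)
    mask = enc["attention_mask"].unsqueeze(-1)     # ignore padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

# One human caption per scene region, with a matching gaze measure (toy values).
captions = [
    "a bicycle with a for-sale sign outside a shop",
    "sports fans wearing team hats in the stands",
    "an empty kitchen counter near a window",
    "a dog waiting by the front door",
]
gaze_density = np.array([0.45, 0.30, 0.10, 0.15])  # fixation density per region

features = PCA(n_components=3).fit_transform(embed(captions))
fit = LinearRegression().fit(features, gaze_density)
conceptual_gaze_model = fit.coef_  # this vector characterizes the participant
```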
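
The pairwise individual-classification step could then look like the sketch below, assuming each participant contributes several conceptual-gaze-model fits (e.g., from scene splits) that a linear SVM learns to tell apart; the array shapes and random toy data are hypothetical stand-ins for real fits.

```python
import itertools
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_participants, n_fits, n_dims = 40, 10, 20
# models[p] holds several conceptual-gaze-model fits per participant (toy data).
models = rng.normal(size=(n_participants, n_fits, n_dims))

accuracies = []
for a, b in itertools.combinations(range(n_participants), 2):
    # Train an SVM to distinguish participant a's fits from participant b's.
    X = np.vstack([models[a], models[b]])
    y = np.array([0] * n_fits + [1] * n_fits)
    accuracies.append(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())

print(f"mean pairwise accuracy: {np.mean(accuracies):.2f}")  # ~chance on noise
```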
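
Finally, a sketch of the group-level analysis, assuming one conceptual gaze model per participant and a binary ASC label, evaluated on left-out participants via leave-one-out cross-validation; the features and labels here are toy values, not study data.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))   # one conceptual gaze model per participant
y = np.repeat([0, 1], 20)       # 0 = non-autistic, 1 = autistic (toy labels)

# Each fold trains on 39 participants and tests on the left-out participant.
acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print(f"left-out-participant accuracy: {acc:.2f}")
```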