The text is all in their hands: A neurofunctional model for limb-to-text (VWFA) cortical recycling
Poster Presentation 53.466: Tuesday, May 19, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Face and Body Perception: Development, clinical
Sharon Gilaie-Dotan1; 1Bar Ilan University
In recent years Dehaene, Cohen, and colleagues proposed that the VWFA recycled evolutionarily pre-existing neural circuits in ventral visual cortex, likely those with visual preferences for T or Y junctions useful for letter recognition. While additional proposals have been put forward, Nordt and colleagues (2021) recently showed that the VWFA is built upon recycled prereading limb-selective neural substrates. Here I propose that prereading foveal visual expertise for limb movements (likely essential for the development of motor-action expertise) establishes a representation sufficient for efficient reading to develop. The prereading neural limb representation consists of three distinct (yet likely connected) components that are activated when we see another person: the hand or whole-body stick figure (HSF/WBSF), its “outfit” (skin, clothes, etc.), and its kinematics. The HSF/WBSF component is the only one clearly coded in the VWFA; it represents the instantaneous structure/configuration of the limb segments at a given timepoint. Its development is not necessarily dependent on vision (it may develop from somatosensory, motor, or vestibular inputs). The stick figure is composed of a coarse Gabor-like configuration (a mixture of frequencies and sizes corresponding to limb sizes, distances, etc.), with each element representing one segment in the HSF/WBSF. Importantly, the activated HSF/WBSF representation allows viewers to decipher all limb locations, overcoming occlusion (by clothes or by other body or hand parts). Critically, the representational space these stick figures span provides the building blocks of written languages. Specifically, the structure and spatial relations of finger segments allow the creation of any written letter combination (given a sufficient number of hands/limbs). This recycled representation is likely hierarchical in its transition from the physics of vision to the semantics of language, being shaped by top-down, language-specific statistics and information.
Future studies are needed to test the model’s validity and the multiplicity of predictions stemming from it.
Acknowledgements: This study was funded by ISF Individual Research Grant 1462/23 to SGD.