Haptic contributions to visual-memory-guided grasping
Poster Presentation 43.435: Monday, May 18, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Action: Grasping, affordances
Logan McIntosh1,3, Robert Volcic1,2,3; 1New York University Abu Dhabi, 2Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, 3Center for Brain and Health, New York University Abu Dhabi
There is a large body of research examining how reaching and grasping movements to visually previewed locations change under memory conditions. However, not all movements are made purely under visual memory: when unscrewing the lid of a jar, for example, the hand holding the jar might provide haptic information that supplements the initial visual preview. Additionally, studies on visual-memory-guided movements have used discrete delay timings for movement initiation, typically 0 and 2 seconds (and/or 5 seconds), so it is unclear how performance changes between these delays. Here, participants grasped objects of different sizes, presented at different locations, with their right hand. Movements were completed in three conditions: haptic-only, vision-only, and visuo-haptic. In the vision-only condition, participants were given a 1-second viewing time before vision was again obscured. In the visuo-haptic condition, the viewing time began once the object was grasped by the left hand. Grasping movements were initiated by a tone presented at a random delay of up to 2 seconds after obscuration, increasing the extent to which the movement depended on memory-based information. In the haptic-only condition, participants’ vision was obscured for the entire block, and they first grasped the object with their left hand for 1 second before the starting tone. Maximum grip aperture, peak velocity, and movement duration were recorded as measures of performance. We replicated previous results: performance was better for visuo-haptic grasping than for vision-only grasping, which in turn was better than haptic-only grasping. Performance decreased with longer delays between obscuration and movement initiation, declining more slowly in the visuo-haptic condition than in the vision-only condition.
Acknowledgements: We acknowledge the support of the NYUAD Center for Artificial Intelligence and Robotics and the NYUAD Center for Brain and Health, funded by Tamkeen under the NYUAD Research Institute Awards CG010 and CG012.