Inverse dynamics induced vector fields explain object affordances
Poster Presentation 43.439: Monday, May 18, 2026, 8:30 am – 12:30 pm, Pavilion
Session: Action: Grasping, affordances
Aalap D. Shah1, Ilker Yildirim1; 1Yale University
Gibson conceptualized affordances as ‘action possibilities’ jointly determined by the agent’s capabilities and the structure of the physical environment. For instance, the affordance of ‘pick-ability’ applies to a human and an apple, but not to an ant and an apple. Despite the large body of empirical work accumulated in the half-century since its introduction, and the several competing conceptual frameworks attempting to organize these findings, a computational specification of affordances remains elusive. This knowledge gap poses a central challenge for vision science and related fields: Are affordances computable? Here, we present the first computational account of affordance, formalizing it as goal-conditioned, dynamics-based vector fields over objects. Given an object, the physical environment, and a desired object trajectory, we derive a vector field over the object via *inverse dynamics*. Subject to the constraints imposed by the agent (biomechanics of opposable digits and non-slipping contacts), the object (shape and mass), and the environment (surface configurations), this yields a distribution over contact points among the object, agent, and environment, together with the associated force vectors needed to reproduce the desired trajectory. We empirically evaluated our account via an experiment in which participants viewed an object moving along a trajectory across an environment of rigid surfaces and subsequently indicated two grasp locations on the object — one for the thumb and one for the index finger — to reproduce the depicted motion. Notably, each object-environment pairing was associated with multiple distinct trajectories. The results were clear and striking: participants systematically and flexibly adjusted contact points in response to changes in the objects, trajectories, and the configuration of surfaces; and their performance aligned with the predictions of our inverse-dynamics model.
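The core inverse-dynamics step described above can be illustrated with a minimal sketch. The snippet below treats the object as a point mass and recovers, from a desired trajectory alone, the net force the agent must supply at each time step (Newton's second law with gravity subtracted out). This is an assumption-laden toy, not the authors' model: the full account additionally handles contact constraints, grip biomechanics, and environment surfaces, and the function name here is hypothetical.

```python
import numpy as np

def required_forces(trajectory, mass, dt, g=np.array([0.0, -9.81])):
    """Toy inverse dynamics for a 2-D point mass (illustration only).

    Given a desired trajectory (T x 2 array of positions sampled every
    dt seconds), recover the net applied force at each interior time
    step: F_applied = m * a - m * g.
    """
    # Second-order finite difference approximates acceleration.
    accel = np.diff(trajectory, n=2, axis=0) / dt**2
    # The agent must supply whatever force gravity does not.
    return mass * accel - mass * g

# Example: holding an object stationary requires a force that exactly
# cancels its weight (m * g upward).
traj = np.zeros((5, 2))                      # object at rest at the origin
F = required_forces(traj, mass=0.2, dt=0.01)
# each row is approximately [0, 0.2 * 9.81] = [0, 1.962]
```

In the full account, forces like these would then be distributed over feasible contact points (e.g., thumb and index finger) subject to non-slip and biomechanical constraints, inducing the vector field over the object's surface.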
These results demonstrate that affordances can be quantitatively determined, opening a path to integrating affordances into the computational frameworks of visual cognition and physical interaction.
Acknowledgements: National Science Foundation (under CAREER Award No. BCS-2441520)