Automatically perceiving paths through a scene

Poster Presentation 53.348: Tuesday, May 19, 2026, 8:30 am – 12:30 pm, Banyan Breezeway
Session: Scene Perception: Categorization, memory

Giacomo Aldegheri¹, Başak Güvensin², Roland W. Fleming¹; ¹Justus Liebig University Giessen, ²Bilkent University

As we look at scenes in the real world, we can perceive not just “what is out there” but also “what is possible”. In an apparently automatic and effortless way, we can extract information about actions that are possible in a scene, locations that can be reached, or events that can happen. For example, we can quickly perceive the possible paths to reach a door, or whether an object at the edge of a table is likely to fall. It remains unknown, however, how we automatically determine the paths leading to these possible states, taking the dynamics and constraints of the environment into account. Here, we use simple maze-like scenes as a case study, employing a Go/No-Go task to probe which states are perceived as “leading up to” a given goal state. We instruct participants to press a button whenever they see a scene in which an agent has reached the end of the maze and is next to a target. Importantly, we do not describe the scene in these terms, but simply show participants the image to which they must respond. We find that participants make substantially more false alarms on scenes in which the agent is closer to the goal along the path defined by the maze walls. Other scenes, in which the agent is at the same Euclidean distance from the goal but would have to cross a wall to reach it, do not elicit false alarms. These results provide preliminary evidence that scene constraints (such as the walls of a maze) shape the perceived similarity of different scene states. We plan to extend this research to more complex scenes, and to domains beyond maze navigation, in order to elucidate the mechanisms by which we determine the paths leading up to a possible state.
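The key manipulation here contrasts the agent's straight-line (Euclidean) distance to the goal with its distance along the route the maze walls allow. The abstract does not specify how the stimuli were constructed; the following sketch is purely illustrative (a hypothetical grid maze, with names and layout of our own choosing) and shows how the two distances come apart, computing path distance by breadth-first search:

```python
from collections import deque
import math

# Hypothetical grid encoding of a maze-like scene: 0 = free cell, 1 = wall.
# A vertical wall (column 2) separates two corridors, with a single opening
# in the bottom row.
MAZE = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]

def path_distance(maze, start, goal):
    """Shortest number of steps from start to goal that respects the walls
    (breadth-first search on the grid); returns None if unreachable."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None

def euclidean_distance(a, b):
    """Straight-line distance between two cells, ignoring the walls."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Two agent positions at the same Euclidean distance from the goal, but on
# opposite sides of the wall, yield very different path distances:
goal = (0, 3)
same_side = (2, 3)    # two cells below the goal, in the same corridor
behind_wall = (0, 1)  # two cells to the left, but across the wall
for pos in (same_side, behind_wall):
    print(pos,
          "euclidean:", euclidean_distance(pos, goal),
          "path:", path_distance(MAZE, pos, goal))
# -> (2, 3) euclidean: 2.0 path: 2
# -> (0, 1) euclidean: 2.0 path: 10
```

On this toy grid, the two probe positions are equally far from the goal in Euclidean terms, yet only one is close along the walkable path, mirroring the contrast between the scenes that do and do not elicit false alarms.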

Acknowledgements: This research was funded by the European Research Council (ERC) Advanced Award ‘STUFF’ (ERC-ADG-2022-101098225).