A computational model for the concurrent retrieval of object and self-motion information from optic flow

Poster Presentation 26.302: Saturday, May 18, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Motion: Optic flow


Malte Scherff1, Markus Lappe1; 1University of Münster

In a scene that contains both self-motion and observer-independent movement, the optic flow is complex because the global flow pattern resulting from ego-motion is locally confounded by the object's motion. Important information can still be obtained, although various biases have been reported: the estimation of self-motion direction is affected by the direction of object movement, and the perception of the object's trajectory is affected by the ego-motion. While the underlying processes and their interactions remain largely unknown, some research proposes a sequential procedure. First, an initial heading estimate is made, followed by segmentation of flow areas that do not match this estimate. The heading estimate is then refined with these areas excluded. The refined estimate could then aid in accounting for self-motion and in disentangling the combined flow at the object's retinal location to retrieve object properties. However, other research contests the need for a prior heading estimate when estimating an object's trajectory. We present a computational model that computes retinotopic maps displaying the likelihood of heading directions given the local flow. The likelihood distribution serves as a reliable indicator of the presence of independent object motion. By omitting the corresponding parts of the flow, the objects' influence on heading estimation can be reduced. Furthermore, details about an object's retinal position and movement can be extracted from the distributions. In summary, the model offers concurrent estimation of both object properties and heading, without either process relying on the outcome of the other. The model replicates various aspects of human performance, including an initial rise in heading estimation error as object speed increases, followed by a reduction in error due to improved object detection. The perception of an object's trajectory is biased by the heading direction, in line with prior research.
This bias occurs without requiring the completion of the heading estimation process beforehand.
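The core idea behind such retinotopic heading-likelihood maps can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `heading_likelihood_maps`, the Gaussian tuning width `sigma_deg`, and the candidate-heading grid are illustrative assumptions. Under pure observer translation, each flow vector should point radially away from the focus of expansion (FOE) corresponding to the heading, so the angular deviation from that radial prediction yields a per-location likelihood over candidate headings:

```python
import numpy as np

def heading_likelihood_maps(points, flow, candidates, sigma_deg=10.0):
    """Per-location likelihoods over candidate headings from local flow.

    Assumes pure observer translation, so each flow vector should point
    radially away from the FOE that corresponds to the heading.
    points: (N, 2) retinal positions; flow: (N, 2) flow vectors;
    candidates: (M, 2) candidate FOE positions. Returns an (N, M) array
    of likelihoods based on the angular deviation between the observed
    flow direction and the radial direction each candidate predicts.
    """
    # Predicted radial directions: from each candidate FOE to each point.
    radial = points[:, None, :] - candidates[None, :, :]   # (N, M, 2)
    obs = np.arctan2(flow[:, 1], flow[:, 0])[:, None]      # (N, 1)
    pred = np.arctan2(radial[..., 1], radial[..., 0])      # (N, M)
    # Wrap the deviation to [-pi, pi] before applying Gaussian tuning.
    dev = np.angle(np.exp(1j * (obs - pred)))
    sigma = np.deg2rad(sigma_deg)
    return np.exp(-0.5 * (dev / sigma) ** 2)               # (N, M)
```

In this sketch, a global heading estimate falls out of summing log-likelihoods across locations, while locations whose likelihood at the winning heading stays low are flagged as candidate object regions; both readouts come from the same maps, mirroring the concurrent (rather than sequential) estimation described above.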