Can material-robust detection of 3D non-rigid deformation be explained by predictive processing through generative models?

Poster Presentation 26.434: Saturday, May 18, 2024, 2:45 – 6:45 pm, Pavilion
Session: Color, Light and Materials: Surfaces, materials

Shin'ya Nishida1,2, Mitchell van Zuijlen1, Yung-Hao Yang1, Jan Jaap van Assen3; 1Cognitive Informatics Lab, Graduate School of Informatics, Kyoto University, 2NTT Communication Science Labs, Nippon Telegraph and Telephone Corporation, 3Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology

The optical flow generated by non-rigid deformation of a 3D object changes dramatically depending on the object's optical material properties (e.g., matte, glossy, transparent). Nevertheless, a recent study (van Zuijlen et al., VSS 2022) showed that the sensitivity to detect deformation of a rotating object is similar for matte and glossy objects, and only slightly worse for transparent objects. What makes deformation perception robust to material changes? One possibility is that the visual system constructs a generative model for each object that correctly predicts how the image should change if the object moves rigidly, and detects deformation when the image deviates significantly from this prediction. Under this hypothesis, deformation-detection sensitivity should be impaired when unusual global movements of the surrounding lightfield produce additional image deviations from the model's predictions. In the experiment, the target object was an infinite-knot stimulus rotating around a vertical axis, rendered with one of four optical properties (dot-textured matte, glossy, mirror-like, or transparent). The object was deformed by an inward pulling force at seven levels of intensity (including a rigid condition). Each object's movie was rendered with the Maxwell Renderer under one of three lightfield conditions: static, imploding, or rotating. The object's background was black-masked so that the lightfield change was not directly visible to observers. Observers performed a 2-IFC task, choosing which of two stimuli (one always rigid) deformed more. The results do not support the generative-model prediction: the lightfield manipulation had no significant influence on the deformation-detection threshold, nor on the effect of material on the threshold. Rather, the results support the idea that the visual system effectively ignores the complex flow produced by material-dependent features (e.g., highlights, refractions) when detecting deformation.
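
To make the hypothesis under test concrete, here is a minimal sketch (not the authors' model) of a predictive-processing detector: a generative model predicts the optical flow that rigid rotation about a vertical axis should produce, and deformation is flagged when the observed flow deviates from that prediction by more than a criterion. The function names, the orthographic projection, the RMS residual measure, and the criterion value are all illustrative assumptions, not details from the study.

```python
import numpy as np

def predicted_rigid_flow(points, omega):
    """Hypothetical generative prediction: image-plane optical flow
    (orthographic projection onto the x-y plane) of 3D points rotating
    rigidly about the vertical (y) axis at angular speed omega (rad/s).
    For w = (0, omega, 0), velocity v = w x p = (omega*z, 0, -omega*x),
    so the projected flow per point is (u, v) = (omega*z, 0)."""
    x, y, z = points.T
    return np.stack([omega * z, np.zeros_like(y)], axis=1)

def deformation_detected(observed_flow, points, omega, criterion=0.05):
    """Flag deformation when the RMS residual between the observed flow
    and the rigid-motion prediction exceeds a criterion (arbitrary units)."""
    residual = observed_flow - predicted_rigid_flow(points, omega)
    return float(np.sqrt(np.mean(residual ** 2))) > criterion
```

On this account, moving the lightfield (imploding or rotating) would add flow components that the rigid-motion prediction cannot explain, inflating the residual even for a rigid object and thereby raising the deformation-detection threshold; the reported null effect of the lightfield manipulation is what argues against this account.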

Acknowledgements: Supported by JSPS Kakenhi JP20H05957 and a Marie Skłodowska-Curie Actions Individual Fellowship (H2020-MSCA-IF-2019-FLOW).