Using image-based metrics to predict gloss perception across different scenes
Poster Presentation 26.444: Saturday, May 16, 2026, 2:45 – 6:45 pm, Pavilion
Session: Color, Light and Materials: Material perception
Zoe R. Goll¹·², Jacob R. Cheeseman¹·², Roland W. Fleming¹·²; ¹Justus Liebig University Giessen, Germany; ²Center for Mind, Brain and Behavior, Universities of Marburg, Giessen, and Darmstadt
Material appearance depends strongly on distal scene factors such as viewpoint, illumination, and object shape, causing the discriminability of two surfaces to vary dramatically across scenes. The same surface reflectance can appear very different under certain viewing conditions, while two different materials may look perceptually equivalent under others. Here, we investigate whether perceived equivalence between materials across such scene changes can be predicted using image-based metrics. We created a set of images that systematically varied viewpoint, illumination, and object shape, rendering each scene with five levels of surface roughness. Within each scene, we calculated the predicted perceptual differences between neighbouring roughness levels using a visual difference predictor (VDP; Mantiuk et al., 2023). We averaged VDP values across each scene and selected the three “high-VDP” scenes with the largest differences and the three “low-VDP” scenes with the smallest. In this way, VDP was used to select scenes for which we expect high sensitivity to changes in surface roughness when other scene factors are held constant. We then asked whether those VDP values also predict asymmetric gloss judgements across varying scene conditions. Observers chose the object with the rougher material in a 2AFC task. A scene with a near-median VDP value served as the comparison stimulus. Psychometric functions were estimated for the three “high-VDP” and three “low-VDP” scenes, plus a control with symmetric scene parameters. We find that VDP predicts biases in gloss perception across different scenes well, even though it is based on measuring the visibility of differences within particular scenes. Matches for “low-VDP” scenes were rougher than those for “high-VDP” scenes and exhibited greater variability across participants.
These results suggest that image metrics can be a useful tool for estimating bias and sensitivity not just within scenes but also between them, providing a possible first step towards predicting material discriminability across scenes.
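The scene-selection step described in the abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the `vdp` function is a stand-in for a real visual difference predictor (e.g. that of Mantiuk et al., 2023), the scenes are mock data, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def vdp(img_a, img_b):
    # Placeholder for a real visual difference predictor: here just a
    # mean absolute pixel difference, returning a scalar score.
    return float(np.mean(np.abs(img_a - img_b)))

# Twelve mock "scenes", each rendered at five roughness levels
# (represented here as small grayscale images).
scenes = {
    f"scene_{i}": [rng.random((8, 8)) * (1 + 0.1 * level)
                   for level in range(5)]
    for i in range(12)
}

# Within each scene, score neighbouring roughness levels with the VDP
# and average those predicted differences across the scene.
mean_vdp = {
    name: float(np.mean([vdp(levels[k], levels[k + 1])
                         for k in range(len(levels) - 1)]))
    for name, levels in scenes.items()
}

# Rank scenes by averaged VDP; take the three smallest and three largest.
ranked = sorted(mean_vdp, key=mean_vdp.get)
low_vdp_scenes = ranked[:3]    # smallest predicted differences
high_vdp_scenes = ranked[-3:]  # largest predicted differences
```

With a real predictor in place of `vdp`, the two selected sets correspond to the scenes where sensitivity to roughness changes is predicted to be lowest and highest, respectively.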