VSS, May 13-18

Color, Light and Materials: Light, materials, categories

Talk Session: Sunday, May 15, 2022, 2:30 – 4:15 pm EDT, Talk Room 1
Moderator: Karl Gegenfurtner, JLU, Giessen, Germany

Talk 1, 2:30 pm, 34.11

The role of texture summary-statistics in material recognition from drawings and photographs

Benjamin Balas1, Michelle Greene2; 1North Dakota State University, 2Bates College

Material depictions in artwork are useful tools for revealing image features that support material categorization. For example, artistic ‘recipes’ for drawing specific materials make explicit the critical information leading to recognizable material properties (Di Cicco et al., 2020), and investigating the recognizability of material renderings as a function of their visual features supports conclusions about the vocabulary of material perception. Here, we compared material categorization abilities between photographic stimuli (Sharan et al., 2014) and line drawings (Saito et al., 2015) in their original format and after texture synthesis. Specifically, our participants (N=52) completed a 4AFC material categorization task in which stimulus appearance was manipulated across participant groups via the Portilla-Simoncelli texture synthesis model. This manipulation allowed us to examine how categorization may be affected differently across materials and image formats when only summary-statistic information about appearance was retained. Our accuracy results revealed a three-way interaction (F(3,150)=4.80, p=0.003) between image format (photographs/drawings), material category (metal, stone, water, or wood) and appearance (original/texture-synthesized), driven by differential advantages for photographic vs. line drawing stimuli as a function of both material category and appearance. While line drawings supported better recognition of metal and wood across all stimulus manipulations, photographic water and stone were better recognized than drawings in original and synthetic versions, respectively. Do these patterns emerge purely from the image data? A linear SVM classifier assessed discriminability of the four materials in the eight layers of a deep convolutional neural network (AlexNet). Classification accuracy increased across layers, and photographs outperformed drawings. Category confusion rates were also more correlated between humans and the classifier for photographs. Together, these results demonstrate that line drawings can make materials more recognizable to humans than photographs in some cases, perhaps by isolating critical image features used by human vision that are not captured by pre-trained dCNNs.
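
A minimal sketch of the classifier analysis described above: extract activations from successive layers of a pre-trained AlexNet and train a linear SVM on them to separate the four material categories. This is not the authors' code; the image loading, the restriction to the convolutional feature modules, and the 5-fold cross-validation are illustrative assumptions.

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Pre-trained AlexNet; only the convolutional feature modules are used here.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

def layer_activation(img_path, n_modules):
    """Flattened activation after the first n_modules of alexnet.features."""
    x = preprocess(Image.open(img_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        for module in list(alexnet.features.children())[:n_modules]:
            x = module(x)
    return x.flatten().numpy()

def layerwise_svm_accuracy(image_paths, labels, n_modules):
    """labels: 0=metal, 1=stone, 2=water, 3=wood (hypothetical coding)."""
    X = np.stack([layer_activation(p, n_modules) for p in image_paths])
    return cross_val_score(LinearSVC(max_iter=10000), X, labels, cv=5).mean()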

Talk 2, 2:45 pm, 34.12

Asymmetric matching of color and gloss across different lighting environments

Takuma Morimoto1,2, Arash Akbarinia1, Katherine Storrs1, Hannah E. Smithson2, Karl R. Gegenfurtner1, Roland W. Fleming1; 1University of Giessen, 2University of Oxford

Natural lighting environments can differ dramatically depending on location, time, or weather conditions. Here we tested the degree to which humans can simultaneously judge the color and gloss of objects under diverse lighting environments. We selected 12 image-based environmental illuminations captured under different weather conditions (sunny and cloudy) and locations (indoor and outdoor) and applied two manipulations to each illumination to expand the diversity: (i) rotating the chromatic distribution by 90 degrees to generate chromatically atypical environments and (ii) scrambling phase in the frequency domain to make the lighting geometry unnatural. Under each of the 36 environments, we used a physics-based renderer to generate a test image from a single 3D mesh of a random everyday object that was assigned random color and specularity. In a different lighting environment, we separately rendered a comparison image containing a bumpy object. In each trial, test and comparison images were presented side-by-side on a computer screen, and participants were asked to adjust the color (in lightness, hue and chroma) and specularity of the comparison object until it appeared to be made of the same material as the test, shown in a different lighting environment. Results showed that hue settings were highly correlated with ground-truth values for natural and phase-scrambled lighting conditions, but the accuracy of the settings worsened in chromatically atypical environments. Chroma and lightness constancy were generally poor, but these failures correlated with simple image statistics such as mean chroma and mean lightness over the object region. Gloss constancy was limited, especially under diffuse lighting (e.g. cloudy environments). Constancy errors were highly consistent across participants. These results suggest that although color and gloss constancy hold well in many situations, some properties of lighting environments, such as chromatic unfamiliarity or diffuseness, can hamper stable visual judgements of material properties.

Acknowledgements: Authors thank Wiebke Siedentop for assisting data collection. TM is supported by a Sir Henry Wellcome Postdoctoral Fellowship from Wellcome Trust (218657/Z/19/Z) and a Junior Research Fellowship from Pembroke College, University of Oxford.
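
A minimal sketch of the simple image statistics mentioned above, mean lightness and mean chroma over the object region, computed in CIELAB. This is not the authors' code; the availability of an object mask from the renderer is an assumption.

import numpy as np
from skimage import color, io

def mean_lightness_and_chroma(image_path, object_mask):
    """object_mask: boolean array (H, W) marking the object's pixels."""
    rgb = io.imread(image_path)[..., :3] / 255.0
    lab = color.rgb2lab(rgb)                     # CIELAB: L*, a*, b*
    L = lab[..., 0]
    chroma = np.hypot(lab[..., 1], lab[..., 2])  # C*ab = sqrt(a*^2 + b*^2)
    return L[object_mask].mean(), chroma[object_mask].mean()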

Talk 3, 3:00 pm, 34.13

A Perceptual Evaluation of the StyleGAN2-ADA Generated Images of Translucent Objects

Chenxi Liao1, Masataka Sawayama2, Bei Xiao1; 1American University, 2Inria

Translucent materials have a wide variety of appearances due to variability in scattering, geometry, and lighting conditions. Most previous studies used rendered images as stimuli. However, it is challenging to acquire accurate physical parameters to render materials with realistic appearance. Here, we investigate to what extent Generative Adversarial Networks (GANs) trained on unlabeled photographs of translucent materials can produce perceptually realistic images and achieve diverse appearances, without knowledge of the physical parameters. We created a dataset of 3000 photographs of soaps with a variety of translucent appearances. These images were used to train StyleGAN2-ADA, a generative network designed to be trained on limited data with data augmentation. We then conducted human psychophysical experiments to measure the perceived quality of the model’s output. In Experiment 1, we sampled 250 images from the real photographs and another 250 from the generated images, and asked observers to judge whether the soap in each image was real or fake after a brief 300 ms presentation. In Experiment 2, observers rated the level of translucency of the material for the same 500 images on a 5-point scale. Ten observers sequentially completed both experiments. We find that observers can correctly judge the vast majority of real photographs (73% of the real images are correctly judged by at least 9 observers) but make substantial mistakes for the generated images (60% of the fake images are falsely judged to be real by at least 2 observers, and 7% by more than 5 observers). Second, observers can discriminate a range of translucency, from opaque to transparent, in the generated images, similar to that of the real photographs. Our results suggest that StyleGAN2-ADA has the potential to learn a representation of translucent appearances similar to that of humans, and that it is useful to explore its latent space to disentangle material-related features.
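
A minimal sketch of how the observer-agreement figures quoted above can be computed from a matrix of real/fake responses (10 observers x 500 images). This is not the authors' code; the variable names and matrix layout are assumptions.

import numpy as np

def agreement_summary(responses, is_real):
    """responses: (n_observers, n_images) array, 1 = 'judged real'; is_real: (n_images,) bool."""
    judged_real = responses.sum(axis=0)                   # observers saying "real" per image
    real_by_9_plus = np.mean(judged_real[is_real] >= 9)   # real photos judged real by >= 9 of 10
    fake_by_2_plus = np.mean(judged_real[~is_real] >= 2)  # generated images fooling >= 2 observers
    fake_by_6_plus = np.mean(judged_real[~is_real] > 5)   # generated images fooling > 5 observers
    return real_by_9_plus, fake_by_2_plus, fake_by_6_plus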

Talk 4, 3:15 pm, 34.14

Color constancy as a function of similarity in material appearance

Robert Ennis1, Karl Gegenfurtner1, Katja Doerschner1,2; 1Justus-Liebig Universitaet Giessen, 2National Magnetic Resonance Research Center, Bilkent University

All materials can take on a wide range of hues, but different image features contribute to the colors of different materials. Which features matter, and how they interact, has become clearer, but further insight might be obtained by understanding how observers make color matches across materials, for example matching the color of a matte cone to that of a metallic glossy sculpture. Some investigation has already been done on this topic (Xiao & Brainard, 2008; Giesel & Gegenfurtner, 2010; Granzier, Vergne & Gegenfurtner, 2014), showing that it holds much potential. For example, is color constancy better when materials are more similar in appearance (e.g., is it better when matching a glossy object to a glass object, rather than matching a matte object to a glass object)? With virtual reality headsets, we can simulate 3D environments in which observers can manipulate the colors of objects with different materials. We had 7 observers complete a web-based virtual reality experiment, in which they changed the color of either a matte or a glossy cone to match the color of a glass sculpture (red, green, blue, yellow). The illumination was a basic sun-sky model that could be blue or yellow (daylight axis). We tested whether the following image statistics predicted observer matches: mean color, color of the brightest region (excluding highlights), most saturated color, and most frequent color. We found that observer matches for the matte object showed some consistency across illuminations, but did not exhibit color constancy, and it was not clear which image statistics observers were using. In contrast, observers exhibited color constancy for glossy matches to the glass sculpture, and they used the color of the brightest regions of both objects, excluding the highlights, to make the match. Our results indicate that color constancy performance varies as a function of similarity in material appearance.
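
A minimal sketch of the four candidate image statistics tested above, computed over an object's pixels in CIELAB. This is not the authors' code; the highlight mask and the coarse quantization used for the most frequent color are illustrative assumptions.

import numpy as np
from skimage import color

def candidate_statistics(object_rgb, highlight_mask, quant_step=5.0):
    """object_rgb: (n, 3) floats in [0, 1] for the object's pixels; highlight_mask: (n,) bool."""
    lab = color.rgb2lab(object_rgb[np.newaxis, :, :])[0]   # (n, 3) CIELAB values
    L = lab[:, 0]
    chroma = np.hypot(lab[:, 1], lab[:, 2])
    non_hl = ~highlight_mask

    mean_color = lab.mean(axis=0)
    brightest = lab[non_hl][np.argmax(L[non_hl])]          # brightest pixel, highlights excluded
    most_saturated = lab[np.argmax(chroma)]
    # "Most frequent color": mode of coarsely quantized Lab values (one simple choice).
    quantized = np.round(lab / quant_step) * quant_step
    values, counts = np.unique(quantized, axis=0, return_counts=True)
    most_frequent = values[np.argmax(counts)]
    return mean_color, brightest, most_saturated, most_frequent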

Talk 5, 3:30 pm, 34.15

Spatial and temporal dynamics of effective daylight in natural scenes

Cehao Yu1, Maarten Wijntjes1, Elmar Eisemann2, Sylvia Pont1; 1Perceptual Intelligence Lab (π–Lab), Delft University of Technology, 2Computer Graphics and Visualization Group, Delft University of Technology

In vision science, it is presumed that daylight priors affect color, shape, and light perception. Earth-Sun geometry, weather, climate, and optical effects such as Rayleigh scattering cause daylighting variations in time, direction, and space, which are well described by hemispherical models such as the CIE daylight(ing) model. These models, however, exclude influences such as vignetting in, and (inter)reflections from, the environment. These can cause the effective daylight spectral power distribution (SPD) to vary from one time and position to another. Here we aimed to quantify such effective temporal and spatial variations of intensity, direction, color and diffuseness in daylit natural scenes via cubic spectral irradiance metering. We measured the diffuse (light density) and directed (light vector) light-field components in a sunlit rural scene over a day at 5-minute intervals in experiment one, and in the shade and in the light for 24 sunlit rural and urban scenes across multiple days in experiment two. In the first (temporal) experiment, we found that the chromaticities of the light densities were rather stable, but the light vectors varied from warm during dawn and dusk to cool around noon. The light vectors' directions varied systematically during daytime and were close to the sun path, while those for twilight pointed upwards. The second (spatial) experiment revealed that the chromaticities and intensities of the light vectors in the shade and in the light showed larger differences than those of the light densities. The diffuseness in the shade was 1.1 to 4 times larger, and the color temperature thousands of kelvins higher, than in the light. This study demonstrates how differential contributions of the effective diffuse and directed day-light-field components can be captured and how these may vary in many ways. Vision research into color, shape, and light perception in natural scenes needs to take such variations into account.

Acknowledgements: This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 765121; project "DyViTo".
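
A minimal sketch of how diffuse (light density) and directed (light vector) components can be derived from cubic irradiance measurements, in the spirit of Cuttle-style cubic illumination. This is not the authors' code; the diffuseness ratio below is one illustrative definition, not necessarily the metric used in the study.

import numpy as np

def cubic_light_field(E_pos, E_neg):
    """E_pos, E_neg: irradiances on the +x,+y,+z and -x,-y,-z cube faces (same units per channel)."""
    E_pos = np.asarray(E_pos, dtype=float)
    E_neg = np.asarray(E_neg, dtype=float)
    light_vector = E_pos - E_neg            # directed component per axis
    symmetric = np.minimum(E_pos, E_neg)    # per-axis symmetric (diffuse) part
    light_density = symmetric.mean()        # scalar diffuse-component estimate
    # Illustrative diffuseness: 0 = fully directed light, approaching 1 = fully diffuse.
    diffuseness = light_density / (light_density + np.linalg.norm(light_vector))
    return light_vector, light_density, diffuseness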

Talk 6, 3:45 pm, 34.16

The geometry of high-level colour perception reflects the amount of information provided by colours about objects.

Mubaraka Muchhala1, Nick Scott-Samuel1, Roland Baddeley1; 1University of Bristol

The optimal representation of a signal is determined by the task and the dominant noise. For high-level colour perception, the purpose of colour is to inform the observer about the world, and the dominant noise is due to failure of colour memory (Baddeley & Attewell, 2009). We estimated the properties of high-level colour perception by testing colour memory in CIE1931 chromaticity space. We observed categorical biases in colour memory across hue and saturation dimensions: memory was biased towards category foci corresponding to six basic colour terms (red, blue, green, pink, orange and grey; Berlin & Kay, 1969). We propose that these biases are due to a non-uniform prior over colours, which originates from the distribution of colours across objects in the environment. To identify the form of this prior, we trained a deep neural network to identify objects using only object colour. A single pixel was sampled from images of objects in ImageNet, and the model learnt to predict the probability of objects for a given pixel colour. We measured the amount of information provided by colour about objects across CIE1931 chromaticity space. Five colour categories were observed in the information geometry corresponding to basic colour terms, where category foci were more informative about objects than category boundaries. We replicated these results using the OpenImages V6 dataset, which produced a very similar categorical structure. The geometry of high-level colour perception reflected the information geometry of object colour space: colour memory was biased towards category foci, which were more informative about objects, and away from category boundaries, which were less informative about objects. These findings support the theory that the colour statistics of our environment form the basis of a non-uniform prior which directs perceptual processes towards the most informative colours, and explains the emergence of basic colour terms.
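
A minimal sketch of the modelling idea described above: a small network learns the probability of object classes given a single pixel's colour, and a colour's informativeness is scored as the reduction in entropy about the object label. This is not the authors' code; the RGB input, the network size, and the omitted training loop are illustrative assumptions (the study reports information across CIE1931 chromaticity space).

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelColourClassifier(nn.Module):
    """Maps a single pixel's colour to a distribution over object classes."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
    def forward(self, rgb):          # rgb: (batch, 3) in [0, 1]
        return self.net(rgb)

def information_about_objects(model, rgb, class_prior):
    """Entropy reduction (bits) about the object label from seeing a pixel of this colour."""
    with torch.no_grad():
        posterior = F.softmax(model(rgb), dim=-1)
    h_posterior = -(posterior * torch.log2(posterior + 1e-12)).sum(dim=-1)
    h_prior = -(class_prior * torch.log2(class_prior + 1e-12)).sum()
    return h_prior - h_posterior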

Talk 7, 4:00 pm, 34.17

Color category boundaries predict generalization of color-concept associations

Melissa A. Schoenlein1, Karen B. Schloss1; 1University of Wisconsin-Madison

People have systematic associations between colors and concepts, formed through experience. Several factors influence how people learn and generalize color-concept associations, including co-occurrence frequencies in the input and color typicality (Schoenlein & Schloss, VSS-2019). Rathore et al. (2019) proposed the category extrapolation hypothesis to account for how people form color-concept associations for colors not seen in the input: color-concept associations for an observed color (e.g., a purple) extrapolate to all other colors within the observed color’s category (e.g., all purples). This hypothesis implies that learned color-concept associations for novel concepts will mimic structure defined by color-category boundaries. Associations between a concept and an observed color should spread to other colors within its category, and then quickly decrease upon crossing the category boundary (e.g., the purple to blue category boundary). We tested this hypothesis in two experiments. Both experiments included three tasks: (1) co-occurrence exposure for novel concepts (Filk, Slub), (2) color-concept association judgments, and (3) color-category membership judgments. In Experiment 1, we compared two models predicting the extent to which color-concept associations for colors observed during category learning generalized to a series of unobserved colors straddling category boundaries. The “ΔE only” model included color distance from the observed color (ΔE). The “ΔE+category” model included ΔE plus a measure of color-category membership. Supporting the category extrapolation hypothesis, ΔE+category fit the data better than ΔE only (model comparison accounting for different parameter counts: p<.001), and color category was a significant predictor (p=.034). Generalization in color-concept associations dropped upon crossing the category boundary, which cannot be explained by color distance alone. In Experiment 2, individual differences in color-category boundaries predicted individual differences in generalization patterns (purples: r=.65; yellows: r=.46). This work provides the first evidence that category boundary structure shapes color-concept associations for novel concepts, emphasizing the importance of cognitive and perceptual factors in forming associations.

Acknowledgements: This work was supported in part by the National Science Foundation (BCS-1945303).
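
A minimal sketch of the “ΔE only” versus “ΔE+category” model comparison described above, fitting both regressions and comparing the nested models with a likelihood-ratio test that accounts for the extra parameter. This is not the authors' code; the column names and the use of ordinary least squares are illustrative assumptions.

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def compare_models(df: pd.DataFrame):
    """df columns (hypothetical names): association, delta_e, category_membership."""
    m_delta = smf.ols("association ~ delta_e", data=df).fit()
    m_full = smf.ols("association ~ delta_e + category_membership", data=df).fit()
    lr_stat = 2 * (m_full.llf - m_delta.llf)                 # likelihood-ratio statistic
    p_value = stats.chi2.sf(lr_stat, df=m_full.df_model - m_delta.df_model)
    return m_delta, m_full, lr_stat, p_value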