Asymmetries and spatial gradients in color and brightness induction
33.3014, Sunday, 17-May, 8:30 am - 12:30 pm, Banyan Breezeway
Romain Bachy1, Qasim Zaidi1; 1Graduate Center for Vision Research, SUNY Optometry
We tackle two issues in color/brightness induction with a new measurement method. First, the literature contains reports that dark induction is stronger than light induction, but these claims rest on experiments that did not separate adaptation effects from lateral interactions. Second, the magnitude of induction is generally reduced by spatially separating the test and surround, and by decreasing the physical contrast between them, so blurring the edge between test and surround should also reduce the magnitude of induction, but this prediction has not been tested experimentally. In our method, observers viewed a centrally fixated annulus (0.66° to 2.0°) surrounded by a 12° square. The edge between the annulus and its surround had either a square-wave (sharp) or sinusoidal (blurred) gradient. The color of the square was modulated for 0.5 seconds as a half-sinusoid between mid-Grey and one of the six poles of DKL space, roughly Light, Dark, Red, Green, Yellow, and Blue. When the annulus was held steady at mid-Grey, observers perceived an induced color/brightness shift towards the complementary pole. The magnitude of the perceived shift was measured as the amplitude of real modulation needed to null it, using a double-random 2AFC staircase procedure. Each block of trials alternated the surround modulation between complementary poles to keep adaptation centered at mid-Grey. This method estimates stronger induction effects than any other method. Across 2 observers, there were no consistent asymmetries in induction along any of the 3 color axes, suggesting that previously reported asymmetries reflect adaptation effects rather than lateral interactions. Spatial blur did not reduce the magnitude of induction, and in some cases seemed to increase the effect. The lack of effect of moderate blur on induction magnitude suggests that neural models pooling the outputs of spatial filters may be a better representation than models incorporating edge detection.
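The surround-modulation and pole-alternation scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' stimulus code: the function names, the 60 Hz frame rate, and the abstract signed "DKL-axis amplitude" encoding are all assumptions introduced here.

```python
import numpy as np

def half_sinusoid_profile(amplitude, duration_s=0.5, frame_rate_hz=60):
    """Per-frame modulation depth away from mid-Grey along one DKL axis:
    zero at stimulus onset, peaking at duration_s / 2 (a half-sinusoid).
    frame_rate_hz is an assumed display rate, not stated in the abstract."""
    n_frames = int(duration_s * frame_rate_hz)
    t = np.arange(n_frames) / frame_rate_hz
    return amplitude * np.sin(np.pi * t / duration_s)

def surround_sequence(pole_sign, axis_amplitude):
    """Signed modulation toward one pole (+1) or its complement (-1).
    Alternating pole_sign across trials within a block keeps the
    time-averaged drive, and hence adaptation, centered at mid-Grey."""
    return pole_sign * half_sinusoid_profile(axis_amplitude)
```

Summing a complementary pair of trials gives zero net modulation at every frame, which is the property the block design exploits to hold adaptation at mid-Grey while still measuring induction on each trial.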