A Novel Explanation of the Inverted Face Effect

Talk Presentation 22.24: Saturday, May 16, 2026, 10:45 am – 12:30 pm, Talk Room 2
Session: Face and Body Perception: Mechanisms

Garrison Cottrell1 (gary@ucsd.edu), Kira Fleischer1, Nikita Kachappilly1, Alexander Tahan1, Hsin-Yuan Lee1, Xavier Chen1; 1UCSD

Vision researchers often assert that when shown an inverted face, subjects “revert to feature processing.” But how can they still use feature processing on inverted features? Eyebrows, eyes, noses, and mouths are mono-oriented in everyday life. Furthermore, in the Thatcher illusion, subjects do not notice that the features are right-side up relative to the face. This is a mystery: If subjects revert to feature processing, how would they use inverted features in one case, yet fail to notice that they are upright in another? We suggest that, because of the log-polar mapping from the visual field to V1, in *both* cases the features are *not* inverted when they enter the cortex. In this representation, rotation is just a vertical shift: features remain in the same orientation whether the face is upright or inverted. So why are inverted faces difficult to recognize or remember? In face processing, small changes to configuration make the face appear to be someone else. Because the cortex is flat and not a torus, it cannot represent that 270 degrees is continuous with -90 degrees. When the face is upright, the nose is above the left eye; inverted, the nose is below the right eye, disrupting the configuration of the features. We test this hypothesis with a DCNN trained on log-polar-transformed faces. The model is disrupted by inverted faces, but it still recognizes 50% of familiar faces. A standard DCNN is nearly at chance, unlike humans. This effect is much smaller for inverted objects trained to be recognized at the basic level, where configuration doesn't matter. When the model is trained to be a dog expert, it is again disrupted by inversion (Diamond & Carey, 1986). Hence, we have a novel explanation of the inverted face effect, based on the transformation that occurs in V1.
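The rotation-as-shift property can be illustrated with a minimal numpy sketch (not the authors' model code; the sampling scheme and grid sizes here are illustrative assumptions). It resamples an image onto a log-polar grid with polar angle on the vertical axis, and checks that a 180-degree rotation of the input appears as a vertical shift of half the angular axis:

```python
import numpy as np

def log_polar(img, n_theta=64, n_rho=64):
    """Resample a square image onto a (theta, rho) grid, mimicking the
    retina-to-V1 log-polar mapping: rows = polar angle, columns = log radius.
    Nearest-neighbor sampling about the image center, for simplicity."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    # Log-spaced radii from 1 pixel out to the largest inscribed circle.
    rhos = np.exp(np.linspace(0.0, np.log(max_r), n_rho))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    out = np.empty((n_theta, n_rho))
    for i, t in enumerate(thetas):
        for j, r in enumerate(rhos):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[y, x]
    return out

# A smooth test "image": inverting it (a 180-degree rotation) should show up
# in log-polar space as a half-period vertical shift, not a flipped pattern.
img = np.add.outer(np.arange(65.0), np.arange(65.0))  # img[y, x] = y + x
lp_upright = log_polar(img)
lp_inverted = log_polar(np.rot90(img, 2))             # 180-degree rotation
shifted = np.roll(lp_upright, 64 // 2, axis=0)        # half-turn = half shift
print(np.abs(lp_inverted - shifted).max())            # near zero: same pattern
```

Note that `np.roll` wraps around the angular axis, i.e. it treats the representation as a torus; on the flat cortical sheet no such wraparound is available, which is exactly the discontinuity the abstract proposes as the source of the inversion cost.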

Acknowledgements: This work was supported by NSF CRCNS grant #2208362