Sparse null codes emerge and dominate representations in deep neural network vision models

Poster Presentation 56.337: Tuesday, May 21, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Perceptual Organization: Neural mechanisms, models

Brian S. Robinson1, Nathan Drenkow1, Colin Conwell2, Michael F. Bonner2; 1Johns Hopkins University Applied Physics Laboratory, 2Johns Hopkins University

Representations in vision-based deep neural networks and in biological vision are often analyzed in terms of the image features they encode, such as contours, textures, and object parts. In this work, we present evidence for an alternative, more abstract type of representation in deep neural networks, which we refer to as a “null code”. Through a series of analyses of the embeddings of a range of neural networks, including several transformer architectures and a recent high-performing convolutional neural network, we observe null codes that are both statistically and qualitatively distinct from the feature-related codes more commonly reported in vision models. These null codes are highly sparse, take a single unique activation pattern within each network, emerge abruptly at intermediate network depths, and are activated in a feature-independent manner by weakly informative image regions, such as backgrounds. We additionally find that these sparse null codes are approximately equal to the first principal component of the representations in middle and later layers of all analyzed models, a finding with major implications for methodological and conceptual approaches to relating deep neural networks to biological vision. In sum, these findings reveal a new class of highly abstract representations that emerge as major components of modern deep vision models: sparse null codes that appear to signal the absence of features rather than acting as feature detectors.
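As a minimal sketch (not the authors' analysis code), the principal-component observation above can be illustrated with synthetic data: if a single, highly sparse activation pattern is shared by many weakly informative patches, that pattern comes to dominate the first principal component of a layer's embeddings. All names and the synthetic data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, dim = 1000, 256

# Hypothetical "null code": one fixed, highly sparse activation pattern
# (8 of 256 units active), standing in for the pattern described in the abstract.
null_code = np.zeros(dim)
null_code[rng.choice(dim, size=8, replace=False)] = 5.0

# Synthetic patch embeddings: weakly informative ("background") patches
# carry the null code on top of noise; the rest carry only dense noise
# standing in for ordinary feature activity.
is_background = rng.random(n_patches) < 0.4
embeddings = rng.normal(size=(n_patches, dim))
embeddings[is_background] += null_code

# First principal component of the centered embedding matrix via SVD.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]

# Cosine similarity between PC1 and the sparse null-code pattern:
# close to 1 when the shared sparse pattern dominates the layer's variance.
cos = abs(pc1 @ null_code) / (np.linalg.norm(pc1) * np.linalg.norm(null_code))
print(f"|cos(PC1, null code)| = {cos:.3f}")
```

In this toy setting the shared sparse pattern accounts for far more variance than any feature dimension, so PC1 aligns closely with it; the abstract reports an analogous alignment in the middle and later layers of real vision models.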

Acknowledgements: This work was supported by funding from the Johns Hopkins University Applied Physics Laboratory