Possible Optimal Strategies for Orientation Coding in Macaque V1 Revealed with a Self-Attention Deep Neural Network (SA-DNN) Model

Poster Presentation 56.458: Tuesday, May 21, 2024, 2:45 – 6:45 pm, Pavilion
Session: Spatial Vision: Machine learning, neural networks

Xin Wang1, Cai-Xia Chen1, Sheng-Hui Zhang1, Dan-Qing Jiang1, Shu-Chen Guan2, Shi-Ming Tang1, Cong Yu1; 1Peking University, 2Justus-Liebig-Universität

The orientation tuning bandwidths of individual V1 neurons are not sufficiently narrow to support fine psychophysical orientation discrimination thresholds. Here we explore the possibility that V1 neurons, as a population, apply optimal orientation coding strategies to achieve substantially sharper orientation tuning. We trained a self-attention deep neural network (SA-DNN) model to reconstruct a Gabor stimulus image from neuronal responses obtained through two-photon calcium imaging in five awake macaques. Each imaged field of view (FOV) contains 1,400-1,700 neurons, whose responses to a Gabor stimulus serve as the model inputs. The SA-DNN model consists of a self-attention mechanism followed by a feedforward layer. The self-attention mechanism can reveal cooperative coding among neurons activated by the Gabor stimulus, yielding attention maps that display pairwise connections between neurons. The results suggest: (1) Neurons tuned to the stimulus orientation tend to have higher attention scores with all other neurons. The top 25% of orientation-tuned neurons with the highest mean attention scores best reconstruct the stimulus images, whereas the bottom 50% of neurons cannot do so. (2) The responses of the top 25% of neurons, after the self-attention transformation, generate significantly sharpened population orientation tuning functions, with amplitude increased 3-5 fold and bandwidth narrowed by approximately 30%. (3) With the self-attention component excluded, forward propagation through the model reconstructs only very coarse stimulus images. (4) The tuning sharpening displays an oblique effect: attention maps show higher variability at cardinal than at oblique orientations, producing greater sharpening of orientation tuning functions at cardinal orientations. These modeling results suggest that self-attention mechanisms optimize orientation coding in macaque V1 by reweighting neuronal responses according to their attention scores. The results provide new insights into V1 neuronal connectivity, illustrating how self-attention refines neuronal interactions and reweights responses to process orientation information.
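A minimal sketch of a readout of this kind, assuming a single-head scaled dot-product self-attention over per-neuron response tokens followed by a feedforward reconstruction layer; the neuron count, embedding size, image resolution, pooling, and loss below are illustrative assumptions, not the authors' actual architecture:

```python
# Minimal sketch (not the authors' implementation): single-head self-attention
# over per-neuron response tokens, followed by a feedforward layer that
# reconstructs a Gabor stimulus image. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SADNNSketch(nn.Module):
    def __init__(self, n_neurons=1500, d_model=64, img_size=32):
        super().__init__()
        # Embed each neuron's scalar response plus a learned per-neuron identity
        # into a d_model-dimensional token.
        self.neuron_embed = nn.Parameter(torch.randn(n_neurons, d_model) * 0.02)
        self.resp_proj = nn.Linear(1, d_model)
        # Query/key/value projections for scaled dot-product self-attention.
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Feedforward readout: pooled attended tokens -> reconstructed image.
        self.readout = nn.Sequential(
            nn.Linear(d_model, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size),
        )
        self.img_size = img_size

    def forward(self, responses):
        # responses: (batch, n_neurons) calcium responses to one Gabor stimulus.
        x = self.resp_proj(responses.unsqueeze(-1)) + self.neuron_embed  # (B, N, d)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Attention map: (B, N, N) pairwise scores between neurons; this is the
        # quantity the abstract analyzes (mean score per neuron, variability
        # across orientations).
        attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        attended = attn @ v                    # reweighted neuronal representation
        pooled = attended.mean(dim=1)          # (B, d) population summary
        img = self.readout(pooled).view(-1, self.img_size, self.img_size)
        return img, attn

# Usage sketch: train with a pixel-wise reconstruction loss on placeholder data.
model = SADNNSketch()
responses = torch.randn(8, 1500)               # placeholder neuronal responses
target = torch.randn(8, 32, 32)                # placeholder Gabor images
recon, attn_map = model(responses)
loss = F.mse_loss(recon, target)
loss.backward()
```

In this sketch the attention map is returned alongside the reconstruction so that pairwise neuron scores can be inspected directly, mirroring the analyses of mean attention scores and their orientation-dependent variability described above.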

Acknowledgements: This work was supported by the National Science and Technology Innovation 2030 Major Program (2022ZD0204600).