Behavioral and neural signatures of efficient sensory encoding in the tilt illusion

Poster Presentation 36.345: Sunday, May 19, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Spatial Vision: Models

Ling-Qi Zhang1,2, Geoffrey K. Aguirre1, Alan A. Stocker1; 1University of Pennsylvania, 2Janelia Research Campus, Howard Hughes Medical Institute

Human perception of orientation is shaped by spatial and temporal context. For example, in the tilt illusion, the orientation of the surround induces a distinctive bias pattern in the perceived orientation of the center (Gibson, 1937). At the neuronal level, stimulus context has been shown to alter the response properties of visual neurons (e.g., Benucci, Saleem & Carandini, 2013). Connecting neural coding to behavior, however, is difficult, as it requires specific assumptions about how orientation is represented (encoding) and interpreted (decoding). Here, we characterize the surround modulation of orientation encoding at both levels simultaneously. Across 1,200 trials during fMRI scanning, each of 10 subjects estimated the orientation of a briefly displayed grating (1.5 s, 1 Hz contrast modulation) by rotating a line probe after a short delay (3.5–4.5 s). The stimuli were presented within an annular surround of either non-oriented noise or a grating at one of two fixed orientations (±30 degrees off vertical). We extracted encoding accuracy, expressed as Fisher information (FI), from the bias and variance of the behavioral responses, exploiting a lawful relationship between these quantities and FI (Noel et al., 2021). We also obtained the neural population FI for each retinotopically organized visual area by fitting voxel-wise probabilistic encoding models to the fMRI data (van Bergen & Jehee, 2018). At both the behavioral and neural levels, sensory encoding in the non-oriented surround condition accurately reflects the natural scene statistics of orientation. In the presence of an oriented surround, encoding accuracy is significantly increased around the corresponding surround orientation, a pattern that also matches the conditional orientation statistics of natural scenes. Further, the effect of surround modulation on FI increases steadily along the ventral visual hierarchy. Our results are consistent with the notion that contextual modulation represents a form of efficient coding.
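
As a rough illustration of the behavioral analysis, the sketch below estimates FI from the bias and variance of orientation reports using the biased Cramér–Rao relation, J(θ) ≈ (1 + b′(θ))² / σ²(θ). This is an assumed form of the lawful relationship; the exact procedure in Noel et al. (2021) and in our analysis may differ, and the function and variable names are hypothetical.

```python
# Hedged sketch: estimating behavioral Fisher information from orientation
# estimation data. Assumes the biased Cramer-Rao relation
#   J(theta) ~ (1 + b'(theta))^2 / sigma^2(theta),
# where b(theta) is the estimation bias and sigma^2(theta) the variance.
import numpy as np

def behavioral_fisher_information(theta_true, theta_resp, n_bins=24):
    """Estimate J(theta) from presented (theta_true) and reported
    (theta_resp) orientations, both in degrees on a 0-180 deg cycle."""
    edges = np.linspace(0.0, 180.0, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])

    # Circular estimation error (180-degree period), wrapped to [-90, 90)
    err = (np.asarray(theta_resp) - np.asarray(theta_true) + 90.0) % 180.0 - 90.0

    bias = np.full(n_bins, np.nan)
    var = np.full(n_bins, np.nan)
    for i in range(n_bins):
        in_bin = (theta_true >= edges[i]) & (theta_true < edges[i + 1])
        if in_bin.sum() > 1:
            bias[i] = err[in_bin].mean()
            var[i] = err[in_bin].var(ddof=1)

    # Numerical derivative of the bias with respect to orientation
    dbias = np.gradient(bias, centers, edge_order=1)

    # Assumed biased Cramer-Rao relation
    J = (1.0 + dbias) ** 2 / var
    return centers, J
```

Under the efficient-coding framework, the square root of FI is predicted to be proportional to the prior probability of orientations in natural scenes, so the normalized √J profile obtained this way can be compared directly against measured scene statistics.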

Acknowledgements: This work was supported by a MindCORE Collaborative Research Grant from the University of Pennsylvania.