Plasticity and Learning 2

Talk Session: Tuesday, May 23, 2023, 8:15 – 9:45 am, Talk Room 1
Moderator: Biyu He, NYU

Talk 1, 8:15 am, 51.11

Mapping the invariance properties of perceptual priors in one-shot perceptual learning

Ayaka Hachisuka1, Jonathan D. Shor1, Xujin C. Liu2, Eric K. Oermann3, Biyu J. He1; 1New York University Grossman School of Medicine, 2New York University Tandon School of Engineering, 3New York University Langone Health

Prior knowledge powerfully facilitates object recognition. In a dramatic example of one-shot perceptual learning, a previously unrecognizable, degraded image of a real-world object becomes instantly recognizable after exposure to the corresponding original image. We previously showed that neural activity changes driven by one-shot perceptual learning are widespread across the ventral visual stream, extending into frontoparietal (FPN) and default-mode (DMN) networks. However, it remained unclear where the image-specific prior knowledge is encoded in the brain and what type of information is stored. To address these questions, we modified the original images based on receptive field sizes and orientation tuning properties of object representation across the ventral visual hierarchy. We then tested for potential generalization of one-shot perceptual learning to systematically map out the invariance properties of prior knowledge encoded in the brain. First, changing the image size from the original 12 DVA to 6 or 24 DVA, thereby altering perceptual information available to the small receptive fields of early visual cortex, did not change the perceptual learning effect. However, shifting the image position by 6 DVA to the left or right, and therefore strongly influencing but not eliminating inferotemporal (IT) neural coding, significantly diminished the effect without abolishing it. Similarly, rotations and inversions (targeting orientation invariance properties emerging within IT cortex) significantly diminished without abolishing the learning effect. These results suggest that priors are likely encoded in the inferotemporal cortex, wherein rotation invariance emerges, although they do not rule out the possible involvement of higher order regions. Interestingly, we found no change in the learning effect when we biased image properties to selectively activate parvo- or magnocellular pathways, suggesting that either pathway alone can encode the perceptual prior. Together, we show that encoding of priors in one-shot perceptual learning depends on regions involved in whole-object representations.
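For illustration only, a minimal Python sketch (not the authors' stimulus code) of the image manipulations described above: resizing between 12, 6, and 24 DVA, shifting by 6 DVA, and rotating or inverting. The pixels-per-degree factor PPD and the specific rotation angle are placeholder assumptions.

from PIL import Image, ImageOps

PPD = 40  # assumed pixels per degree of visual angle (DVA); placeholder value

def make_variants(original: Image.Image) -> dict:
    """Generate size, position, and orientation variants of an original image."""
    w, h = original.size  # original image spans 12 DVA
    variants = {"original_12dva": original}
    # Size changes: 12 DVA -> 6 or 24 DVA (scale by 0.5 or 2.0)
    variants["size_6dva"] = original.resize((w // 2, h // 2))
    variants["size_24dva"] = original.resize((w * 2, h * 2))
    # Position shifts: 6 DVA to the left or right on a same-sized canvas
    for label, sign in (("shift_left", -1), ("shift_right", +1)):
        canvas = Image.new(original.mode, (w, h))
        canvas.paste(original, (sign * 6 * PPD, 0))
        variants[label] = canvas
    # Orientation changes: rotation and inversion
    variants["rotated"] = original.rotate(90)        # rotation angle is a placeholder
    variants["inverted"] = ImageOps.flip(original)   # vertical (top-bottom) flip
    return variants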

Acknowledgements: Funding source: NSF BCS-1926780

Talk 2, 8:30 am, 51.12

Masking that disrupts late phases of visual processing eliminates location specificity of visual perceptual learning

Yusuke Nakashima1, Yuka Sasaki1, Takeo Watanabe1; 1Brown University

Visual perceptual learning (VPL) is characterized by location specificity, in which learning is specific to the trained retinal location. Some studies have suggested that the location specificity of VPL reflects the involvement of early visual areas, which have smaller receptive fields than higher areas. Other studies have suggested that location specificity results from higher-level involvement. To test which possibility is more likely, we conducted a psychophysical experiment using two types of masking. Previous neurophysiological studies have suggested that forward masking disrupts early phases of visual processing, while backward masking disrupts later processing phases. Since feedback occurs at later phases, disruption of location specificity only by backward masking would support the hypothesis that feedback is involved in location specificity. Conversely, no disruption of location specificity by backward masking would support the hypothesis that location specificity arises in early visual areas. During six training days, participants performed an orientation discrimination task at one location with either forward or backward masking. In the forward masking group, a noise mask was presented before a grating with a constant orientation. In the backward masking group, a ring-shaped mask was presented after the grating (metacontrast masking). The strength of the masking effect was equated across participants in the two masking groups by adjusting the luminance contrast of the mask stimuli before training. In the pretest and posttest, participants performed the orientation discrimination task at the trained and untrained locations without masking. We found that performance improved only at the trained location in the forward masking group, whereas it improved equally at the trained and untrained locations in the backward masking group. These results suggest that temporally late processing, including feedback processing, contributes to the location specificity of VPL.
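As a rough illustration of how mask strength might be equated across participants before training, here is a minimal Python sketch. The abstract states only that mask luminance contrast was adjusted; the 2-down/1-up staircase rule, step size, and trial count below are assumptions, not the authors' procedure.

def equate_mask_contrast(run_trial, start=0.5, step=0.05, n_trials=60):
    """2-down/1-up staircase on mask luminance contrast (converges near ~71% correct).

    run_trial(mask_contrast) must run one masked orientation-discrimination trial
    and return True if the response was correct.
    """
    contrast = start
    n_correct = 0
    history = []
    for _ in range(n_trials):
        history.append(contrast)
        if run_trial(contrast):
            n_correct += 1
            if n_correct == 2:                # two correct in a row -> stronger masking
                n_correct = 0
                contrast = min(1.0, contrast + step)
        else:                                  # one error -> weaker masking
            n_correct = 0
            contrast = max(0.0, contrast - step)
    # Use the average of the last trials as this participant's mask contrast
    return sum(history[-20:]) / len(history[-20:])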

Acknowledgements: NIH R01EY019466, R01EY027841, R01EY031705

Talk 3, 8:45 am, 51.13

Alterations in Orientation-Selective Early Visual Neural Functions Are Associated With Reduced Orientation-Dependent Surround Suppression In Schizophrenia

Samuel Klein1, Collin Teich1, Eric Rawls1, Cheryl A. Olman1, Scott R. Sponheim1,2; 1University of Minnesota-Twin Cities, Departments of Psychology and Psychiatry, 2Minneapolis Veterans Affairs Medical Center

Perceptual surround suppression is a phenomenon in which the perceived contrast of a stimulus is reduced when it is accompanied by a surrounding stimulus. This effect is influenced by various features of the center-surround configuration, including relative orientation (i.e., orientation-dependent surround suppression [ODSS]). Patients with schizophrenia (SZ) often exhibit reduced ODSS, though the neurophysiological correlates of this atypical perception are not yet fully understood. Accordingly, the present study examined differences in electrophysiological (EEG) responses among SZ (N=28), healthy controls (HC; N=26), patients with bipolar disorder (BP; N=29), and first-degree relatives of both patient groups (SZR, BPR; N=25, N=19) to probe whether atypical neural functions during ODSS are related to severe mental illness more broadly and/or mark genetic liability for psychosis. Participants indicated whether a circular grating with a surrounding annulus, randomly set to one of five relative orientations, or a reference grating with no surround had greater perceived contrast, while undergoing 126-channel EEG recording. We applied current source density interpolation to more accurately capture contralateral visual cortical activation within group-averaged visual evoked potentials. The visual P1 (50-140 ms; reflects low-level visual processing) was insensitive to orientation, with SZ demonstrating markedly reduced amplitudes; BP and SZR exhibited intermediate reductions. The N1 component (120-205 ms; related to visual discrimination) was modulated by surround orientation and attenuated in SZ and BPR. A blunted P1 response was associated with greater psychotic psychopathology, whereas a blunted N1 was associated with reduced perceptual suppression. These results suggest that early neural functions reflecting basic visual processing are disrupted in severe mental illness and are associated with genetic liability for psychosis; the disruptions were greatest in SZ and were associated with dimensional aspects of psychotic psychopathology. This study highlights that SZ exhibit alterations in orientation-selective neural functions that relate to reduced perceptual suppression, yielding novel insights into the neurophysiological mechanisms underlying reduced ODSS in SZ.
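As an illustration of the component measures reported above, a minimal Python sketch of extracting mean P1 and N1 amplitudes from epoched EEG. The window boundaries come from the abstract; the array layout, sampling rate, and channel indices are assumptions, not the authors' pipeline.

import numpy as np

def mean_amplitude(epochs, sfreq, tmin, window_ms, channels):
    """Mean amplitude over trials/channels/samples within a post-stimulus window.

    epochs : array of shape (n_trials, n_channels, n_samples)
    sfreq  : sampling rate in Hz
    tmin   : epoch start time relative to stimulus onset, in seconds
    """
    times = np.arange(epochs.shape[-1]) / sfreq + tmin
    in_window = (times >= window_ms[0] / 1000.0) & (times <= window_ms[1] / 1000.0)
    return epochs[:, channels, :][..., in_window].mean()

# Windows from the abstract; channel indices are placeholders for contralateral
# posterior electrodes.
# p1 = mean_amplitude(epochs, sfreq=500, tmin=-0.2, window_ms=(50, 140), channels=[60, 61, 62])
# n1 = mean_amplitude(epochs, sfreq=500, tmin=-0.2, window_ms=(120, 205), channels=[60, 61, 62])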

Talk 4, 9:00 am, 51.14

Comparing retinotopic maps of children and adults reveals a late-stage change in how V1 samples the visual field

Marc Himmelberg1, Ekin Tünçok1, Jesse Gomez2, Kalanit Grill-Spector3, Marisa Carrasco1, Jonathan Winawer1; 1New York University, 2Princeton University, 3Stanford University

Adult visual performance is better along the horizontal than the vertical meridian, and along the lower than the upper vertical meridian of the visual field. These perceptual asymmetries are matched by an asymmetric distribution of cortical tissue in adult primary visual cortex (V1). Children, like adults, have better visual performance along the horizontal than the vertical meridian. However, unlike adults, children have similar visual performance along the upper and lower vertical meridians. Last year, we reported that children, unlike adults, have similar V1 surface area representing the upper and lower vertical meridians. Here, we expanded our sample of children (from n=17 to n=25) and included a new sample of adults (n=24), whose data were collected and analyzed with the same methods as the children's. We asked whether the cortical representation of the visual field differs between children and adults. Methods: We used fMRI population receptive field (pRF) mapping to measure retinotopic maps in children (5-12 yrs) and adults (≥22 yrs). We make these data publicly available. We calculated the amount of surface area representing different regions of the V1-V3 maps, including the polar angle meridians in V1. Results: Adults and children had ~80% more V1 surface area representing the horizontal than the vertical meridian (±25° of polar angle, 1-7° eccentricity). Adults had ~40% more V1 surface area representing the lower than the upper vertical meridian, whereas children had similar surface area representing the lower and upper vertical meridians. Comparing the groups showed that children had a narrower cortical representation of the lower vertical meridian relative to adults. Conclusion: These data provide evidence for an unexpectedly large, late-stage change in the functional organization of V1. We speculate that the increase in the V1 representation of the lower vertical meridian between childhood and adulthood drives the visual performance asymmetry along the vertical meridian seen in adults.
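To make the surface-area comparison concrete, a minimal Python sketch (not the authors' analysis code) of the wedge-ROI calculation described above, assuming per-vertex pRF polar angle (deg; here 0 = right horizontal meridian, 90 = upper vertical meridian, an assumed convention), eccentricity (deg), and vertex-wise cortical surface area (mm²) restricted to V1.

import numpy as np

def wedge_area(polar_angle, eccen, area, center_deg, wedge=25, ecc_range=(1, 7)):
    """Summed surface area of vertices within +/-wedge deg of a meridian."""
    ang_dist = np.abs((polar_angle - center_deg + 180) % 360 - 180)  # circular distance
    keep = (ang_dist <= wedge) & (eccen >= ecc_range[0]) & (eccen <= ecc_range[1])
    return area[keep].sum()

def meridian_asymmetries(polar_angle, eccen, area):
    """Percent asymmetries: horizontal vs. vertical, and lower vs. upper vertical meridian."""
    horizontal = wedge_area(polar_angle, eccen, area, 0) + wedge_area(polar_angle, eccen, area, 180)
    upper = wedge_area(polar_angle, eccen, area, 90)
    lower = wedge_area(polar_angle, eccen, area, 270)
    return {
        "HM_vs_VM_percent": 100 * (horizontal - (upper + lower)) / (upper + lower),
        "lower_vs_upper_VM_percent": 100 * (lower - upper) / upper,
    }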

Acknowledgements: R01-EY027401 to M.C. and J.W.; R01-EY022318 and R01-EY023915 to K.G.-S.; Princeton Neuroscience Institute start-up funds to J.G.

Talk 5, 9:15 am, 51.15

A neural network model of category-learning induced transfer of visual perceptual learning

Luke Rosedahl1, Thomas Serre1, Takeo Watanabe1; 1Brown University

Visual Perceptual Learning (VPL; often defined as a long-term performance increase resulting from visual experience) is highly specific to trained features. Previous work found that performing category learning before VPL causes VPL to transfer to stimuli from the same category as the trained stimulus (Category-Learning Induced Transfer of VPL, or CIT-VPL; Wang et al., 2018, Current Biology). However, the mechanism of this transfer is unknown. Based on work showing that Feature-Based Attention (FBA) during category learning can increase within-category stimulus similarity (Brouwer and Heeger, 2013), here we postulate that CIT-VPL occurs through FBA. We test this hypothesis using two category structures: Rule-Based (RB) and Information-Integration (II). In RB structures, the optimal strategy involves making binary decisions along feature dimensions, and performance is increased if FBA is targeted to specific feature values. In II structures, information from multiple feature dimensions must be combined before a decision can be made, and performance is decreased if FBA is targeted to specific feature values. The theory therefore predicts that RB structures will cause greater transfer of VPL than II structures. Subjects (n=10) were divided evenly between the RB and II conditions and underwent category learning followed by five days of VPL training. VPL for the trained stimulus and a transfer stimulus from each category was measured using pre- and post-testing. The RB condition induced transfer for the stimulus from the same category as the trained stimulus but not for the opposing-category stimulus, while the II condition induced no transfer. We then implement a neural network model that learns to apply feature-specific feedback (gain) modulation during category learning. We demonstrate that feedback connections enable the network to show the same transfer patterns as humans. Overall, this work provides computational and behavioral evidence that feature-based attention is the mechanism for category-learning induced transfer of VPL.
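A minimal Python sketch of the feature-specific gain (feedback) modulation idea, under simplifying assumptions: Gaussian-tuned 1-D feature channels and a hand-set gain profile centered on an assumed category-relevant orientation, rather than gains learned by the published network.

import numpy as np

def channel_responses(feature_value, preferred, sigma=10.0):
    """Responses of Gaussian-tuned channels (e.g., orientation) to a 1-D feature value."""
    return np.exp(-0.5 * ((feature_value - preferred) / sigma) ** 2)

def apply_feedback_gain(responses, gain):
    """Feature-based attention as multiplicative gain applied via feedback."""
    return gain * responses

preferred = np.linspace(0, 180, 19)                       # orientation channels (deg)
# In an RB structure, FBA can target the diagnostic feature values; the gain
# profile here is centered on an assumed category-relevant orientation of 45 deg.
gain = 1.0 + 0.5 * channel_responses(45.0, preferred)
trained = apply_feedback_gain(channel_responses(47.0, preferred), gain)
transfer = apply_feedback_gain(channel_responses(43.0, preferred), gain)
# The shared gain boosts overlapping channels for both stimuli, so learning driven
# by the reweighted channels generalizes within the category.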

Talk 6, 9:30 am, 51.16

Feature Representation Covaries With Practice Effects Around The Visual Field

David Tu1, Shutian Xue1, Marisa Carrasco1; 1NYU

[Background] The practice effect refers to the improvement in performance after repeatedly performing a task. The computations underlying practice effects, and whether their magnitude varies around the visual field, are unknown. Here, we used a detection task and reverse correlation to ask (1) whether task performance and the representation of orientations and spatial frequencies (SF) change with practice, (2) how changes in task performance and representation are associated, and (3) whether task performance and featural representation interact with visual field location. [Method] Observers detected a horizontal Gabor embedded in noise appearing at either the fovea or one of four locations at 6° eccentricity: left/right horizontal meridian (HM) and upper/lower vertical meridian (VM). We calculated contrast sensitivity as the reciprocal of the Gabor contrast titrated per location. At each location, we grouped the data into 4 separate bins (each containing ~600 consecutive trials) and implemented reverse correlation in each bin to estimate the weights assigned by the visual system to a range of orientations and SFs, interpreted as the perceptual sensitivity to the corresponding feature. We then characterized and compared these representations across bins and locations. [Results] Over practice: (1) Contrast sensitivity increased at all locations, confirming that performance improved. Meanwhile, perceptual sensitivity to task-relevant orientations (except at the lower VM) and SFs (except along the VM) increased; (2) contrast sensitivity increased proportionally with sensitivity to task-relevant orientations, suggesting that the increased perceptual sensitivity to task-relevant orientations underlies the observed practice effect; (3) the increases in contrast sensitivity and the changes in tuning characteristics were similar across locations. [Conclusion] These data indicate that practice improves detection performance and modulates the representation of features similarly around the visual field. The change in feature representation, especially the increased sensitivity to task-relevant orientations, may underlie the improved performance.
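To illustrate the binned reverse-correlation analysis and the contrast-sensitivity measure, a minimal Python sketch; the array layout and the yes/no response coding are assumptions, not the authors' code.

import numpy as np

def reverse_correlation(noise_energy, responses):
    """Perceptual weight per (orientation, SF): mean noise energy on 'present' minus 'absent' reports.

    noise_energy : array (n_trials, n_orientations, n_SFs) of per-trial noise energy
    responses    : array (n_trials,) of detection reports (1 = 'present', 0 = 'absent')
    """
    yes = responses.astype(bool)
    return noise_energy[yes].mean(axis=0) - noise_energy[~yes].mean(axis=0)

def binned_kernels(noise_energy, responses, n_bins=4):
    """Split consecutive trials into bins (~600 trials each here) and estimate one kernel per bin."""
    bins = np.array_split(np.arange(len(responses)), n_bins)
    return [reverse_correlation(noise_energy[i], responses[i]) for i in bins]

# Contrast sensitivity per location and bin is the reciprocal of the titrated
# Gabor contrast: sensitivity = 1.0 / titrated_contrast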

Acknowledgements: Funding: NIH R01-EY027401 to M.C.