Plasticity and Learning 1
Talk Session: Sunday, May 21, 2023, 10:45 am – 12:30 pm, Talk Room 1
Moderator: Kristina Visscher, UAB, University of Alabama, Birmingham
Talk 1, 10:45 am, 32.11
The effect of consolidation and explicitness on learning and transferring higher-level structural knowledge in vision
Dominik Garber1,2, Jozsef Fiser1,2; 1Department of Cognitive Science, Central European University, 2Center for Cognitive Computation, Central European University
Studies of visual statistical learning focus on how specific chunks, defined by the co-occurrence of observable elements, are learned, but they typically neglect the role that knowledge of the higher-level structure of these chunks plays in learning. We investigated this role of structural knowledge by examining how prior exposure to only horizontal or only vertical shape-pairs in scenes affected the subsequent implicit learning of both vertical and horizontal pairs composed of completely novel shapes. Across six experiments, we found that participants with more explicit knowledge of individual pairs were immediately able to generalize structural knowledge, extracting new pairs with matching orientation better, and they retained this ability after both awake and sleep consolidation. In contrast, participants with weaker, more implicit knowledge and without consolidation showed a structural novelty effect, learning new non-matching pairs better. After sleep consolidation, however, this pattern reversed, and they showed generalization similar to the “explicit” participants. This reversal did not occur after awake consolidation of the same duration, as participants showed strong proactive interference and learned no new pairs. We validated our findings with multiple measures of explicitness, both at the participant level (free report) and at the item level (confidence judgments), and by inducing explicitness via instructions. Furthermore, a matched-sample analysis revealed that the difference between “explicit” and “implicit” participants was not predicted by the strength of learning in the first exposure phase, but only by the quality of the structural knowledge. Our results show that knowledge of the higher-level structure underlying visual chunks is extracted automatically even in an unsupervised setup and has differential effects depending on the complexity of the extracted knowledge. Moreover, sleep consolidation facilitates the transformation of structural knowledge in memory.
Overall, these results highlight how momentary learning interacts with already acquired structural knowledge, leading to complex hierarchical knowledge of the visual environment.
Acknowledgements: This work was supported by the Office for Naval Research Grant ONRG-NICOP-N62909-19-1-2029
Talk 2, 11:00 am, 32.12
Neuromodulatory functions (reward and arousal) induce separate effects on visual perceptual learning (VPL) of a salient but goal-irrelevant visual feature
Zhiyan Wang1, Mark Greenlee1; 1University of Regensburg, Germany
Neuromodulatory signals such as reward and arousal modulate visual perceptual learning (VPL). Two hypotheses could explain how they do so. A goal-dominance model predicts that reward or arousal enhances visual features relevant to the current goal and inhibits irrelevant features. Conversely, a state-dominance model predicts that reward or arousal enhances visual features irrespective of their relevance to the current goal. To test which model is consistent with the effects of reward and arousal on VPL, we trained three participant groups (Reward, Arousal, and Control) over the course of 5 daily sessions on a VPL task in which a Gabor patch was presented in the upper left or lower right visual quadrant. The Gabor had one of two orientations (2.5 degrees tilted left or right from vertical) and one of several contrast levels. Participants were instructed to categorize the Gabor based on its contrast level while maintaining central fixation (eye-tracking was conducted). The orientation of the Gabor was task-irrelevant. Unknown to the participants, monetary reward was paired 80% of the time with one of the orientations in the Reward group, while an arousing beep was similarly paired in the Arousal group. The Reward group was told that reward was given for maintaining good fixation. No neuromodulatory signal was provided in the Control group. Before the first and after the final training session, participants performed an orientation discrimination task at different contrast levels in both quadrants. Performance decreased for the paired orientation in the Reward group, whereas performance improved for both the paired and unpaired orientations in the Arousal group. There were no performance changes in the Control group. These results indicate that the effects of reward on VPL are consistent with a goal-dominance model, whereas the effects of arousal on VPL are consistent with a state-dominance model.
Acknowledgements: Author Z.W. is funded by the Alexander von Humboldt Foundation
Talk 3, 11:15 am, 32.13
Plasticity in early visual cortex is modulated by feature salience in task-irrelevant visual perceptual learning
Markus Becker1, Jennifer Lubich1, Sebastian Frank1; 1University of Regensburg
Task-irrelevant features are often present while a relevant task is being performed. Previous results showed that visual perceptual learning (VPL) occurs for such task-irrelevant features when they are perceptually weak (near detection threshold). The neuronal changes associated with such task-irrelevant VPL have remained largely unknown and are the focus of the current research. We employed a design in which participants (young adults, n=12) performed a rapid-serial-visual-presentation (RSVP) task at screen center while simultaneously being exposed to coherent motion in one direction as a task-irrelevant feature in the visual periphery. Participants performed the task over the course of twelve daily behavioral exposure sessions. In addition, they performed the task inside the scanner while brain activation was measured with functional MRI before the first, after the sixth, and after the final behavioral exposure session. Participants were randomly assigned to one of two training groups, which differed only in the salience of the task-irrelevant feature (either near the threshold for coherent motion detection or highly salient). As a result of the repeated exposure, participants in the threshold exposure group improved discrimination sensitivity for the exposed coherent motion direction, indicative of task-irrelevant VPL. These changes in sensitivity were associated with increased activation in early visual cortical areas representing the task-irrelevant coherent motion in the visual periphery. Trends toward these results were already present after six behavioral exposure sessions, but they became more pronounced after twelve, indicating that task-irrelevant VPL develops slowly. A retest session three months after the final behavioral exposure session showed that task-irrelevant VPL and the associated activation changes in early visual areas were partially long-lasting. No changes in sensitivity or activation in early visual areas were found with suprathreshold exposure, indicating that task-irrelevant VPL and the associated activation changes occur primarily when the task-irrelevant feature is presented near detection threshold.
Acknowledgements: Funding: Deutsche Forschungsgemeinschaft (DFG): Emmy Noether Grant (Project Number 491290285)
Talk 4, 11:30 am, 32.14
Pupillometric signature of implicit learning
Paola Binda1, Chiara Terzo2, Marco Turi3,4, David C. Burr2; 1University of Pisa, 2University of Florence, 3University of Salento, 4Fondazione Stella Maris Mediterraneo
Far from being a mere reflection of ambient light, pupil diameter has been shown to track the contents of visual perception, the direction of attention, and the occurrence of unexpected sensory events. Here we show that changes in pupil size provide a reliable index of implicit learning, reflecting statistical structures even when they are neither consciously perceived nor within the focus of attention. We used a frequency-tagging temporal segmentation paradigm (Schwiedrzik and Sudmann, J. Neurosci. 2020), in which sequences of visual images (refreshed at 2 Hz) are displayed either in random order or in pairs, with odd-trial images reliably predicting even-trial images (pairs cycling at 1 Hz). Stimuli were either two-digit numbers or arrays of lines; in the paired-images condition for arrays, the only information predicting even from odd trials was the numerosity of the array, as the arrangement and orientation of the lines were randomly resampled on every trial. For both digits and arrays of lines, pupil diameter in N=8 observers oscillated at 1 Hz in the paired-images condition, tracking the statistical structure of the stimulus sequence. For the arrays, the oscillation emerged only when numerosity varied in steps larger than the discrimination threshold (suggesting a potential technique for measuring numerosity acuity). Participants were never asked to consciously discriminate the paired sequences and were unaware of the difference between the paired and random conditions. The 1-Hz oscillation remained strong even when attention was directed to an irrelevant feature (the orientation of the lines, which was never predictive from odd to even trials). In summary, we extracted a pupillometric signature of neural prediction for paired images, providing a novel, objective, and seamless way to quantify the automatic and implicit structuring of the sensory flow into meaningful units.
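[Editorial illustration] The logic of frequency tagging — an oscillation at the pair rate (1 Hz) but not at the image rate reveals that the sequence has been segmented into pairs — can be sketched with a single-bin Fourier amplitude estimate. The sketch below is not the authors' analysis code; the sampling rate, trace duration, and noise level are assumptions for illustration only.

```python
import math
import random

def tagged_amplitude(trace, fs, freq):
    """Fourier amplitude of `trace` (sampled at `fs` Hz) at `freq` Hz,
    computed as a single-bin discrete Fourier sum over the whole trace."""
    n = len(trace)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(trace))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(trace))
    return 2 * math.hypot(re, im) / n

# Synthetic 60-s "pupil trace" at an assumed 100 Hz sampling rate:
# a small 1-Hz oscillation (mimicking the paired-images condition)
# buried in noise, on top of a constant baseline diameter.
random.seed(0)
fs = 100.0
trace = [3.0 + 0.2 * math.sin(2 * math.pi * 1.0 * i / fs) + random.gauss(0, 0.1)
         for i in range(int(60 * fs))]

amp_1hz = tagged_amplitude(trace, fs, 1.0)  # pair rate (the tag)
amp_2hz = tagged_amplitude(trace, fs, 2.0)  # image rate, for comparison
print(amp_1hz, amp_2hz)
```

In this toy trace the amplitude at the 1-Hz pair rate stands well above the amplitude at the 2-Hz image rate, which is the signature of pair-based segmentation the abstract describes.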
Acknowledgements: European Research Council (ERC): European Union’s Horizon 2020 research and innovation program, grant n. 801715 (PUPILTRAITS) and n. 832813 (GenPercept). Italian Ministry of University and Research: PRIN2017 program (grant n. 2017HMH8FA and n. 2017SBCPZY), FARE-2 (grant SMILY) and PNRR THE 8.9.1
Talk 5, 11:45 am, 32.15
TALK CANCELLED: How does attentional capture with statistical learning accelerate perception?
Abbey Nydam1, Jay Pratt1; 1University of Toronto
As we experience the visual world, we continually engage in incidental learning about statistical regularities. One outcome of this learning is that objects appearing with statistical regularity can automatically capture attention. Yet in other work, reliable objects have led to repetition suppression and reduced attentional capture. The question we ask is: does attentional capture by regularity result in the prioritization of such objects in visual processing? To answer this question, we used a modification of the temporal order judgement task. In this task, two objects briefly appeared on either side of fixation, one slightly before the other (with intervals varying between 10 ms and 200 ms), then disappeared, and their locations were masked. Observers indicated which object (left or right of fixation) appeared first. Unbeknownst to participants, various statistical regularities were embedded in the task (e.g., pairs or triads of objects occurring at one location). Across a series of experiments, we compared the point of subjective simultaneity for objects that were structured or random, and for objects that were strongly or weakly predicted. If learned statistical regularities prioritize visual information, then prior-entry effects should be found for objects that are structured or strongly predicted compared with objects that are unstructured or weakly predicted. This was indeed the case, indicating that learned statistical regularities in the visual field not only capture attention but also accelerate perception. These results help us understand how latent sources of selection, such as those based on implicit statistical predictions, can guide perception and attention.
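[Editorial illustration] The point of subjective simultaneity (PSS) in a temporal order judgement task is conventionally estimated by fitting a psychometric function to the proportion of "right first" responses across onset asynchronies and reading off the 50% point; a shift of the PSS away from zero is the prior-entry effect. The sketch below is not the authors' analysis; the data values and the simple grid-search fit are assumptions chosen only to show the mechanics.

```python
import math

# Hypothetical TOJ data: onset asynchrony in ms (positive = right-side
# object physically first) and the proportion of "right first" responses.
soas = [-200, -100, -50, -10, 10, 50, 100, 200]
p_right_first = [0.05, 0.15, 0.30, 0.45, 0.60, 0.80, 0.92, 0.97]

def logistic(x, pss, slope):
    """Logistic psychometric function centered on the PSS."""
    return 1.0 / (1.0 + math.exp(-(x - pss) / slope))

def fit_pss(xs, ps):
    """Grid-search least-squares fit; returns the PSS, i.e., the SOA at
    which 'right first' would be reported 50% of the time."""
    best_err, best_pss = float("inf"), 0.0
    for pss_tenths in range(-500, 501):       # PSS from -50 to +50 ms, 0.1-ms steps
        pss = pss_tenths / 10.0
        for slope in range(10, 201, 5):       # slope parameter 10..200 ms
            err = sum((logistic(x, pss, float(slope)) - p) ** 2
                      for x, p in zip(xs, ps))
            if err < best_err:
                best_err, best_pss = err, pss
    return best_pss

pss = fit_pss(soas, p_right_first)
print(pss)  # negative PSS here: right-side object perceived earlier (prior entry)
```

With these (made-up) proportions the fitted PSS comes out negative: the right-side object is judged "first" even when it appears slightly later, the pattern the abstract reports for structured or strongly predicted objects.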
Talk 6, 12:00 pm, 32.16
Inner retinal integrity correlates with preservation of fine direction discrimination in the blind-field early after V1 damage
Bryan Redmond1,2, Matthew Cavanaugh2, Berkeley Fahrenthold2, Jingyi Yang1,2, Krystel Huxlin2; 1University of Rochester School of Medicine & Dentistry, 2Flaum Eye Institute
Trans-synaptic retrograde degeneration in the early visual system follows damage to primary visual cortex (V1). Inner retinal shrinkage measured with optical coherence tomography (OCT) has been reported in V1-damaged patients as early as 3 months post-stroke, increasing out to 2 years (Jindahra et al., 2012). We recently documented residual, conscious discrimination abilities in the blind-field of subacute (<6 months) V1-stroke patients. Here, we asked whether maintenance of inner retinal structures underlies this preservation. Using a high-contrast, random dot stimulus, 22 cortically blind (CB) participants (mean+/-SD: 3.3+/-1.2 months post-stroke) were assessed for visual discrimination ability (% correct >72.5% and a measurable direction difference threshold) in their perimetrically defined blind field (<10 dB of luminance sensitivity on Humphrey Automated Perimetry). Thicknesses of the ganglion cell layer (GCL) and inner plexiform layer (IPL) of the retina were measured using a Spectralis HRA+OCT and contrasted between affected and unaffected segments of the para-foveal region out to 12˚ eccentricity. From these thicknesses, we computed a laterality index (LI:GCL/IPL) to indicate relative shrinkage of the affected regions in each patient. Blind-field discrimination abilities were found in seven of the 22 participants. LI:GCL/IPL averaged 0.008+/-0.017 across the entire cohort, -0.006+/-0.016 in preserved CB patients, and 0.014+/-0.014 in non-preserved patients; the difference between the two groups was significant (unpaired t-test, p = 0.009). LI:GCL/IPL increased with time since stroke (r-squared = 0.3269, p = 0.0054) but was uncorrelated with perimetrically defined blind-field size (r-squared = 0.1442, p = 0.081). Thus, early V1-stroke patients with preserved visual abilities exhibit less thinning of inner retinal layers than those without.
Given the sensitivity of OCT imaging shown in the present experiment, we are now ideally placed to explore both the prognostic implications of inner retinal preservation for vision restoration, and potentially neuroprotective effects of behavioral interventions in this patient population.
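[Editorial illustration] The abstract does not state the formula for its laterality index. A common convention for such asymmetry indices — assumed here, not taken from the authors — is a normalized difference between unaffected and affected segment thicknesses, which yields values near zero for symmetric retinas and positive values when the affected segment is thinner, consistent with the signs reported above.

```python
def laterality_index(unaffected_um, affected_um):
    """Normalized thickness asymmetry between retinal segments.

    Assumed convention (the abstract does not give the formula):
    positive values mean the affected segment is thinner than the
    unaffected one. Inputs are GCL+IPL thicknesses in micrometers
    from OCT segmentation.
    """
    return (unaffected_um - affected_um) / (unaffected_um + affected_um)

# Hypothetical GCL+IPL thicknesses (micrometers) for one patient:
print(laterality_index(100.0, 97.0))  # affected segment thinner -> positive LI
```

Under this convention, the small positive cohort mean reported above would correspond to affected segments being, on average, slightly thinner than their unaffected counterparts.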
Talk 7, 12:15 pm, 32.17
Competitive neurocognitive networks underlying visual statistical learning
Dezso Nemeth1, Teodora Vékony1; 1INSERM, CRNL, Lyon, France
Human perceptual learning depends on multiple cognitive systems related to dissociable brain structures. These systems interact not only cooperatively but sometimes competitively in optimizing performance. Previous studies showed that manipulations reducing the engagement of frontal lobe-mediated explicit, attentional processes can lead to improved performance in visual statistical learning. Here I present three studies in which we investigated the competitive relationship between statistical learning and frontal lobe-mediated executive functions. The first and second studies focus on functional brain connectivity during visual statistical learning, measured by high-density EEG and fMRI. The results showed that weaker long-range connectivity from the dorsolateral prefrontal cortex (DLPFC) was associated with better visual statistical learning. The third study showed that inhibitory repetitive transcranial magnetic stimulation over the DLPFC enhanced visual statistical learning and predictive processing. Our results shed light not only on the competitive nature of brain systems in cognitive processes but could also have important implications for developing new methods to boost learning and memory.