VSS, May 13-18

Learning

Talk Session: Saturday, May 14, 2022, 8:15 – 9:45 am EDT, Talk Room 1
Moderator: Yuka Sasaki, Brown University

Talk 1, 8:15 am, 21.11

TALK 1 CANCELLED - Drawing in the mind’s eye: Developing targeted routines for assessing and enhancing visual ‘learning through drawing’ following treatment for congenital blindness.

Drawing provides a useful window into aspects of visual representation and the crosstalk between perceptual and motor systems. One challenge in studying how these skills develop lies in the temporally staggered timelines of visual versus fine motor development in typically developing infants. Babies acquire significant visual sophistication within the first year, but begin to engage in drawing only in toddlerhood. However, our work with a unique group of children born blind and left to languish without treatment for several years allows for a closer merging of these two timelines. In our scientific and humanitarian initiative, Project Prakash, we identify and provide surgical sight treatment to such children. Here, we describe our work with longitudinal tests of visual-motor integration and reading/writing readiness. We created a series of assessments to track the developmental trajectory of basic tracing, copying, and drawing skills in both the haptic and visual domains. Our tasks address two related aspects of visual development: (a) the emergence of an internal representation of the visual world, and (b) the translation of this representation onto a 2D space when drawing. I will present multiple analyses performed on this rich data set, including measures of recognizability, a semantic annotation platform for crowdsourcing labels of meaningful strokes, and a survey for quantifying the multi-dimensional developmental trajectory of drawing, including perspective, occlusion, and gestalt representation. Overall, we find that while children's drawings become more recognizable as they gain visual experience, specific representational dimensions continue to show impairments. These limitations cannot be explained by delays in fine motor skills, as no such delays are found soon after treatment. I will introduce our journey to incorporate these assessments into a pilot educational program for newly sighted children, designed to support them as they learn to integrate vision and scaffold off the abilities they formed while blind.

Acknowledgements: NEI (NIH) R01 EY020517

Talk 2, 8:30 am, 21.12

Sculpting New Visual Concepts into the Human Brain

Marius Cătălin Iordan1, Victoria J.H. Ritvo1, Kenneth A. Norman1, Nicholas B. Turk-Browne2, Jonathan D. Cohen1; 1Princeton University, 2Yale University

Humans continuously learn through experience, both implicitly (e.g., statistical learning) and explicitly (e.g., instruction). As humans learn to group distinct items into a novel category, neural patterns of activity for those items become more similar to one another and, simultaneously, more distinct from patterns of other categories. We hypothesized that we could leverage this process using neurofeedback to devise a fundamentally new way for humans to acquire conceptual knowledge. Specifically, sculpting patterns of activity in the human brain that mirror those expected to arise through learning of new visual categories may lead to enhanced perception of the sculpted categories, relative to similar control categories that were not sculpted. To test this hypothesis, we implemented a closed-loop system for neurofeedback manipulation using fMRI measurements recorded from the human brain in real time (every 2 s) and used this method to sculpt new neural categories for complex visual objects. After training, participants exhibited behavioral and neural biases for the sculpted, but not for the control, categories, and we observed a significant positive correlation between the increase in behavioral discrimination and the increase in neural separation of the categories. Neural sculpting provides causal evidence (through direct experimental intervention) that distributed patterns of activity evoked by complex visual stimuli can be formed de novo in high-level visual cortex to create categories that did not previously exist in the brain or behavior. The ability to sculpt new visual and conceptual distinctions in the human brain has broad relevance to many domains of cognition, such as perception, decision-making, memory, and motor control. It also broadens the possibility of non-invasive clinical intervention in humans with fMRI (e.g., brain-machine interfaces, neuroprosthetics, neurorehabilitation) and hints at the distant possibility of sculpting more extensive knowledge or complex concepts directly into the human brain, bypassing experience and instruction.
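The closed-loop logic described above can be illustrated with a minimal sketch: each new fMRI volume is decoded, the decoded pattern's proximity to the to-be-sculpted category is converted into a feedback value, and the display is updated within the 2 s acquisition window. This is our own schematic under simplifying assumptions, not the authors' pipeline; the helpers `acquire_volume` and `show_feedback`, and the centroid-based scoring rule, are hypothetical stand-ins.

```python
# A minimal sketch of one closed-loop neurofeedback cycle (hypothetical
# helpers; not the authors' real-time pipeline).
import time
import numpy as np

TR_SECONDS = 2.0  # one feedback update per fMRI volume, matching the 2 s cadence above

def separation_score(pattern, target_centroid, control_centroid):
    """Score how much a multivoxel pattern leans toward the sculpted category."""
    d_target = np.linalg.norm(pattern - target_centroid)
    d_control = np.linalg.norm(pattern - control_centroid)
    return d_control - d_target  # positive when closer to the target category

def neurofeedback_run(acquire_volume, show_feedback, target, control, n_trs=150):
    """acquire_volume and show_feedback are stand-ins for scanner I/O and the display."""
    for _ in range(n_trs):
        t0 = time.time()
        pattern = acquire_volume()                 # multivoxel pattern for this TR
        score = separation_score(pattern, target, control)
        show_feedback(np.tanh(score))              # bound the displayed feedback value
        time.sleep(max(0.0, TR_SECONDS - (time.time() - t0)))  # hold the 2 s cycle
```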

Acknowledgements: John Templeton Foundation, Intel Corporation, NIH Award R01 MH069456.

Talk 3, 8:45 am, 21.13

Contextual learning and inference in perceptual learning

Gabor Lengyel1,2, Máté Lengyel1,3, József Fiser1; 1Central European University, 2University of Rochester, 3University of Cambridge

Recent studies have established that perceptual learning (PL) is influenced by strong top-down effects and shows flexible generalization depending on context. However, current computational models of PL rely on feedforward architectures and fail to parsimoniously capture these context-dependence and generalization effects in more complex PL tasks. We propose a Bayesian framework that combines sensory bottom-up and experience-based top-down processes in a normative way. Our model uses contextual inference to simultaneously represent multiple learning contexts with their corresponding stimuli. It infers the extent to which each context might have contributed to a given trial, and gradually learns the transitions between contexts from experience. In turn, correctly inferring the current context supports efficient neural resource allocation for encoding the stimuli expected to occur in that context, thus maximizing discrimination performance and driving PL. In roving paradigms, where multiple reference stimuli are intermixed across trials, our model explains a broad range of previously described learning effects: (a) disrupted PL when the references are interleaved trial-by-trial, (b) intact PL when the references are separated into blocks, and (c) intact PL when the references are interleaved across trials but follow a fixed temporal order. Our model also makes new predictions about learning and generalization in PL. First, the amount of PL should depend on the extent to which the structure is learnt, predicting more PL in roving paradigms that use more predictable temporal structures between reference stimuli. Second, rather than depending solely on the low-level perceptual similarities of stimuli, generalization in PL should also depend on the extent to which higher-order structural knowledge about contexts (e.g., their transition probabilities) generalizes across different tasks. These results demonstrate that higher-level structure learning is an integral part of any perceptual learning process and that a joint treatment of high- and low-level information about stimuli is required for capturing learning in vision.
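As a hedged illustration of the contextual-inference step, the sketch below propagates a posterior over learning contexts through a learned transition matrix and reweights it by each context's likelihood of the current stimulus. This is a generic one-step Bayesian filtering update written under our own simplifying assumptions, not the authors' model.

```python
# An illustrative Bayesian filtering update for contextual inference (our own
# simplified formulation, not the authors' implementation).
import numpy as np

def update_context_posterior(prior, transitions, likelihoods):
    """One-trial update of the posterior over learning contexts.

    prior:       (K,) posterior over contexts after the previous trial
    transitions: (K, K) learned matrix, transitions[i, j] = P(context j | context i)
    likelihoods: (K,) likelihood of the current stimulus under each context
    """
    predicted = prior @ transitions       # propagate beliefs through the transition model
    posterior = predicted * likelihoods   # reweight by how well each context explains the stimulus
    return posterior / posterior.sum()    # normalize to a probability distribution

# Example: two reference contexts in near-strict alternation (a fixed temporal
# order) become predictable, so the upcoming context can be inferred before the
# stimulus arrives and encoding resources allocated accordingly.
alternating = np.array([[0.05, 0.95],
                        [0.95, 0.05]])
belief = np.array([0.5, 0.5])
belief = update_context_posterior(belief, alternating, np.array([0.9, 0.1]))
```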

Acknowledgements: This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 726090 to M.L.) and the Wellcome Trust (Investigator Awards 212262/Z/18/Z to M.L.)

Talk 4, 9:00 am, 21.14

Sustained attention fluctuations impact visual statistical learning

Ziwei Zhang1, Monica Rosenberg1; 1The University of Chicago

Attention fluctuates between optimal and suboptimal states. How do these changes affect what we learn from our environments? In particular, do they affect the degree to which we learn visual regularities? This question seems nearly impossible to answer: although we learn regularities across repeated pattern exposure, we may be attentive at one exposure but inattentive at the next. To overcome this challenge, we designed a task in which visual regularities are presented contingent on attentional state. In an online study (N = 150), participants performed a continuous performance task with shape stimuli (1200 trials, 800 ms/trial). They were instructed to press a button in response to shapes from a frequent category (90% of trials) but not an infrequent category (10%). We measured correct-trial response times (RTs) in real time and inserted distinct shape triplets into the trial stream when RTs indicated that a participant was attentive (>1 SD above the participant's mean RT) or inattentive (>1 SD below the participant's mean RT) (deBettencourt et al., 2018; 2019). In other words, participants saw one sequence of three shapes when they were attentive (M = 17 triplet repetitions) and another when they were inattentive (M = 17.8 repetitions). Participants next performed a task in which they responded to target shapes drawn from the regular triplets. Demonstrating that participants learned the regularities, we observed a main effect of intra-triplet position in the target detection task, such that shapes drawn from the third position of the regular triplets were detected faster than shapes from the earlier positions. Furthermore, we observed an interaction between attentional state and intra-triplet position, such that this RT facilitation was greater for the triplet encountered in the attentive vs. the inattentive state. Together, these results demonstrate statistical learning for regularities that are not explicitly task-relevant and show, for the first time, consequences of sustained attention fluctuations for visual statistical learning.
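The real-time triggering rule described above lends itself to a compact sketch: a window of recent correct-trial RTs is compared against the participant's mean and standard deviation, and a triplet is inserted only when the window crosses the 1 SD threshold. The window size and function names below are our own illustrative assumptions, not the authors' exact parameters.

```python
# A sketch of the attention-contingent triggering rule (window size and names
# are illustrative choices, not the authors' exact parameters).
import numpy as np

def classify_state(recent_rts, mean_rt, sd_rt):
    """Label the current attentional state from a short window of correct-trial RTs."""
    window_mean = np.mean(recent_rts)
    if window_mean > mean_rt + sd_rt:
        return "attentive"      # slower-than-usual responding
    if window_mean < mean_rt - sd_rt:
        return "inattentive"    # faster, more automatic responding
    return "neutral"            # no triplet inserted on this trial

def maybe_insert_triplet(state, attentive_triplet, inattentive_triplet):
    """Return the shape triplet to splice into the trial stream, if any."""
    if state == "attentive":
        return attentive_triplet      # e.g., the shapes (A, B, C)
    if state == "inattentive":
        return inattentive_triplet    # e.g., the shapes (X, Y, Z)
    return None
```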

Talk 5, 9:15 am, 21.15

The stabilization of visual perceptual learning during REM sleep involves reward-processing circuits

Takashi Yamada1, Tyler Barnes-Diana1, Shazain Khan1, Luke Rosedahl1, Sebastian Frank1, Antoinette Burger1, Takeo Watanabe1, Yuka Sasaki1; 1Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI

It is well known that sleep facilitates visual perceptual learning (VPL), but which circuits are involved in this facilitation is unclear. Early visual areas are implicated: for example, non-rapid eye movement (non-REM) sleep plays a role in performance enhancement by increasing plasticity in early visual areas, while REM sleep stabilizes learning by decreasing plasticity in early visual areas. However, we recently found that reward provided during training prolongs subsequent REM sleep and strengthens VPL after sleep, suggesting that reward-processing circuits also play a role in some aspects of the facilitation. Here, we investigated how reward-processing circuits are involved in VPL facilitation during sleep. Subjects were trained on different texture discrimination tasks (TDTs) before and after sleep. The TDTs were designed to interfere with each other unless pre-sleep learning was stabilized during sleep. Using magnetic resonance spectroscopy, we measured the balance between excitatory and inhibitory neurotransmitter concentrations (E/I balance) in the ventromedial prefrontal cortex (vmPFC), part of the reward-processing circuits, during non-REM sleep and REM sleep, with wakefulness as a baseline. This approach was based on our previous finding that the E/I balance is associated with the degree of brain plasticity. Sleep stages were determined from polysomnography recorded simultaneously with the E/I measurements. We found that performance on both the pre-sleep and post-sleep TDTs improved, indicating that pre-sleep learning was stabilized during sleep. Additionally, the E/I balance in the vmPFC decreased during REM sleep relative to baseline, and the amount of decrease was correlated with the degree of stabilization of pre-sleep learning. No such changes occurred during non-REM sleep. These results suggest that the stabilization of VPL during REM sleep involves the vmPFC, as part of reward-processing circuits, in addition to early visual areas. Future research will investigate how the vmPFC interacts with early visual areas during sleep to facilitate VPL.

Acknowledgements: NIH (R01EY031705, R01EY019466, R01EY027841, P20GM13974), KAKENHI (JP20KK0268).

Talk 6, 9:30 am, 21.16

Subjective Judgments of Learning Reveal Conscious Access to Stimulus Memorability

Joseph M. Saito1, Matthew Kolisnyk2, Keisuke Fukuda1,3; 1University of Toronto, 2Western University, 3University of Toronto Mississauga

Visual stimuli are not created equal; some are consistently remembered across observers while others are consistently forgotten. This stimulus memorability phenomenon highlights the existence of intrinsic stimulus properties that can outweigh individual differences in visual cognition and predict subsequent memory performance. While several contributing factors have been identified, much about stimulus memorability remains unknown, including whether observers are aware of these intrinsic stimulus properties (e.g., Bainbridge et al., 2013; Isola et al., 2013). Here, we assessed participants' conscious access to stimulus memorability as they encoded 150 real-world objects (Experiment 1) or human faces (Experiment 2) into visual long-term memory. For each stimulus, participants provided a subjective judgment of learning (JOL) indicating how likely they were to remember that stimulus during later recognition testing. If participants have conscious access to stimulus memorability during encoding, we should expect that (1) JOLs made for a given stimulus would be consistent across participants and (2) group consistency in JOLs would predict group consistency in memory performance (i.e., stimulus memorability). For a given studied image, we found group consistency in JOLs (mean split-half correlation = 0.811 in Experiment 1 and 0.458 in Experiment 2) that predicted stimulus memorability (r = 0.682 in Experiment 1 and 0.596 in Experiment 2), suggesting that participants' subjective JOLs were influenced by intrinsic stimulus properties that predicted parallel patterns in objective encoding success. Interestingly, however, participants' access to stimulus memorability was not comprehensive. Residual memory performance for a given stimulus was still consistent across participants after regressing out JOLs (mean split-half correlation = 0.487 in Experiment 1 and 0.352 in Experiment 2), revealing group consistency in unanticipated remembering and forgetting, when JOLs underestimated and overestimated stimulus memorability, respectively. Together, these findings demonstrate that some, but not all, aspects of stimulus memorability are consciously accessible to observers while encoding visual information into visual long-term memory.
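The two analyses described above (split-half consistency across observers, and consistency of what remains after regressing JOLs out of memory performance) follow a standard recipe that can be sketched briefly. The code below is our own illustration of that logic under assumed data shapes, not the authors' analysis scripts.

```python
# A sketch of split-half consistency and the residual (JOL-regressed) analysis
# (our own illustration of the logic, not the authors' code).
import numpy as np

def split_half_consistency(item_by_subject, n_iter=1000, seed=0):
    """Mean correlation of per-item averages across random halves of subjects.

    item_by_subject: (n_items, n_subjects) matrix of JOLs or memory scores.
    """
    rng = np.random.default_rng(seed)
    n_subj = item_by_subject.shape[1]
    rs = []
    for _ in range(n_iter):
        perm = rng.permutation(n_subj)
        half_a = item_by_subject[:, perm[: n_subj // 2]].mean(axis=1)
        half_b = item_by_subject[:, perm[n_subj // 2 :]].mean(axis=1)
        rs.append(np.corrcoef(half_a, half_b)[0, 1])
    return float(np.mean(rs))

def residual_memorability(memory_scores, jols):
    """Per-item memory performance with the linear JOL contribution regressed out."""
    slope, intercept = np.polyfit(jols, memory_scores, deg=1)
    return memory_scores - (slope * jols + intercept)
```

Residual consistency would then be estimated by applying `split_half_consistency` to the residuals, mirroring the reported 0.487 and 0.352 values.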

Acknowledgements: This work was supported by an NSERC Discovery Grant awarded to KF (RGPIN-2017-06866)