Development: Infancy
Talk Session: Saturday, May 20, 2023, 2:30 – 4:15 pm, Talk Room 1
Moderator: Lisa Oakes, UC Davis
Talk 1, 2:30 pm, 24.11
Visual experience drives the development of novel and reliable visual representations from endogenously structured networks
Sigrid Trägenap1, David E. Whitney2, David Fitzpatrick2, Matthias Kaschube1,3; 1Frankfurt Institute for Advanced Studies, 2Department of Functional Architecture and Development of Cerebral Cortex, Max Planck Florida Institute for Neuroscience, Jupiter, Florida, USA, 3Goethe University Frankfurt, Department of Computer Science, Germany
Cortical circuits embody remarkably reliable neural representations of sensory stimuli that are critical for perception and action. The fundamental structure of these network representations is thought to arise early in development, prior to the onset of sensory experience. However, how these endogenously generated networks respond to the onset of sensory experience, and the extent to which they reorganize with experience, remain unclear. Here we examine this ‘nature-nurture transform’ using chronic in vivo calcium imaging to probe the developmental emergence of the representation of orientation in visual cortex of the ferret, a species with a well-defined modular network of orientation-selective responses. At eye opening, visual stimulation of endogenous networks evokes robust modular patterns of cortical activity. However, these initial evoked activity patterns are strikingly different from those in experienced animals, exhibiting a high degree of variability both within and across trials that severely limits stimulus discriminability. In addition, visual experience is accompanied by a number of changes in the structure of the early evoked modular patterns, including a reduction in dimensionality and a shift in the leading pattern dimensions, indicating significant network reorganization. Moreover, these early evoked patterns and their changes are only loosely constrained by the endogenous network structure of spontaneous activity, and spontaneous activity itself reorganizes considerably to align with the novel evoked patterns. Based on a computational network model whose predictions closely match the biology, we propose that the initial evoked activity patterns reflect novel visual input that is only poorly aligned with the endogenous networks, and that highly reliable visual representations emerge from a realignment of feedforward and recurrent networks that is optimal for amplifying these novel patterns of visually driven activity.
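The abstract does not give the model equations; the following is a minimal, hypothetical sketch of the core idea (that trial-to-trial reliability depends on how well feedforward input aligns with the patterns a recurrent network amplifies), using a linear rate network with rank-1 recurrence. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # number of model units (illustrative)

# Recurrent connectivity that strongly amplifies one endogenous pattern.
pattern = rng.standard_normal(n)
pattern /= np.linalg.norm(pattern)
W = 0.8 * np.outer(pattern, pattern)  # rank-1 recurrence; gain < 1 keeps it stable

def evoked_responses(h, noise_sd=0.3, trials=200):
    # Steady state of a linear rate network: r = (I - W)^(-1) (h + noise).
    A = np.linalg.inv(np.eye(n) - W)
    H = h[None, :] + noise_sd * rng.standard_normal((trials, n))
    return H @ A.T

def reliability(R):
    # Trial-to-trial reliability: mean pairwise correlation of response patterns.
    C = np.corrcoef(R)
    return C[np.triu_indices_from(C, k=1)].mean()

h_novel = rng.standard_normal(n)  # input poorly aligned with the network ("eye opening")
h_novel /= np.linalg.norm(h_novel)
h_aligned = pattern               # input realigned with the amplified mode ("experienced")

print("misaligned input:", reliability(evoked_responses(h_novel)))
print("aligned input:   ", reliability(evoked_responses(h_aligned)))
```

In this toy version, aligned input is selectively amplified relative to noise, so the same noise level yields far more reliable evoked patterns.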
Talk 2, 2:45 pm, 24.12
Slow change: An analysis of infant egocentric visual experience
Saber Sheybani1, Zoran Tiganj1, Justin N. Wood1, Linda B. Smith1; 1Indiana University Bloomington
Visual perception emerges from a cortical hierarchy that extracts increasingly complex features, from edges to categories. Considerable computational and neural research suggests that the visual system is biased to extract slowly changing features in the input. However, little is known about the visual statistics of infant experience during early stages of receptive field formation. Here we provide evidence on the rate of change of low-level visual features and semantic features in infants’ everyday experience. Infants (2 to 12 months of age, n = 27) wore head cameras at home, collecting 120 hours of egocentric video. We measured the rate of change at three levels of stimulus description: 1) raw pixels, 2) edge features (GIST, a measure of the edges at various orientations and scales), and 3) semantic features (derived from a trained CNN object classifier). For all measures, we calculated the Euclidean distance between the vector descriptions of image pairs at a series of time lags. The distribution of distances was unimodal for all lags, with the mode increasing with lag. We then fit an exponential curve to the mode as a function of lag and report the time constants of the fits as a measure of the time scale of change. At all three levels of stimulus description, infant visual experiences changed slowly (pixels = 1.3 s; GIST = 1.4 s; semantic = 1.9 s). The rate of change for the youngest infants (2 to 4 months) was particularly slow (pixels = 1.8 s; GIST = 2.4 s; semantic = 3.2 s), especially at the edge and semantic levels. These results provide new evidence on the temporal properties of early experience and inform current theories of both unsupervised learning and receptive field formation. The findings also suggest that human altricial motor development may play a functional role in constraining early visual experiences.
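As a rough sketch of this analysis pipeline in Python (the saturating-exponential form, frame rate, and binning below are assumptions; the abstract does not specify them):

```python
import numpy as np
from scipy.optimize import curve_fit

def change_time_constant(features, fps=30, max_lag_s=10.0, n_bins=100):
    # Pairwise Euclidean distances between frame descriptions at each lag,
    # mode of the (unimodal) distance distribution per lag, then a saturating
    # exponential fit whose time constant summarizes the time scale of change.
    # `features` is (n_frames, n_dims): pixels, GIST, or CNN embeddings.
    lags = np.arange(1, int(max_lag_s * fps))
    modes = []
    for lag in lags:
        d = np.linalg.norm(features[lag:] - features[:-lag], axis=1)
        counts, edges = np.histogram(d, bins=n_bins)
        i = np.argmax(counts)
        modes.append(0.5 * (edges[i] + edges[i + 1]))  # bin center of the mode
    t = lags / fps  # lag in seconds
    sat_exp = lambda t, a, tau: a * (1.0 - np.exp(-t / tau))
    (a, tau), _ = curve_fit(sat_exp, t, np.array(modes), p0=(max(modes), 1.0))
    return tau  # larger tau = slower change
```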
Acknowledgements: This research was supported by NIH grant 1R01EY032897 to Linda Smith and T. Rowan Candy. The authors also acknowledge the Indiana University Pervasive Technology Institute for providing supercomputing and storage resources, with partial support from Lilly Endowment, Inc.
Talk 3, 3:00 pm, 24.13
Influences of the home visual environment on infant attention: insights from remote webcam eye tracking
Denise M Werchan1, Moriah E Thomason1, Natalie H Brito2; 1NYU Langone Health, 2New York University
Rationale: Infant visual attention is a foundational information-gathering mechanism that shapes learning and higher-order cognition (Colombo, 2001). The majority of knowledge on visual attention development has been drawn from small samples of infants assessed in artificial laboratory settings. These contexts differ drastically from the visually rich environments that infants experience in their day-to-day lives. Here we examine how naturalistic variations in the complexity of the home visual environment impact infant attention. We examine this question by measuring infant looking behavior in the home using OWLET, a novel Online Webcam-Linked Eye Tracker (Werchan, Thomason, & Brito, 2022). Method: 140 six-month-old infants were enrolled in an ongoing remote longitudinal study. Images of families’ homes were recorded and rated for visual complexity on a 7-point Likert scale. OWLET was used to measure infant looking behavior during a standard attention task adapted for remote testing (Gustafsson et al., 2021). Results: We first evaluated the validity of OWLET for assessing infant attention in the home. Our results indicated robust correlations between OWLET measures and parent-report measures of infant attention, rs > .22, ps < .02. We then examined whether the complexity of the home visual environment predicted individual differences in attention, using a composite measure of attention derived through confirmatory factor analysis with infants’ looking durations, gaze patterns, and regulatory capacity as indicators. Controlling for family socioeconomic status, results indicated that greater home visual complexity predicted better attention, β = .20, p = .02. Conclusion: This work demonstrates the validity of webcam eye tracking for assessing infant attention in the home. Importantly, these findings also suggest that variations in the visual complexity of the home predict infant attention, above and beyond effects of socioeconomic factors. These results provide insight into how early sociocultural contexts shape attention development, with potential implications for subsequent learning and cognition.
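A minimal sketch of the final regression step, under assumed column and file names (a single `attention` column stands in for the CFA-derived composite; z-scoring first makes the complexity coefficient a standardized beta):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per infant. `attention` stands in for the
# CFA-derived composite; `complexity` is the 7-point home rating; `ses`
# is socioeconomic status.
df = pd.read_csv("owlet_home_visits.csv")  # hypothetical file name

# Standardize so coefficients are interpretable as betas.
for col in ["attention", "complexity", "ses"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Does home visual complexity predict attention over and above SES?
fit = smf.ols("attention ~ complexity + ses", data=df).fit()
print(fit.summary())
```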
Talk 4, 3:15 pm, 24.14
The statistics of infants’ natural visual experience are shaped by motor development
Zachary Petroff1, T. Rowan Candy1, Kathryn Bonnen1; 1Indiana University
During the first two years of life, human infants develop the ability to coordinate their head and eye movements and to move through the world, first by rolling, then crawling, and eventually walking. These changing motor abilities have an impact on their visual experience. We examined how the statistics of visual experience change over the course of motor and visual development. We analyzed video data gathered from cameras mounted on the heads of infants (N=89, age=1 to 26 months) in their home environments. Image statistics were calculated for each video frame, including RMS contrast, orientation, spatial frequency, and contrast energy. We also measured the frequency of large head movements. Using deep learning and epipolar geometry, we reconstructed the angular camera movement between video frames and treated these estimates as an approximation of head movements. Most notably, between 1 and 11 months, there was an increase in the occurrence of large head movements, followed by a slow decrease through 26 months. Global RMS contrast was uniform across age groups, but the concentration of RMS contrast toward the center of the video frames was more pronounced in older infants, suggesting that older infants select high-contrast information as they become more capable of manipulating their bodies and their environment. We saw no consistent difference in orientation content across age. This was counter to our hypothesis that the typical biases (horizontal/vertical) in the orientation statistics of natural images would emerge and tighten as infants developed postural stability. However, 26 months is still relatively early in the development of walking and postural control. Future work will include comparable data from older children or adults to determine whether orientation content changes with this increased stability or whether the variability due to infant postural instability is small relative to the natural variability in orientation content due to the self-motion observed in adults.
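Two of the frame-level statistics can be sketched as follows; the RMS-contrast definition and the central-crop fraction are illustrative choices, not parameters reported in the abstract:

```python
import numpy as np

def rms_contrast(gray):
    # One common definition: standard deviation of pixel intensities divided
    # by their mean, for a grayscale frame scaled to [0, 1].
    return gray.std() / gray.mean()

def central_contrast_ratio(gray, fraction=0.5):
    # Concentration of contrast toward the frame center: RMS contrast of a
    # central crop relative to the full frame. Values > 1 indicate contrast
    # concentrated at the center. The crop fraction is an illustrative choice.
    h, w = gray.shape
    dh, dw = int(h * fraction / 2), int(w * fraction / 2)
    center = gray[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    return rms_contrast(center) / rms_contrast(gray)
```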
Talk 5, 3:30 pm, 24.15
Evaluation of Graph-Based Visual Saliency model using infant fixation data
Brianna K. Hunter1, Shannon Klotz1, Michaela DeBolt1, Steven Luck1, Lisa Oakes1; 1University of California, Davis
Infant researchers are increasingly relying on computational models of bottom-up visual saliency, such as Graph-Based Visual Saliency (GBVS; Harel et al., 2006), to better understand the development of visual attention. GBVS predicts where an observer will look by decoding low-level features (color, intensity, orientation) of images and computing an overall map based on a weighting of these feature properties. The resulting maps are thought to reflect the distribution of the physical saliency of an image. However, GBVS was designed to predict adult fixations, and it is therefore unclear to what extent this model can reliably predict infant fixations or approximate infant visual saliency processing. We recorded eye gaze from 4- (N = 19), 6- (N = 21), and 10-month-old infants (N = 23) and adults (N = 24) as they viewed up to 48 naturalistic scenes from the MIT Saliency Benchmark Project (Judd et al., 2012). Correlations between each participant’s fixation density map for each scene and the corresponding GBVS saliency map were higher for adults than for infants, indicating poorer GBVS performance for infant fixation data. Maps constructed for each of the individual channels (color, intensity, orientation) revealed that eye gaze was best predicted by orientation. Although GBVS performance did not increase over infancy, comparison of the GBVS-fixation density correlations to the noise ceiling (i.e., leave-one-out fixation density correlations) revealed that at 4 months, physical salience as measured by GBVS, and orientation in particular, accounted for nearly all explainable variation in eye gaze. However, the proportion of explainable variance accounted for by physical salience decreased dramatically across infancy. We suggest that young infants’ limited visual acuity and cortical development may result in qualitatively different processing of physical salience in naturalistic scene-viewing tasks compared to adults. Future work will explore how differences in scene properties (e.g., entropy/clutter) may relate to GBVS performance.
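A minimal sketch of the map-correlation and leave-one-out noise-ceiling logic, assuming Pearson correlation and hypothetical array shapes:

```python
import numpy as np

def map_correlation(a, b):
    # Pearson correlation between two maps of the same shape
    # (e.g., a fixation density map and a GBVS saliency map).
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def loo_noise_ceiling(density_maps):
    # Leave-one-out noise ceiling: correlate each participant's fixation
    # density map with the mean map of the remaining participants.
    # `density_maps` has shape (n_participants, height, width).
    n = len(density_maps)
    total = density_maps.sum(axis=0)
    return np.mean([map_correlation(density_maps[i], (total - density_maps[i]) / (n - 1))
                    for i in range(n)])

# Share of explainable variation captured by physical salience (illustrative):
# share = map_correlation(mean_infant_map, gbvs_map) / loo_noise_ceiling(infant_maps)
```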
Acknowledgements: This work was supported by NIH grants R01EY030127 and R01EY022525
Talk 6, 3:45 pm, 24.16
Infants are sensitive to the Edge Orientation Entropy of Natural Scenes
Philip McAdams1, Sara Svobodova1, Taysa-Ja Newman1, Kezia Terry1, Alice Skelton1, Anna Franklin1; 1Sussex Colour Group & Baby Lab, School of Psychology, University of Sussex, UK
Regularities in edge orientations aid efficient recognition and categorization of scenes and objects (Geisler, 2001; Perrinet & Bednar, 2015; Sigman et al., 2001). The spatial distribution and relationships of oriented edges (Edge Orientation Entropy, EOE; Redies et al., 2017) also predict aesthetics, with greater preference for some image types when edges are more evenly distributed across orientations (e.g., Grebenkina et al., 2018). Here, we investigated whether the developing visual system is sensitive to EOE by measuring infants’ visual preferences for stimuli previously used to identify the relationship between EOE and adult aesthetics (e.g., Grebenkina et al., 2018). Stimuli were greyscale photos of simple and ornamental building facades equalised for luminance. A set of 24 oriented Gabor filters was used to extract the edge orientations and to calculate EOE. Every stimulus was paired with every other stimulus, and each participant saw a random selection of 50 image pairs whilst eye movements were recorded. Twenty-nine 4- to 9-month-old infants took part, and 29 adults were also asked to look freely at the images. Infants looked significantly longer at images with greater EOE, which predicted over half of the variance in infant looking time. Infants and adults both looked longest at images in which all edge orientations are about equally likely to occur (high 1st-order EOE) and at images with low correlation of edge orientations across the image (high 2nd-order EOE). Infant looking time explained over half of the variance in adult looking time and in adult pleasantness ratings taken from Grebenkina et al. (2018). Our results suggest that even at 4 months of age, infants’ spatial vision is sensitive to EOE and is biased toward natural scenes where edges are more evenly distributed across orientations. We also tentatively suggest that high EOE is a ‘perceptual primitive’ of aesthetics, at least for some types of stimuli.
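First-order EOE can be sketched as the Shannon entropy of edge energy across an oriented Gabor filter bank; the filter size and parameters below are illustrative, not those of Redies et al. (2017):

```python
import numpy as np
from scipy import ndimage

def first_order_eoe(gray, n_orientations=24, sigma=6.0, freq=0.1):
    # Shannon entropy of edge energy across a bank of oriented Gabor filters.
    # `gray` is a 2D grayscale image; parameters are illustrative choices.
    y, x = np.mgrid[-15:16, -15:16]
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    energies = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        carrier = np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
        resp = ndimage.convolve(gray.astype(float), envelope * carrier)
        energies.append(np.abs(resp).sum())
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]  # guard against log(0)
    return -np.sum(p * np.log2(p))  # maximal when all orientations are equally likely
```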
Acknowledgements: Funding from European Research Council (COLOURMIND, 772193) and Etta Loves.
Talk 7, 4:00 pm, 24.17
What happens to change-detection if you take away the task? Assessing adult and infant fixation preferences while passively viewing change-detection arrays
Shannon Ross-Sheehy1, Victoria Jones1, Esther Reynolds1; 1University of Tennessee
Canonical adult change-detection tasks utilize button-press responses, and memory is inferred from accuracy. Infant change-detection tasks, however, must rely on passive visual responses to infer change detection, making between-age comparisons difficult. Importantly, recent work revealed that adults and infants scanned 4-item change-detection arrays similarly, dwelling longer on the color-changed circle (i.e., a change preference), however only if they had fixated that circle during the sample array (Eschman & Ross-Sheehy, in press). This is surprising, as adults should have been able to remember all four circles. These results raise two questions: First, do passive, free-viewing tasks elicit the same memory/performance as tasks that incorporate a button press? Second, do these effects persist if arrays exceed capacity? To examine this, 11-month-old infants and adults (n=15 each) were tested in a passive change-detection task, and adults completed an additional block of active trials (i.e., button press to indicate same/different). Gaze was sampled at 500Hz using an EyeLink 1000+. Change-detection arrays consisted of colored circles (set sizes 3, 6, and 9), and each trial included a 1500ms sample array, a 500ms retention interval, and a 3000ms test array. Circle colors either stayed the same from sample to test (No-Change) or varied by one color based on scanning during the sample array (N-back1=last fixated circle, N-back2=second-to-last fixated circle, Change-Other=non-fixated circle). Results revealed a higher change preference for adults, F(1,28)=9.198, p=.005. However, both ages showed condition effects, F(2,56)=88.48, p<.001, with a higher change preference for N-back1 than N-back2 (p=.006) and for N-back2 than Change-Other (p<.001). Change preference was also higher for larger set sizes, F(2,56)=8.95, p<.001, suggesting that visual load influenced scanning and/or encoding. Importantly, adult change preference did not differ between passive and active blocks (all ps>.05), although additional analyses revealed more scanning during the active block, F(1,14)=6.141, p=.027. This suggests that adding an explicit response altered scanning, but not circle preference, during the test array. Additional performance measures and implications for active/passive task designs will be discussed.
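One plausible way to score the change preference from test-array fixations (the abstract’s exact measure may differ; the data layout here is hypothetical):

```python
import numpy as np

def change_preference(test_fixations, changed_id):
    # Proportion of test-array dwell time spent on the changed circle.
    # `test_fixations` is a list of (circle_id, duration_ms) pairs, with
    # circle_id None for fixations landing on no circle. This scoring is
    # an illustrative assumption, not the authors' reported formula.
    on_circles = [(cid, ms) for cid, ms in test_fixations if cid is not None]
    total = sum(ms for _, ms in on_circles)
    if total == 0:
        return np.nan  # no usable fixations on this trial
    return sum(ms for cid, ms in on_circles if cid == changed_id) / total

# Chance level for set size k is 1/k (e.g., 1/3 for ss3, 1/9 for ss9).
```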