V-VSS, June 1-2

Neural Mechanisms and Models I

Talk Session: Thursday, June 2, 2022, 8:30 – 9:45 am EDT, Zoom Session

Talk 1, 8:30 am, 81.71

An ecological model of correspondences between colour and sound

Christoph Witzel1, Gesine Blank2, Nedim Goktepe3; 1University of Southampton, 2Justus-Liebig-Universität, 3Philipps-Universität

Cross-modal correspondences might give insight into how different kinds of perceptual information, such as colour and sound, are combined to make sense of the world. Inspired by previous work on colour preferences (Palmer & Schloss, PNAS, 2010), we investigated whether colour-sound correspondences can be predicted by shared associations with objects and phenomena in the environment. With different participant samples, we measured (1) correspondences between colours and pitch, (2) object associations with colours, and (3) object associations with pitch. We assembled a set of 24 colours that included typical and nontypical colours of basic colour terms. To determine colour-pitch associations, observers adjusted the pitch of pure tones to best match each colour. Loudness was controlled based on data from a preliminary measurement of loudness matches across pitch. To establish colour-object associations, one sample of participants produced associations with each of the 24 colours, and another sample then rated how well those associations matched the colours. To determine object-pitch associations, object concepts were presented, and participants adjusted the pitch that best corresponded with each object or phenomenon. We then predicted colour-pitch associations with a quantitative model that combined the object-colour and object-pitch associations: for each colour, we calculated the weighted average of the object-pitch associations, with the weights being the strengths of the object-colour associations. This model predicted more than half of the variance in the measured colour-pitch associations across the 24 colours. The success of the ecological model supports the idea that cross-modal correspondences are related to objects and phenomena in our environment. We suggest that different kinds of perceptual information are combined in supramodal categories that perceptually identify meaningful objects and phenomena in the environment.
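
As a rough illustration, the weighted-average model described above can be sketched in a few lines of Python; the array shapes, variable names, and simulated values below are hypothetical assumptions, not the authors' data or code.

import numpy as np

def predict_pitch(colour_object_weights, object_pitch):
    # Weighted average of object-pitch associations per colour,
    # weighted by the strength of each object-colour association.
    w = np.asarray(colour_object_weights, dtype=float)  # (n_colours, n_objects)
    p = np.asarray(object_pitch, dtype=float)           # (n_objects,)
    return (w @ p) / w.sum(axis=1)                      # (n_colours,)

# Simulated example: 24 colours x 40 object concepts
rng = np.random.default_rng(0)
weights = rng.random((24, 40))               # object-colour association strengths
pitches = rng.uniform(100, 2000, size=40)    # object-pitch associations (Hz)
predicted = predict_pitch(weights, pitches)  # one predicted pitch per colour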

Acknowledgements: Deutsche Forschungsgemeinschaft Sonderforschungsbereich (SFB) TRR 135 project C2.

Talk 2, 8:45 am, 81.72

Not so fast: Limited validity of deep convolutional neural networks as in silico models for human naturalistic face processing

Guo Jiahui1, Ma Feilong1, Matteo Visconti di Oleggio Castello2, Samuel A. Nastase3, James V. Haxby1, M. Ida Gobbini4,5; 1Center for Cognitive Neuroscience, Dartmouth College, NH, USA 03755, 2Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA 94720, 3Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA 08544, 4Cognitive Science, Dartmouth College, NH, USA 03755, 5Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale, Università di Bologna, Bologna, Italy 40138

Deep convolutional neural networks (DCNNs) trained for face identification can rival and even exceed human-level performance. The relationships between internal representations learned by DCNNs and those of the primate face processing system are not well understood, especially in naturalistic settings. We developed the largest naturalistic dynamic face stimulus set in human neuroimaging research (700+ naturalistic video clips of unfamiliar faces) and used representational similarity analysis to investigate how well the representations learned by high-performing DCNNs match human brain representations across the entire distributed face processing system. DCNN representational geometries were strikingly consistent across diverse architectures and captured meaningful variance among faces. Similarly, representational geometries throughout the human face network were highly consistent across subjects. Nonetheless, correlations between DCNN and neural representations were very weak overall—DCNNs captured 3% of variance in the neural representational geometries at best. Intermediate DCNN layers better matched visual and face-selective cortices than the final fully-connected layers. Behavioral ratings of face similarity were highly correlated with intermediate layers of DCNNs, but also failed to capture representational geometry in the human brain. Our results suggest that the correspondence between intermediate DCNN layers and neural representations of naturalistic human face processing is weak at best, and diverges even further in the later fully-connected layers. This poor correspondence can be attributed, at least in part, to the dynamic and cognitive information that plays an essential role in human face processing but is not modeled by DCNNs. These mismatches indicate that current DCNNs have limited validity as in silico models of dynamic, naturalistic face processing in humans.
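
For readers unfamiliar with representational similarity analysis, the comparison reported above can be sketched as follows; the feature dimensions, the correlation-distance metric, and the Spearman comparison are illustrative assumptions rather than the authors' exact pipeline.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    # Representational dissimilarity matrix in condensed form:
    # pairwise correlation distances between condition patterns.
    return pdist(responses, metric="correlation")

# Simulated example: 700 face clips, DCNN-layer vs. brain-region features
rng = np.random.default_rng(1)
dcnn_rdm = rdm(rng.standard_normal((700, 512)))   # e.g., layer activations
brain_rdm = rdm(rng.standard_normal((700, 200)))  # e.g., voxel patterns

rho, _ = spearmanr(dcnn_rdm, brain_rdm)
print(f"RDM correlation: {rho:.3f} (variance captured ~ {rho**2:.1%})")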

Acknowledgements: This work was supported by NSF grants 1607845 (J.V.H) and 1835200 (M.I.G).

Talk 3, 9:00 am, 81.73

Temporal dynamics of neural ensemble coding of remembered target location in the primate prefrontal cortex

Milad Khaki1, Nasim Mortazavi1, Megan Roussy1, Adam Sachs2, Julio Martinez-Trujillo1; 1University of Western Ontario, 2University of Ottawa

Neurons in the primate lateral prefrontal cortex (LPFC) encode and maintain working memory (WM) representations in the absence of external stimuli. Neural computations underlying spatial WM in primates are traditionally studied using highly controlled tasks that consist of simple 2D visual stimuli and require a saccadic response. Hence, little is known about how populations of LPFC neurons maintain and transform 3D representations of space as animals navigate towards remembered object locations. To explore this issue, we created a spatial WM task set in a 3D virtual environment. The task presents a target in one of nine virtual locations; after a two-second delay, the subject must navigate to the remembered location using a joystick. Neural recordings were conducted in two male rhesus macaques using two 10×10 Utah arrays implanted in the LPFC (area 8A), yielding 3847 neurons. We decoded target location on a single-trial basis using a novel high-efficiency classification technique, which achieved high decoding accuracy from a minimal number of neurons carrying the most target-specific information. In ensembles of 8-12 neurons, decoding accuracy ranged from 60% to 90% (chance ≈ 11%). We determined how neural ensembles encode and maintain information about target locations in three-dimensional space during each trial. Our results demonstrate that ensembles of 2-15 neurons represent each of the nine targets during the trial. Ensembles remain consistent over multiple trials of each session, and the specific target of each trial is decoded with 40% to 60% accuracy above chance. These results indicate that, in addition to the information encoded in single-neuron activity, the temporal dynamics of groups of consistently interacting neurons are also informative and can be used for decoding.
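
The abstract's "novel high-efficiency classification technique" is not spelled out, but the decoding setup can be sketched with a generic linear classifier; the simulated spike counts, tuning strength, and classifier choice below are assumptions for illustration only.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_neurons, n_targets = 900, 10, 9   # e.g., one 8-12 neuron ensemble
targets = rng.integers(0, n_targets, size=n_trials)

# Simulated delay-period spike counts with weak additive target tuning
rates = rng.poisson(5.0, size=(n_trials, n_neurons)) + 0.8 * targets[:, None]

acc = cross_val_score(LinearSVC(dual=False), rates, targets, cv=5).mean()
print(f"single-trial decoding accuracy: {acc:.1%} (chance = 1/9, about 11%)")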

Talk 4, 9:15 am, 81.74

Automatic adjustment of the neural orientation space in astigmatic vision

Sangkyu Son1,2, Hyungoo Kang3, Joonyeol Lee1,2; 1Center for Neuroscience Imaging Research, 2Sungkyunkwan University, 3Catholic Kwandong University

In everyday life, the brain deals with various visual impairments, including astigmatism, which impairs the perception of particular orientations through its meridian-specific blurring. We tend to suffer less from this optical distortion after chronic exposure to astigmatism, but little is known about the underlying neural mechanisms. The current study therefore investigated how the brain recovers orientation information from retinal input distorted by astigmatism. We asked participants with normal vision to report the perceived orientation of briefly presented tilted Gabor stimuli while astigmatism was transiently induced. For comparison, participants with chronic astigmatism performed the identical task with their astigmatism uncorrected. We then estimated neural orientation tuning responses from simultaneously recorded EEG activity in both groups. Under induced astigmatism, orientation tuning responses were severely skewed according to the meridian-specific astigmatic optical blur. In contrast, the skew of the neural orientation responses was far less severe under chronic astigmatism, even though the eyes' refractive errors were similar. When the refractive error was fully corrected, participants with chronic astigmatism showed an inverse bias in orientation perception, suggesting sustained neural mechanisms that automatically compensate for the retinal distortion. Consistently, after long-term exposure, neural orientation responses were enhanced at the optically blurred axis but reduced at the axis orthogonal to it. This automatically counteracts the distortion by which astigmatism weakens orientation information at the blurred axis. Our computational model further confirmed that the amount of compensation in chronic astigmatism was well correlated with a push-pull gain modulation of neural orientation responses. This novel evidence suggests an automatic neural compensation counteracting orientation-specific retinal aberration in astigmatism, which provides practical guidance for clinical settings.
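
The push-pull gain modulation described above can be sketched as a gain profile over orientation channels; the circular-Gaussian tuning form and all parameter values are illustrative assumptions, not the authors' fitted model.

import numpy as np

def tuning(theta, pref, width=20.0):
    # Circular Gaussian orientation tuning (degrees, 180-degree period)
    d = np.angle(np.exp(1j * np.deg2rad(2 * (theta - pref)))) / 2
    return np.exp(-np.rad2deg(d) ** 2 / (2 * width ** 2))

prefs = np.arange(0, 180, 5)   # preferred orientations of 36 channels
blur_axis = 90.0               # meridian blurred by astigmatism
depth = 0.3                    # push-pull modulation depth

# Gain peaks for channels tuned near the blurred axis ("push") and
# dips near the orthogonal axis ("pull"), compensating for the blur.
gain = 1 + depth * np.cos(np.deg2rad(2 * (prefs - blur_axis)))

responses = gain * tuning(45.0, prefs)   # compensated population response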

Acknowledgements: This work was supported by Institute for Basic Science Grant (IBS-R015-D1).

Talk 5, 9:30 am, 81.75

Drawing in the mind’s eye: Developing targeted routines for assessing and enhancing visual ‘learning through drawing’ following treatment for congenital blindness

Sharon Gilad-Gutnick1, Anna Musser1, Matt Groth1, Michal Fux2, Pragya Shah3, Priti Gupta4, Pawan Sinha1; 1Massachusetts Institute of Technology, 2Tufts University School of Medicine, 3Shroff Charitable Eye Hospital, 4Indian Institute of Technology, Delhi

Drawing provides a useful window into aspects of visual representation and the crosstalk between perceptual and motor systems. One challenge in studying how these skills develop lies in the temporally staggered timelines of visual versus fine motor development in typically developing infants: babies acquire significant visual sophistication within the first year but begin to engage in drawing only in toddlerhood. However, our work with a unique group of children born blind and left to languish without treatment for several years allows a closer merging of these two timelines. In our scientific and humanitarian initiative, Project Prakash, we identify and provide surgical sight treatment to such children. Here, we describe our longitudinal tests of visual-motor integration and reading/writing readiness. We created a series of assessments to track the developmental trajectory of basic tracing, copying, and drawing skills in both the haptic and visual domains. Our tasks address two related aspects of visual development: (a) the emergence of an internal representation of the visual world, and (b) the translation of this representation onto a 2D surface when drawing. I will present multiple analyses performed on this rich data set, including measures of recognizability, a semantic annotation platform for crowdsourcing labels of meaningful strokes, and a survey for quantifying the multi-dimensional developmental trajectory of drawing, including perspective, occlusion, and gestalt representation. Overall, we find that while children’s drawings become more recognizable as they gain visual experience, specific representational dimensions continue to show impairments. These limitations cannot be explained by delays in fine motor skills, as no such delays are found soon after treatment. I will introduce our efforts to incorporate these assessments into a pilot educational program for newly sighted children, designed to support them as they learn to integrate vision and scaffold off the abilities they formed while blind.
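
As an illustration of the recognizability measure mentioned above, one simple crowdsourced score is the fraction of annotators whose label matches the intended target; the function and data below are hypothetical, not the authors' annotation platform.

from collections import Counter

def recognizability(intended, crowd_labels):
    # Fraction of crowd labels that match the intended drawing target
    counts = Counter(label.strip().lower() for label in crowd_labels)
    return counts[intended.lower()] / len(crowd_labels)

print(recognizability("house", ["house", "House", "box", "house"]))  # 0.75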

Acknowledgements: NEI(NIH) R01 EY020517