Motion: Neural mechanisms, models, perception
Talk Session: Monday, May 22, 2023, 8:15 – 9:45 am, Talk Room 2
Moderator: Alan Stocker, University of Pennsylvania
Talk 1, 8:15 am, 41.21
Laminar fMRI using spin-echo BOLD reveals feedback and feedforward representations in the human primary visual cortex
Royoung Kim1,2, SoHyun Han1,2, Won Mok Shim1,2; 1Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea, 2Sungkyunkwan University (SKKU), Suwon, Republic of Korea
Ultra-high field fMRI provides an opportunity to examine neural activity across cortical layers, informing the directionality of neural signaling, such as feedforward and feedback. The gradient-echo (GE) BOLD signal has limited spatial specificity due to its sensitivity to draining veins at the cortical surface, whereas the spin-echo (SE) BOLD signal is sensitive to small vessels close to neural activity, leading to high spatial specificity. Here, we examined the layer-dependent neural activity of stimulus- and internally driven representations in the human primary visual cortex (V1) using GE-BOLD and SE-BOLD signals. We acquired GE-BOLD and SE-BOLD signals simultaneously while participants viewed apparent motion (AM) stimuli. The AM stimuli comprised alternating presentations of two gratings whose orientations were orthogonal to each other. We localized regions of interest (ROIs) in V1 corresponding to the stimuli’s locations and the mid-point between them, where actually presented and internally interpolated orientations are represented, respectively. The results showed that in the stimulus ROIs, the GE-BOLD signal increased toward the superficial layers, whereas this trend was less pronounced in the SE-BOLD signal. On the other hand, in the internally driven ROIs, responses were absent in the GE-BOLD signal, whereas significant responses were found across layers in the SE-BOLD signal. We also reconstructed the orientation represented in each layer using an encoding model. With the GE-BOLD signal, we found the highest orientation selectivity in the superficial layers for both ROIs. In contrast, with the SE-BOLD signal, we observed distinct layer-dependent orientation responses for the different ROIs: in the stimulus ROIs, orientation selectivity was higher in the middle layers, where feedforward signals are dominant, whereas in the internally driven ROIs, orientation selectivity was higher in the superficial layers, where feedback signals are dominant. Our results suggest that SE-BOLD offers high spatial specificity across cortical layers, which enables us to distinguish internally driven feedback representations from stimulus-driven feedforward representations.
Acknowledgements: This work was supported by IBS-R015-D1, NRF-2019M3E5D2A01060299, and NRF-2019R1A2C1085566, and by the Fourth Stage of the Brain Korea 21 Project in the Department of Intelligent Precision Healthcare, Sungkyunkwan University (SKKU).
Talk 2, 8:30 am, 41.22
Temporal evolution processes of a motion-induced position shift ride neural theta oscillations
Ryohei Nakayama1, Kaoru Amano2, Ikuya Murakami1; 1Department of Psychology, The University of Tokyo, 2Graduate School of Information Science and Technology, The University of Tokyo
Nakayama and Holcombe (2021) found that on a dynamic noise background, but not on a static noise background, the perceived disappearance location of a moving object is shifted in the direction of motion. The present study investigates the temporal evolution of this motion-induced position shift. In a psychophysical experiment, the amount of the shift was estimated with a task in which participants judged the disappearance location of a moving object relative to a flash presented at variable spatiotemporal offsets across trials. The position shift was zero if the flash coincided with the moving object’s disappearance, after which the position shift gradually evolved in the pre-disappearance motion direction until ~120 ms, implying a relatively sluggish evolution process. In an EEG experiment, the amount of the shift was estimated on every trial with an adjustment task in which the locations of stationary objects were matched to the disappearance locations of moving objects. The amount of the position shift correlated with parietal theta phase (3–5 Hz) for several hundred milliseconds before the disappearance (~−600 to 0 ms), and also with theta power after the disappearance (~0–400 ms). ERP analyses further revealed a correlation between the position shift and an anterior late negativity (~300–600 ms). The overall results suggest that theta phase predicts the temporal evolution period of a motion-induced position shift, and that theta power increases during the temporal evolution. The anterior late negativity may represent prediction-error signals for ceasing the temporal evolution.
Acknowledgements: Supported by JSPS KAKENHI 21K13745 to RN and 18H05523 to IM
Talk 3, 8:45 am, 41.23
Observed social touch is processed in a rapid, feedforward manner: an EEG-fMRI fusion study
Haemy Lee Masson1, Leyla Isik1; 1Johns Hopkins University
Observing social touch evokes a strong social-affective response. Our ability to extract the social-affective meaning of observed touch is supported by enhanced communication between brain networks, including social brain regions and somatosensory cortex. Yet the direction of information flow across these networks and the overall neural dynamics of these processes remain unknown. The current study uses electroencephalography (EEG) to uncover how representations unfold spatiotemporally in the brain during touch observation. Twenty participants watched 500 ms video clips showing social and non-social touch during EEG recording. Representational similarity analysis reveals that EEG neural patterns are explained by visual features beginning at 90 ms post video onset. Social-affective features are processed shortly after, explaining neural patterns beginning at 150 ms. Next, we tracked the spatiotemporal neural dynamics by combining the EEG data with fMRI data from our prior study. We examined information flow across three key brain regions: early visual cortex (EVC), temporoparietal junction/posterior superior temporal sulcus (TPJ/pSTS), and somatosensory cortex. We find that neural information first arises in EVC 50 ms post video onset, is then processed by TPJ/pSTS at 110 ms, and finally reaches somatosensory cortex at 190 ms. Lastly, variance partitioning analysis reveals that EEG neural patterns are uniquely explained by EVC 94 ms post video onset, then by TPJ/pSTS at 190 ms. EEG signals in TPJ/pSTS contain information about the sociality of the video clips. Importantly, somatosensory cortex does not explain any unique variance but shares variance with TPJ/pSTS in explaining the EEG data. These results suggest that social touch is processed quickly by the brain, within the timeframe of feedforward visual processes. The social-affective meaning of observed touch is first extracted by social vision, followed by the later involvement of somatosensory simulation. This fast processing may underlie our ability to quickly and effectively use social touch for interpersonal communication.
Talk 4, 9:00 am, 41.24
What causes motion silencing?
Qihan Wu1, Jonathan I. Flombaum1; 1Johns Hopkins University
When many dots group to form a rotating ring, color changes among them go unnoticed, changes that are otherwise salient when the ring remains stationary. This ‘Motion Silencing’ (MS) illusion is a striking failure of change perception. Why does MS occur? We sought to distinguish between the possibility that the illusion is caused by lower-level limitations on processing and the possibility that it reflects uncertainty among higher-level inferential mechanisms. In Experiment 1, we developed a search-inspired methodology to quantify the illusion. Participants saw two side-by-side rings of multicolored dots, in one of which the dots changed color rapidly. The task was to report which ring included the changes. When both rings were stationary, response latencies were fast. When the rings rotated, latencies were longer and increased with rotation speed, revealing the presence of MS: the rotating, changing ring seemed to observers like the one that rotates without changing. We employed this method to identify two conditions in which MS does not obtain despite dot motion. In Experiment 2, dots were arranged in a hollow square. Significantly less silencing obtained when the dots translated around the square’s perimeter than when the square rotated around its center. In Experiment 3, dots were arranged in a column. When dots translated in opposite horizontal directions, the resulting impression of a rotating cylinder produced strong MS; but nearly none obtained when all the dots translated in one direction. These experiments imply limits on the motion conditions that produce MS, with strong silencing specifically for rotation. By contrasting stimuli with nearly identical lower-level properties, these experiments suggest that silencing cannot be the consequence of those properties or related processing limitations. We suggest that silencing arises in the process of visual inference, perhaps specifically during inferences about whole-object rotation.
Talk 5, 9:15 am, 41.25
Knowledge of others’ biomechanical constraints shapes movement perception
Antoine Vandenberghe1, Gilles Vannuscorps1,2; 1Psychological Sciences Research Institute, Université catholique de Louvain, Belgium, 2Institute of Neuroscience, Université catholique de Louvain, Belgium
After seeing a moving object suddenly disappear, observers typically mislocate its final position to where that object would have been a few milliseconds later. This “forward displacement” (FD) is thought to reflect online predictions, by the visual system, of the likely future position of moving objects. Here, I will present the results of three studies that collectively demonstrate that the FD elicited by body-movement perception is modulated by implicit, unconscious knowledge of actor-specific biomechanical constraints. In these studies, participants watched videos of two actors performing rotations of the right shoulder. In 80% of the trials (familiarization trials), the videos depicted movements directed toward the body. Movements of the “flexible” actor started far from the body and movements of the “rigid” actor started closer to the body, reflecting the two actors’ different flexibility. In the remaining 20% of the trials (trials of interest), both actors performed the same movement directed away from the body. This movement was such that it would have been easy to continue for the “flexible” actor, but impossible to continue for the “rigid” actor. Participants had to indicate whether the arm of the actor, depicted in a picture displayed shortly after the video disappeared, was in the same position as at the end of the video. In the first study, participants were explicitly told that one actor was flexible and one was rigid. In the other two, they learned this information implicitly, through the familiarization trials. In all three experiments, analyses of responses to the trials of interest indicated that there was more FD when participants observed the flexible actor than the rigid actor, and the effect was similar for participants who reported that they did not consciously notice a difference in flexibility between the actors. Thus, unconscious knowledge of actor-specific biomechanical abilities affects the perceptual prediction of their movements.
Talk 6, 9:30 am, 41.26
Heading estimation from optic flow is Bayesian but strongly modulated by the size of the experimental response range
Linghao Xu1, Qi Sun2,3, Alan Stocker4; 1Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, U.S.A, 2Department of Psychology, Zhejiang Normal University, Jinhua, P.R.C, 3Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, P.R.C, 4Department of Psychology, University of Pennsylvania, Philadelphia, PA, U.S.A
Humans can determine their heading accurately from optic flow. However, previous studies have shown that heading directions are sometimes underestimated (e.g., Sun et al., 2020) yet other times overestimated (e.g., Cuturi & MacNeilage, 2013) despite using similar optic flow stimuli. Here we show that these contrasting findings do not reflect actual differences in heading perception but are caused by experimental differences in how participants reported their estimates. We ran a psychophysical experiment in which participants virtually translated through a 3D dot-cloud space with different heading directions. On each trial, an optic flow display was presented indicating a heading direction uniformly sampled from a range of ±33 deg. After stimulus presentation, participants reported their perceived heading by adjusting a probe on a circle. Three conditions were tested that differed only in the size of the response range within which participants were able to report their estimates (80, 160, or 240 deg on the circle). We found that heading estimates were proportionally scaled with the size of the response range, such that they were overestimated in the large range conditions but underestimated in the smallest range condition. We also derived a Bayesian observer model to quantitatively characterize participants’ estimation behavior. The model assumes efficient sensory encoding that reflects the neural coding accuracy in area MSTd (Gu et al., 2010). Furthermore, it assumes estimates that are linearly scaled for each range condition. We found that this model predicted participants’ estimates quantitatively well, both in terms of mean and variance. Our results imply that participants’ heading percepts are identical under all three response conditions and are well explained by a Bayesian observer model. The differences in reported estimates can be attributed solely to the different sizes of the response range and are fully explained by a linear mapping from the percept to the probe response.
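The mechanism proposed in this abstract can be illustrated with a toy simulation. The following sketch is hypothetical and is not the authors’ model: a zero-centered Gaussian prior stands in for the efficient MSTd-based encoding, and the per-condition gains are made-up values. It shows how an identical Bayesian percept in every condition, combined with a range-dependent linear percept-to-probe gain, yields underestimation in one condition and overestimation in another.

```python
import random
import statistics

# Hypothetical toy, not the authors' model: the Gaussian prior is a
# stand-in for efficient MSTd-based encoding, and the per-condition
# gains below are made-up values chosen only to illustrate the mechanism.

random.seed(1)

SIGMA = 4.0      # sensory noise in deg (assumed)
PRIOR_SD = 30.0  # width of a zero-centered Gaussian prior in deg (assumed)

def bayes_percept(heading):
    """Posterior-mean heading estimate; identical in every condition."""
    measurement = heading + random.gauss(0.0, SIGMA)
    weight = PRIOR_SD**2 / (PRIOR_SD**2 + SIGMA**2)
    return weight * measurement

def reported_estimate(heading, gain):
    """Linear percept-to-probe mapping; only the gain differs by condition."""
    return gain * bayes_percept(heading)

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

headings = list(range(-33, 34, 3))        # stimulus range, +/-33 deg
gains = {80: 0.8, 160: 1.3, 240: 1.9}     # hypothetical gain per response range

# Slope of mean report vs. true heading per condition:
# < 1 means underestimation, > 1 means overestimation
slopes = {}
for range_deg, gain in gains.items():
    means = [statistics.mean(reported_estimate(h, gain) for _ in range(200))
             for h in headings]
    slopes[range_deg] = slope(headings, means)
```

With the percept model held fixed across conditions, only the gain changes the sign of the bias: the smallest made-up gain produces a report-vs-heading slope below 1 (underestimation) and the largest a slope above 1 (overestimation), mirroring the qualitative pattern described in the abstract.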
Acknowledgements: National Natural Science Foundation of China (No. 32200842) to Dr. Qi Sun.