Talk 1, 8:15 am
Does perspective distortion modulate the temporal tuning of symmetry responses?
Symmetry is a highly salient feature in both natural and man-made environments. Numerous species are sensitive to symmetry, and symmetry is thought to be an important cue for visual tasks, including viewpoint-invariant representation of objects, detection of regularity and structure, and mate selection. However, although symmetries are common in natural and artificial objects and scenes, they are subject to perspective distortion and thus rarely give rise to symmetrical patterns on the retina during natural vision. Here, we build on previous studies showing that perspective distortion makes symmetry responses weaker and more task-dependent (Makin et al., 2014; Keefe et al., 2018) by investigating the effect of perspective distortion on the temporal tuning of symmetry responses. We used novel, naturalistic 3D objects with reflection symmetry about a vertical axis. The objects were procedurally generated along with well-matched control objects without any symmetries, and then rendered to produce images in which object symmetries were either present in the image plane or perspective-distorted. We measured visual system responses to image-plane and perspective-distorted symmetry using high-density EEG with a steady-state visual evoked potential (SSVEP) paradigm in which images of symmetrical objects alternated with images of control objects. This makes it possible to isolate symmetry-specific brain activity in the odd harmonics of the stimulation frequency. To investigate the temporal tuning of these responses, we used seven different stimulation frequencies, between 1 and 10 Hz, in different conditions. We collected data from 30 participants with normal or corrected-to-normal visual acuity. We found that for both image-plane and perspective-distorted symmetry, responses peaked at 2 Hz and were much reduced at higher frequencies across electrodes over occipital and temporal cortex.
Response amplitudes were generally higher for image-plane symmetry, but surprisingly, the spatial tuning was not strongly modulated by perspective distortion. Further investigations will determine how distinct visual regions may contribute to these results.
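The frequency-tagging logic described above, in which symmetry-specific activity falls on the odd harmonics of the symmetry/control alternation rate, can be sketched as follows. This is a minimal illustration on synthetic data; the sampling rate, component amplitudes, and function names are assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def odd_harmonic_amplitudes(signal, fs, f_stim, n_harmonics=4):
    """Amplitude spectrum evaluated at odd harmonics of the stimulation
    frequency. In a symmetry/control alternation at f_stim, responses that
    differ between the two image types appear at f_stim and its odd
    harmonics; responses common to both image types fall on even harmonics."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n * 2        # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    odd = [(2 * k + 1) * f_stim for k in range(n_harmonics)]
    idx = [int(np.argmin(np.abs(freqs - f))) for f in odd]
    return dict(zip(odd, amps[idx]))

# Synthetic "EEG": a symmetry-specific response at the 2 Hz alternation rate
# plus a response common to both image types at the 4 Hz image-update rate.
fs, dur, f_stim = 500, 10, 2.0
t = np.arange(0, dur, 1 / fs)
eeg = 1.0 * np.sin(2 * np.pi * f_stim * t) + 0.5 * np.sin(2 * np.pi * 2 * f_stim * t)
amps = odd_harmonic_amplitudes(eeg, fs, f_stim)       # keys: 2, 6, 10, 14 Hz
```

Only the 2 Hz component survives at the odd harmonics; the 4 Hz "common" response is ignored, which is the point of the odd-harmonic isolation.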
Acknowledgements: This work was supported by the Vision Science to Applications (VISTA) program funded by the Canada First Research Excellence Fund (CFREF, 2016–2023) and by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada awarded to PJK.
Talk 2, 8:30 am
Magnitude of Middle Temporal N300 Reflects Contour Fidelity in Contour Integration
Contour integration (CI) reflects the ability of the visual system to bind individual elements into a globally coherent shape. Previous neuroimaging studies of CI compared behavioral and neural data using stimuli with and without contours. However, these studies had multiple potential confounding variables, including the degree of visual awareness, temporal prediction, and task-relevance. To address these, we presented an array of 27 × 15 line segments (each 2 × 0.5°) that changed their orientations independently and randomly at a rate of 15 Hz. Each trial comprised 25 frames (1.67 s). During one of these frames, the orientations of 12 line segments were aligned to form a contour (the outline of a box with 3 segments on each side), at 6° to the right or left of fixation. Subjects (n=10) fixated on a central dot throughout the trial and indicated whether the contour was on the right or left. EEG signals were collected and analyzed along with task performance. In Experiment 1, we varied the onset timing of the contour to test the effects of temporal prediction. Results showed that ERPs including Frontal P200, Occipital N200 and P400, Parietal P400 and contralateral N200, as well as Middle Temporal (MT) N300 were synchronized with the onset timing of the contour. In Experiment 2, we fixed the onset timing of the contour but varied its fidelity by adding various levels of random orientation jitter to the line segments that formed the contour. Results showed that among all the ERPs observed in Experiment 1, only the magnitudes of MT N300 and Parietal P400 were dependent on contour fidelity. As reported in the literature, Frontal P200 and Posterior P400 are likely to reflect awareness/attention and task-related effort, respectively, and Posterior N200 is likely to be a correlate of visual phenomenal consciousness. We therefore propose MT N300 as a neural correlate of contour integration.
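The stimulus construction above, a field of randomly oriented segments with a 12-segment box contour embedded on one frame and degraded by orientation jitter, can be sketched like this. The grid coordinates, box placement, and target orientations are illustrative assumptions, not the authors' exact geometry.

```python
import numpy as np

def contour_frame(jitter_deg=0.0, side="right", rng=None):
    """One 27 x 15 frame of random segment orientations (degrees) with an
    embedded box outline: 3 collinear segments per side, 12 in total.
    Each contour segment's orientation is perturbed by uniform jitter.
    Returns the orientation array and the contour segment coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    frame = rng.uniform(0.0, 180.0, size=(15, 27))     # rows x cols
    r0 = 6                                             # assumed top of box
    c0 = 18 if side == "right" else 6                  # assumed column offset
    contour = (
        [(r0, c0 + i, 0.0) for i in range(3)]               # top side
        + [(r0 + 4, c0 + i, 0.0) for i in range(3)]         # bottom side
        + [(r0 + 1 + i, c0 - 1, 90.0) for i in range(3)]    # left side
        + [(r0 + 1 + i, c0 + 3, 90.0) for i in range(3)]    # right side
    )
    for r, c, ori in contour:
        frame[r, c] = (ori + rng.uniform(-jitter_deg, jitter_deg)) % 180.0
    return frame, [(r, c) for r, c, _ in contour]

frame, coords = contour_frame(jitter_deg=0.0, rng=np.random.default_rng(0))
```

Increasing `jitter_deg` reproduces the fidelity manipulation of Experiment 2: at 0° the 12 segments are perfectly aligned, and larger values progressively break the contour.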
Acknowledgements: NIH Grant EY030253
Talk 3, 8:45 am
Perceptual Grouping with Latent Noise
Humans effortlessly group elements into objects and segment them from the background and from other objects without supervision. For example, the black and white stripes of a zebra are grouped together despite their vastly different colors. A thorough theoretical and empirical account of perceptual grouping is still missing: Deep Neural Networks (DNNs), which are considered leading models of the visual system, still regularly fail at simple perceptual grouping tasks. Here, we propose a counterintuitive unsupervised computational approach to perceptual grouping and segmentation: that they arise because of neural noise, rather than in spite of it. We show that adding noise in a DNN enables the network to separate objects and to segment images even though it was never trained on any segmentation labels. To test whether the models exhibit perceptual grouping, we introduce the Good Gestalt (GG) datasets – six datasets based on a century of Gestalt principles, specifically designed to test perceptual grouping. These include illusory contours, closure, continuity, proximity, and occlusion. Our DNN using neural noise finds the correct perceptual groups while control models, including state-of-the-art segmentation models, fail at these critical tests. We further show that our model performs well with remarkably low levels of noise and requires only a few successive time steps to compute. Using simplifying but realistic assumptions from optics, we are also able to mathematically link our model’s perceptual grouping performance to image statistics. Together, our results suggest a novel unsupervised segmentation method requiring few assumptions, a new explanation for the formation of perceptual groups, and a novel benefit of neural noise.
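The core idea, grouping emerging because of noise rather than in spite of it, can be illustrated without any trained network: if noise perturbs all elements of a latent object coherently, then clustering elements by how their responses co-vary across noise samples recovers the objects with no segmentation labels. The toy construction below is ours, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_steps = 40, 200
labels_true = np.repeat([0, 1], n_pixels // 2)        # two latent objects
object_noise = rng.normal(size=(2, n_steps))          # shared within an object
pixel_noise = 0.2 * rng.normal(size=(n_pixels, n_steps))
responses = object_noise[labels_true] + pixel_noise   # per-pixel time courses

# Group pixels by correlation with a reference pixel: two pixels belong to
# the same object iff their responses co-vary strongly across noise samples.
corr = np.corrcoef(responses)
labels_hat = (corr[0] < 0.5).astype(int)              # pixel 0 anchors group 0
accuracy = (labels_hat == labels_true).mean()
```

Because each object's pixels share one noise source, within-object correlations are near 1 while between-object correlations hover near 0, so the unsupervised grouping is essentially perfect despite, and indeed thanks to, the noise.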
Acknowledgements: BL was supported by the Swiss National Science Foundation grant n. 176153 "Basics of visual processing : from elements to figures".
Talk 4, 9:00 am
Forging a head: How internal axes and external visual elements determine a shape’s perceived facing direction
Human perceivers are very sensitive to which way others are facing, with head and gaze cues capturing attention (when directed at us), orienting attention (when directed elsewhere), and even influencing downstream judgments about others’ social traits. But what causes us to see a shape as directed in the first place? Does the perception of a shape’s facing direction depend mainly on its intrinsic structure — or might it also be influenced by spatial context? In Experiment 1, observers briefly viewed a randomly oriented oval, and afterward used a circular slider to report which way they saw it facing. A dot was always drawn near the oval — aligned with either its long or short symmetry axis. Observers were biased to see the oval as facing toward the dot, but this effect was much stronger when the dot was aligned with the oval’s long (vs. short) symmetry axis, indicating that external elements interact with a shape’s internal structure to determine its perceived facing direction. How automatic is this association between long-axis alignment and ‘towardness’? In Experiment 2, participants saw the same displays, but now made speeded keypresses to indicate whether the oval’s long or short axis was aligned with the dot. In one block of trials, they pressed an anterior (further forward) key to report long-axis alignment, and a posterior (further back) key to report short-axis alignment. In another block, they responded with the opposite key-mappings. Participants responded faster in the block where an anterior key was paired with long-axis alignment and a posterior key with short-axis alignment, suggesting an automatic association between long-axis alignment and ‘towardness’. We conclude that the perception of facing direction is driven by the interaction of internal structure and external context, in a way that highlights the particular salience of the long symmetry axis.
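One simple way to quantify the bias reported in Experiment 1 is the fraction of trials on which the circular-slider report falls on the dot's side of the oval. The sketch below uses that metric; the function name and the angular convention are our assumptions, not the authors' analysis.

```python
import numpy as np

def toward_dot_rate(report_deg, dot_deg):
    """Fraction of trials on which the reported facing direction lies
    within +/-90 degrees of the dot's direction from the oval's center,
    i.e. the oval was seen as facing the dot's side of the display."""
    diff = (np.asarray(report_deg) - np.asarray(dot_deg) + 180.0) % 360.0 - 180.0
    return float(np.mean(np.abs(diff) < 90.0))

# Example: with the dot at 0 degrees, two of three reports face its side.
rate = toward_dot_rate([10.0, 350.0, 170.0], [0.0, 0.0, 0.0])
```

Comparing this rate between long-axis-aligned and short-axis-aligned dot trials would express the axis effect described above as a single difference score.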
Talk 5, 9:15 am
Phenomenological Contraction does not depend on explicit cues to occlusion
Just as our visual system enables us to experience the things we are looking at, so too does it allow us to experience aspects of those things that are not visible, as is the case when an object is partially occluded from view. Our research focuses on understanding how this ‘seeing what is not there’ is accomplished. While amodal completion is a well-studied process that allows us to experience partially occluded objects as complete, a lesser-known phenomenon is that amodally completed parts of objects tend to appear smaller than identically sized counterparts that are fully visible. This phenomenological contraction was first described by Kanizsa and has not been studied nearly as much as the mechanisms behind amodal completion itself. We developed a paradigm using a stimulus composed of two triangles, one partially occluding the other, that allowed us to quantify phenomenological contraction by measuring the mislocalization of the occluded triangle’s vertex. In our previous work we found that the contraction is independently influenced by the size of the occluded object, corner angle, and the level of occlusion. Our current research seeks to determine whether such mislocalization/contraction depends on the presence of an explicit occluder. Across three experiments, our approach was to replicate our previous experiments while presenting only partial contours of a single triangle without an explicit occluder. We again found independent influences of corner angle and level of occlusion (how much contour was visible), even when the partial contour consisted of only two line segments (lacking the base of the triangle). Based on our data, we conclude that phenomenological contraction arises from mechanisms of interpolation and/or extrapolation that are largely independent of explicit cues to occlusion, and as such should be considered separate from amodal completion.
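The mislocalization measure described above can be formalized as the signed displacement of the reported vertex along the triangle's base-to-apex axis, with positive values indicating contraction (the vertex seen short of its true position). The coordinate convention and names below are our assumptions:

```python
import numpy as np

def contraction(true_vertex, reported_vertex, base_center):
    """Signed mislocalization of the occluded vertex, projected onto the
    axis running from the triangle's base toward its hidden apex.
    Positive values mean the vertex was seen closer to the base than it
    really is, i.e. phenomenological contraction."""
    apex_axis = np.asarray(true_vertex, float) - np.asarray(base_center, float)
    apex_axis /= np.linalg.norm(apex_axis)
    error = np.asarray(true_vertex, float) - np.asarray(reported_vertex, float)
    return float(error @ apex_axis)

# True apex at (0, 10), base centered at the origin; the vertex reported
# 1 unit short of the apex gives a contraction of +1.
c = contraction((0.0, 10.0), (0.0, 9.0), (0.0, 0.0))
```

Projecting onto the apex axis keeps the measure insensitive to lateral localization error, so it isolates the shrink-toward-the-occluder component of interest.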
Talk 6, 9:30 am
Investigating the breadth and strength of perceptual control of Illusory Apparent Motion
Recently, a stimulus called Illusory Apparent Motion (IAM) was discovered by Davidenko et al. (2017), wherein pixel textures randomly refreshing at a rate of 1.5 Hz generate the appearance of coherent apparent motion. IAM is a maximally ambiguous multistable stimulus that observers may perceive as moving coherently in countless patterns (e.g., translation, shear, rotation, expansion-contraction). The current set of studies explores observers’ ability to perceptually control the appearance of IAM. The first two experiments used paradigms similar to those used with other multistable stimuli. Experiment 1 (n = 99) used a motion-priming persistence task, based on the methods of Davidenko et al. (2017), while Experiment 2 (n = 76) used a dynamic report task with no priming, based on the methods of Kohler et al. (2008). In both experiments, participants successfully controlled translational motion by ‘changing’ or ‘holding’ their percepts, indicating that observers are capable of perceptually controlling IAM, much as they can other multistable stimuli. Having established this, Experiment 3 (n = 43) explored the breadth of participants’ ability to perceive and control motion in IAM by testing them on 14 types of translational, shear, rotating, and expanding-contracting motion patterns. Participants were able to perceive a wide variety of motion patterns but were limited in the motion patterns they could control. Finally, Experiment 4 (n = 82) aimed to quantify the influence of perceptual control in biasing perception of IAM by presenting participants with a motion-nulling signal (above and below each participant’s perceptual threshold) while they attempted to control the motion. This allowed us to quantify the strength of perceptual control of IAM relative to low-level motion signals. Collectively, these studies provide evidence for the breadth and strength of observers’ ability to perceptually control IAM.
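One common way to express the strength of perceptual control against a nulling signal, as in Experiment 4, is the nulling level at which the intended direction is reported on only 50% of trials. A minimal sketch, with linear interpolation standing in for a full psychometric fit and made-up numbers:

```python
import numpy as np

def nulling_point(null_levels, p_controlled):
    """Nulling-signal strength at which the intended (controlled) motion
    direction is reported on 50% of trials. p_controlled is assumed to
    decrease as the nulling level increases; np.interp needs ascending
    x-values, so both arrays are reversed before interpolating."""
    return float(np.interp(0.5, p_controlled[::-1], null_levels[::-1]))

levels = np.array([0.0, 0.1, 0.2, 0.3])   # nulling coherence (illustrative)
p = np.array([0.9, 0.7, 0.4, 0.2])        # proportion of controlled reports
strength = nulling_point(levels, p)
```

A larger crossover value means a stronger nulling signal is needed to cancel the observer's intended percept, i.e. stronger perceptual control relative to the low-level motion signal.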