VSS, May 13-18

Perception and Action

Talk Session: Sunday, May 15, 2022, 5:15 – 7:15 pm EDT, Talk Room 2
Moderator: Mike Landy, NYU

Talk 1, 5:15 pm, 35.21

Robust changes in confidence efficiency during post-decision time windows

Tarryn Balsdon1,2, Valentin Wyart2, Pascal Mamassian1; 1École Normale Supérieure and CNRS, 2École Normale Supérieure and INSERM

Perceptual decisions are accompanied by feelings of confidence that reflect decision validity. Though these feelings of confidence rely on perceptual evidence, dissociations between confidence and perceptual sensitivity are common. One explanation for these dissociations is that confidence utilises ongoing processing after the completion of perceptual decision processes (Pleskac and Busemeyer, 2010, Psych Rev). Here we provide causal evidence for this claim by showing robust differences in confidence efficiency depending on the duration of post-decision time windows. We measured confidence efficiency using a forced-choice design: human observers chose which of two consecutive perceptual decisions was more likely to be correct. Post-decision time pressure was manipulated (whilst leaving stimulus presentation duration constant) by forcing observers to wait before entering their response, or by cueing them to respond almost immediately (leaving limited time for ongoing processing before the next trial). This manipulation had limited effects on perceptual sensitivity, but large effects on confidence efficiency. The effect on confidence efficiency depended on the level of processing of the perceptual decision. For high-level perceptual decisions (discriminating the direction of gaze of an avatar face), confidence efficiency benefitted from additional time. But for low-level perceptual decisions about the same stimuli (discriminating the relative contrast of the eyes’ irises), confidence efficiency diminished with time. Over five experiments, we demonstrate the effect of perceptual decision level within subjects (Exp. 1 and 2) and the effect of time pressure within subjects (Exp. 3 and 4). In Experiment 5, we generalise these findings to biological motion stimuli. Robust differences in confidence efficiency can be generated within subjects, independently of perceptual sensitivity, by manipulating post-decision time windows. These results suggest that confidence strongly relies on the post-decisional processing of ongoing internal representations, which quickly degrade for low-level perception.
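
The logic of this forced-choice confidence measure can be made concrete with a toy simulation: two noisy decisions per pair, a noisy post-decisional confidence read-out, and a forced choice of the decision more likely to be correct. All parameter values below are illustrative assumptions; the published efficiency measure additionally benchmarks human confidence choices against an ideal confidence observer, which this sketch omits.

```python
# Toy simulation of a confidence forced-choice pair (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 10_000       # pairs of perceptual decisions
d_prime = 1.0          # assumed perceptual sensitivity
conf_noise = 0.5       # extra noise on the post-decisional confidence read-out

# Each interval: signed stimulus, decision from noisy evidence.
stim = rng.choice([-0.5, 0.5], size=(n_pairs, 2)) * d_prime
evidence = stim + rng.normal(size=(n_pairs, 2))
correct = np.sign(evidence) == np.sign(stim)

# Confidence read-out: evidence magnitude corrupted by post-decisional noise.
conf = np.abs(evidence) + conf_noise * rng.normal(size=(n_pairs, 2))

# Forced choice: pick the decision judged more likely to be correct.
picked = conf.argmax(axis=1)
picked_correct = correct[np.arange(n_pairs), picked]

# Among pairs where exactly one decision was correct, how often does the
# confidence choice select the correct one? (A proxy for confidence quality;
# larger conf_noise lowers it without changing perceptual sensitivity.)
informative = correct.sum(axis=1) == 1
print("confidence choice accuracy:", picked_correct[informative].mean())
```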

Acknowledgements: This project was supported by funding from “FrontCog” ANR-17-EURE-0017, INSERM (Inserm U960), the CNRS (CNRS UMR 8248) and ANR-18-CE28-0015 grant ‘VICONTE’.

Talk 2, 5:30 pm, 35.22

An analysis method for continuous psychophysics based on Bayesian inverse optimal control

Dominik Straub1, Constantin A. Rothkopf1; 1TU Darmstadt

Psychophysical methods are the gold standard in vision science because they provide precise quantitative measurements of the relationship between the physical world and mental processes. This is due to the combination of highly controlled experimental paradigms, such as forced-choice tasks, and rigorous mathematical analysis through signal detection theory. However, they require a large number of tedious trials involving binary responses, preferably from highly trained participants. A recently developed approach, named continuous psychophysics, abandons the rigid trial structure and replaces it with continuous behavioral adjustments to dynamic stimuli, for example tracking tasks (Bonnen et al., 2015). Because these continuous tasks are more intuitive and require much less time, they promise experiments with untrained participants and more efficient data collection. What has precluded wide adoption of continuous psychophysics is that current analysis methods based on ideal observers recover perceptual thresholds an order of magnitude larger than those from equivalent forced-choice experiments. This discrepancy can be explained by additional sources of variability in these tasks: continuous actions involve motor variability and internal behavioral costs, which classical psychophysics eliminates by experimental design. Here, we account for these factors by modeling a continuous target-tracking task using optimal control under uncertainty. To infer parameters from observed data, we invert the model using Bayesian inverse optimal control. We show via simulations and on previously published data that this allows estimating perceptual thresholds in closer agreement with classical psychophysics than previous analyses based on ideal observers. Additionally, our method estimates participants’ action variability, internal behavioral costs, and possibly mistaken assumptions about the stimulus dynamics. Taken together, we introduce a computational analysis framework for continuous psychophysics and provide further evidence for the importance of including uncertainty in sensing and acting, subjective beliefs, and the intrinsic costs of behavior, even in experiments seemingly investigating only perception.
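
As a rough illustration of the forward model behind this approach, the sketch below simulates tracking with a Kalman-filter observer whose hand movement is a partial correction (a crude stand-in for a control cost) corrupted by motor noise. All values are assumptions; the paper's contribution is the Bayesian inversion that fits such parameters to observed trajectories rather than simulating them.

```python
# Minimal forward model of a continuous tracking task (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
T = 2_000
q = 1.0         # target random-walk step SD
sigma = 5.0     # perceptual (observation) noise SD -- the quantity of interest
motor_sd = 0.5  # motor noise SD

target = np.cumsum(q * rng.normal(size=T))
obs = target + sigma * rng.normal(size=T)

# Steady-state Kalman gain for a random walk observed in noise.
P = q**2
for _ in range(100):
    P_pred = P + q**2
    K = P_pred / (P_pred + sigma**2)
    P = (1 - K) * P_pred

est = np.zeros(T)   # observer's belief about target position
hand = np.zeros(T)  # tracking response
for t in range(1, T):
    est[t] = est[t - 1] + K * (obs[t] - est[t - 1])   # belief update
    # Partial correction toward the estimate, plus motor noise.
    hand[t] = hand[t - 1] + 0.8 * (est[t] - hand[t - 1]) + motor_sd * rng.normal()

print("tracking-error SD:", np.std(hand - target))
# Inverse optimal control would instead infer (sigma, motor_sd, control cost)
# from recorded target and response trajectories.
```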

Acknowledgements: We thank Kathryn Bonnen and Lawrence Cormack for sharing their behavioral data. This research was supported by “The Adaptive Mind”, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art.

Talk 3, 5:45 pm, 35.23

Eye-movements during active sensing suffer from a confirmation bias

Ralf M Haefner1, Sabyasachi Shivkumar1, Ankani Chattoraj1, Yong Soo Ra2; 1Brain & Cognitive Sciences, Center for Visual Science, University of Rochester, 2Seoul National University

Human decision-making suffers from a range of biases, with the confirmation bias being one of the most ubiquitous (Nickerson 1998). Studying it in the context of perceptual decision-making using psychophysical experiments allows for robust insights based on thousands of trials, free of many confounds present in higher-level cognitive contexts. Recent work showed that one type of confirmation bias - a biased interpretation of new information - underlies the overweighting of evidence presented early in a trial (Lange et al. 2021). Here, we asked whether another type of confirmation bias - a biased seeking of new evidence - occurs in the context of an active sensing task involving saccades. Prior perceptual studies found no such biases (Najemnik & Geisler, 2005; Yang et al. 2016). We designed a new gaze-contingent task that required the observer to collect new sensory information by making saccades to peripheral targets. Our task design gives us precise control over the stimulus present in the fovea and periphery and allows us to compute the frequency with which saccades are made to peripheral targets that agree with the observer's belief. We found that 14/16 observers were more likely to saccade to peripheral locations that they expected would yield new information agreeing with their current belief (12/16 individually statistically significant). Interestingly, the ideal observer for our task also has a small confirmation bias. However, the empirical bias of most observers was substantially larger. We could quantitatively account for the data by assuming that observers employed an approximate Bayesian active sensing strategy. Model comparison revealed that human observers deviated from the ideal observer both in terms of model mismatch and in terms of approximate computations. Interestingly, the data imply that the 'sensory computations' required by the Bayesian observer are more precise than its 'cognitive computations', as suggested by prior work.
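
The headline bias measure reduces to comparing the rate of belief-consistent saccades against a baseline. A minimal sketch, with invented counts and an assumed ideal-observer baseline (neither is the study's data):

```python
# Test whether belief-consistent saccades exceed an assumed ideal baseline.
from scipy.stats import binomtest

n_saccades = 400      # hypothetical saccades to peripheral targets
n_consistent = 248    # hypothetical belief-consistent saccades
ideal_rate = 0.55     # assumed small bias of the ideal observer

result = binomtest(n_consistent, n_saccades, p=ideal_rate, alternative="greater")
print(f"empirical rate {n_consistent / n_saccades:.2f}, p = {result.pvalue:.4f}")
```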

Talk 4, 6:00 pm, 35.24

What are the neural correlates of perceptual awareness? Evidence from an fMRI no-report masking paradigm

Elaheh Hatamimajoumerd1,2, N. Apurva Ratan Murty2, Michael Pitts3, Michael Cohen1,2; 1Amherst College, 2Massachusetts Institute of Technology, 3Reed College

What are the neural correlates of perceptual awareness? To answer this question, numerous studies have examined the differences in neural activity elicited by visible and invisible stimuli. In virtually all of these studies, observers are asked to report the contents of their experience to objectively confirm that they perceived the critical stimulus (e.g., “I saw an object”). When a stimulus is not perceived, however, observers can only provide a random guess. Therefore, it is not clear if the neural responses evoked by a perceived stimulus are associated with conscious perception or with post-perceptual processes involved in reporting that stimulus (e.g., remembering the target, planning a response, etc.). To separate neural correlates of awareness from neural correlates of post-perceptual processing, we used a novel no-report visual masking paradigm while participants were scanned using fMRI. In the report condition, participants indicated whether or not they saw each stimulus. In the no-report condition, they did not make such reports. With univariate analyses, we replicated prior results in the report condition showing that visible stimuli elicit widespread activation across the ventral pathway and fronto-parietal regions. In the no-report condition, the amount of activation in fronto-parietal regions dropped significantly relative to the report condition. However, although this fronto-parietal activation was severely attenuated in the no-report condition, there was still significantly more activation for visible than invisible stimuli in these regions. Similarly, using multivariate analyses, we found a significant drop in decoding of target visibility in the no-report compared to the report condition, but decoding accuracy still remained significantly above-chance in fronto-parietal regions. Together, these results highlight the importance of distinguishing perceptual awareness from post-perceptual processes and suggest that a smaller, more circumscribed subset of the fronto-parietal network may play a crucial role in conscious awareness even after minimizing the influence of these post-perceptual processes.
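
A toy version of the multivariate analysis is sketched below: cross-validated decoding of stimulus visibility from simulated voxel patterns, with a weaker signal standing in for the no-report condition. Data and effect sizes are invented for illustration, not the study's results.

```python
# Cross-validated decoding of visibility from synthetic ROI patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 200, 50
labels = rng.integers(0, 2, n_trials)   # 1 = visible, 0 = invisible

def simulate_roi(signal):
    """Voxel patterns carrying a visibility signal of the given strength."""
    effect = signal * np.outer(labels, rng.normal(size=n_voxels))
    return effect + rng.normal(size=(n_trials, n_voxels))

for cond, signal in [("report", 0.5), ("no-report", 0.2)]:
    X = simulate_roi(signal)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
    print(f"{cond}: mean decoding accuracy = {acc.mean():.2f}")
```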

Talk 5, 6:15 pm, 35.25

Prospective and Retrospective Cues for Sensorimotor Confidence in a Reaching Task

Marissa H. Evans1, Shannon M. Locke2, Michael S. Landy1,3; 1Department of Psychology, New York University, 2Laboratoire des systèmes perceptifs, CNRS & École normale supérieure, Paris, France (Fyssen Foundation, Alexander von Humboldt Foundation), 3Center for Neural Science, New York University

On a daily basis, humans interface with the outside world using judgments of sensorimotor confidence, constantly evaluating their actions for success. We ask: what sensory and motor-execution cues are used in making these judgments, and when are they available? Prospective cues, available prior to the action (e.g., knowledge of motor noise and past performance), and retrospective cues, specific to the action itself (e.g., proprioceptive measurements and their uncertainty), provide two timepoints at which to assess sensorimotor confidence. We investigated the inputs available at these two timepoints in a task in which participants reached toward a visual target with an unseen hand and then made a continuous judgment of confidence about the success of their reach. The confidence report was made by setting the size of a circle centered on the reach-target location, where a larger circle reflects lower confidence. Points were awarded if the confidence circle enclosed the true endpoint, with fewer points returned for larger circles. This incentivized attentive reporting and accurate reaches to maximize the score. We compared three Bayesian-inference models of sensorimotor confidence based on either prospective cues, retrospective cues, or both sources of information to maximize expected gain (i.e., an ideal observer). Each participant’s motor and proprioceptive noise were fit based on a motor-awareness task: participants reached repeatedly to a fixed target and reported the perceived endpoint. Our findings showed two distinct strategies: participants either performed as ideal observers, using both prospective and retrospective cues to make the confidence judgment, or relied solely on prospective information, ignoring retrospective cues. Thus, participants make use of retrospective cues in a motor-awareness task, but these cues are not always included in the computation of sensorimotor confidence.
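
The ideal-observer model can be sketched as standard Gaussian cue combination followed by choosing the circle radius that maximizes expected gain. The noise values and points schedule below are illustrative assumptions, not the experiment's.

```python
# Ideal-observer confidence circle: combine cues, maximize expected gain.
import numpy as np

sigma_prosp = 1.5   # predicted endpoint scatter from motor noise (prospective)
sigma_retro = 1.0   # proprioceptive measurement noise (retrospective)

# Cue combination: inverse variances add, shrinking the posterior SD.
sigma_post = (sigma_prosp**-2 + sigma_retro**-2) ** -0.5

radii = np.linspace(0.1, 6.0, 200)
# P(2D Gaussian endpoint falls within radius r): Rayleigh CDF.
p_hit = 1.0 - np.exp(-radii**2 / (2.0 * sigma_post**2))
points = np.maximum(0.0, 10.0 - radii)  # assumed: fewer points for larger circles
expected_gain = p_hit * points

best = radii[expected_gain.argmax()]
print(f"posterior SD = {sigma_post:.2f}, optimal circle radius = {best:.2f}")
```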

Acknowledgements: Funding: NIH EY08266

Talk 6, 6:30 pm, 35.26

Perceptual modulation over the gait-cycle: vision-in-action in virtual reality

Matt Davidson1, Robert Keys1, Frans Verstraten1, David Alais1; 1University of Sydney

Nearly everything we know about visual perception comes from tightly controlled environmental settings in the tradition of stationary, seated laboratory experiments. Arguably, however, this traditional approach can never provide a complete account of how vision operates in ecologically valid conditions, such as during dynamic activity in immersive environments. Advances in virtual-reality (VR) technology now enable the tightly controlled presentation of immersive environments and allow traditional psychophysical measures to be complemented with records of movement kinematics. Here we present data from two experiments showing how the accuracy and sensitivity of visual perception vary as a function of the gait cycle. Participants engaged in steady-state walking while tracking a floating target, which advanced into the foreground at a constant, comfortable walking speed. We capitalised on continuous psychophysics and position tracking to record a frame-by-frame tracking response at the presentation rate of the target stimulus. In a first experiment, participants minimised the distance between their dominant hand and the floating target, and the error between the time series of target position and tracking response was quantified over the gait cycle. We observed a sinusoidal rhythm in tracking error, which peaked at the ascending phase of the gait cycle before rapidly returning to baseline. In the second experiment, participants monitored the target for brief increases in contrast. We observed clear differences in visual sensitivity and detection accuracy over the gait cycle, with preferential phases for target detection within participants. These results illustrate how the most common of everyday actions influences perception, and they evince the utility of VR technology for broadening our understanding of visual information processing.
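
A minimal sketch of this style of analysis: regress frame-by-frame tracking error on gait phase with a least-squares sinusoidal fit. The simulated rhythm's amplitude and phase are arbitrary stand-ins for real data.

```python
# Fit a sinusoid to tracking error as a function of gait phase.
import numpy as np

rng = np.random.default_rng(3)
phase = rng.uniform(0, 2 * np.pi, 20_000)   # gait phase at each frame
error = 1.0 + 0.3 * np.sin(phase + 0.8) + rng.normal(scale=0.5, size=phase.size)

# Least-squares fit: error ~ a*sin(phase) + b*cos(phase) + c
X = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(phase)])
a, b, c = np.linalg.lstsq(X, error, rcond=None)[0]
amp, peak_phase = np.hypot(a, b), np.arctan2(b, a)
print(f"modulation amplitude = {amp:.2f}, peak phase = {peak_phase:.2f} rad")
```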

Talk 7, 6:45 pm, 35.27

The relationship between gaze and foot placement is shaped by the visual discriminability and availability of footholds in an overground Augmented Reality stepping stone task

Trenton Wirth1, Jonathan Matthis1; 1Northeastern University

Successful human locomotion over complex terrain requires precise coupling between gaze and foot placement. Matthis, Yates, & Hayhoe (2018) found that walkers adopted distinct gaze strategies tuned to the demands imparted by locomotion over different terrains (e.g., flat ground vs. a rocky creek bed). However, because these were quasi-observational experiments in a natural environment, it is difficult to discern the specific aspects of the terrain that were driving these different strategies. Here, we used an Augmented Reality (UniCAVE) projected ground plane (~2 m × 10 m) to parametrically control the visual discriminability and availability of footholds in an overground stepping stone task. Using a binocular eye tracker (Pupil Labs) and a spatiotemporally synchronized, marker-based, full-body motion capture system (Qualisys), we estimated participants’ 3D gaze and calculated gaze-ground intersections as they traversed the 10 m path, which comprised pseudo-randomly distributed footholds (represented by Landolt C’s; diameter 110 mm, line thickness 32 mm) and distractors (represented by visually similar O’s). We manipulated the visual discriminability and availability of footholds to discern the aspects of terrain that drive the emergence of different gaze-foothold strategies. Participants walked over six terrains, consisting of two levels of visual discriminability (controlled by manipulating the size of the C gap; 6 mm vs. 36 mm) crossed with three levels of foothold/distractor ratio (3:1, 1:1, and 1:3). With 21 trials for each of the 7 conditions (six terrains plus a free-walking control condition), we recorded 147 trials (approximately 1960 steps) per participant. We mapped the gaze-foothold relationships from the six terrain conditions to those observed in natural environments (Matthis, Yates, & Hayhoe, 2018) to specify how the visual discriminability and availability of footholds result in different gaze strategies. We further explored whether saccades are driven by the biomechanically specified and energetically optimal footholds vs. the likelihood of finding potential footholds based on peripheral vision.
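
Once 3D eye position and gaze direction are known, the gaze-ground intersection is a ray-plane intersection. A minimal sketch with illustrative vectors (the study derives these from the eye tracker and motion capture):

```python
# Intersect the 3D gaze ray with the ground plane (z = 0).
import numpy as np

def gaze_ground_intersection(eye_pos, gaze_dir, ground_z=0.0):
    """Point where the gaze ray meets the ground plane, or None if it never does."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    if gaze_dir[2] >= 0:          # gaze level with or away from the ground
        return None
    t = (ground_z - eye_pos[2]) / gaze_dir[2]
    return eye_pos + t * gaze_dir

eye = np.array([0.0, 0.0, 1.6])    # eye ~1.6 m above the ground
gaze = np.array([0.0, 2.0, -1.0])  # looking ahead and downward
print(gaze_ground_intersection(eye, gaze))   # lands ~3.2 m ahead
```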

Acknowledgements: NIH NEI R00-EY028229

Talk 8, 7:00 pm, 35.28

Step by step - Walking shapes visual space

Michael Wiesing1, Eckart Zimmermann1; 1Institute for Experimental Psychology, Heinrich Heine University Düsseldorf, Universitätsstr. 1, 40225 Düsseldorf, Germany

Visual depth perception is mostly understood as a purely visual problem, including oculomotor processes. Yet any neural spatial map must be provided with information about how internal space scales external distances. Here, we show that the distance people walk to reach a target calibrates the visual perception of that distance. We used virtual reality to track physical walking distances while simultaneously manipulating visual optic flow in a realistic, ecologically valid virtual environment. Participants walked toward a briefly flashed target located 2.50 m in front of them. Unbeknownst to the participants, we manipulated the optic flow during walking. As a result, participants overshot the target location in trials in which optic flow was reduced and undershot it when optic flow was increased. After each walking trial, participants visually localized an object presented in front of them. We found a serial dependence between optic flow speed and subsequent distance judgements. This serial dependence could be driven either purely visually, i.e., by the optic flow, or by the travel distance. To disentangle these factors, we conducted two follow-up experiments. In the first, instead of walking, participants controlled their movement via a thumbstick. Again, travel distances, but not visually perceived distances, were modulated by the manipulated optic flow. Finally, we isolated physical walking by eliminating optic flow during walking. We did not observe any serial dependence in travel distances or subsequent distance judgements. In conclusion, our data reveal that visual depth perception is embodied and calibrated every time we walk toward a target. Linking depth perception directly to walking travel distance provides a computationally efficient means of calibration. Since the sensorimotor system constantly monitors movement performance, these signals can be reused at the sole extra neural cost of feeding them back to visual areas.
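
The optic-flow manipulation amounts to scaling the virtual camera's displacement relative to the tracked physical displacement, so flow is reduced (gain < 1) or increased (gain > 1) while the walker's real movement is unchanged. A sketch of the mapping, with assumed gain values (the abstract does not give the actual gains):

```python
# Map tracked physical positions to virtual camera positions via a flow gain.
import numpy as np

def virtual_position(physical_positions, gain):
    """Accumulate gain-scaled physical steps into virtual camera positions."""
    steps = np.diff(physical_positions, prepend=physical_positions[0])
    return np.cumsum(gain * steps)

walk = np.linspace(0.0, 2.5, 250)   # a 2.5 m walk toward the target
for gain in (0.7, 1.0, 1.3):
    v = virtual_position(walk, gain)[-1]
    print(f"gain {gain}: virtual distance covered = {v:.2f} m")
# With gain < 1 the visual scene under-signals progress, predicting overshoot.
```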

Acknowledgements: Supported by European Research Council (project moreSense grant agreement n. 757184).