Decision Making

Talk Session: Tuesday, May 21, 2024, 5:15 – 7:15 pm, Talk Room 2
Moderator: Constantin Rothkopf, TU Darmstadt

Talk 1, 5:15 pm, 55.21

Investigating the role of long-term perceptual priors in confidence

Marika Constant1, Elisa Filevich2, Pascal Mamassian3; 1Humboldt-Universität zu Berlin, 2University of Tübingen, 3École Normale Supérieure, PSL University, CNRS, Paris, France

According to Bayesian models, both our perceptual decisions and our confidence about those decisions are based on the integration of incoming sensory information with our prior expectations. These models therefore assume that priors influence decisions and confidence in the same way, and to the same extent. Although asymmetries challenging this assumption have been found in the influence priors have on decisions versus confidence, those results were obtained with high-level cognitive priors induced by the task context. It remains unclear whether this generalises to long-term perceptual priors. Here, we investigated the influence of a low-level prior, namely the slow-motion prior, on confidence. Stimuli were parallel line segments in motion, for which the slow-motion prior biases the perceived direction to be perpendicular to the line orientations. Observers had to decide whether the motion direction was clockwise or counterclockwise relative to a reference and, after two such decisions, judge which decision was more likely to be correct. We contrasted two conditions: one in which the percept was dominated by the prior, and another in which incoming sensory information was dominant. We then assessed which of these conditions participants were more likely to judge as their more confident decision. We found a confidence bias favouring the prior-dominant condition, even when accounting for differences in perceptual decision performance. This suggests that priors impact confidence more strongly than they do perceptual decisions, even in the case of low-level perceptual priors. Further computational modelling indicates that this effect may be best explained by confidence using the degree of prior-congruent information as an additional cue, above and beyond the posterior evidence used in perceptual decisions.
We propose that participants have a metacognitive bias to incorporate confirmatory evidence in favour of their own prior expectations, even when these priors are low-level and participants are unaware of them.
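As a toy numerical sketch of the proposed mechanism (not the authors' model; the weighting parameter and evidence values are hypothetical), confidence can be treated as a readout of the posterior plus an additional cue counting prior-congruent evidence:

```python
import math

def posterior_prob(likelihood_ratio, prior_odds):
    """Bayes: posterior odds = likelihood ratio * prior odds; return P(correct)."""
    post_odds = likelihood_ratio * prior_odds
    return post_odds / (1.0 + post_odds)

def confidence(likelihood_ratio, prior_odds, w_prior=0.3):
    """Hypothetical confidence readout: posterior probability plus a bonus
    proportional to how strongly the sensory evidence agrees with the prior."""
    p = posterior_prob(likelihood_ratio, prior_odds)
    prior_congruence = math.log(likelihood_ratio) * math.log(prior_odds)
    return p + w_prior * max(0.0, prior_congruence)

# Prior-dominant condition: weak sensory evidence, strong prior
c_prior = confidence(likelihood_ratio=1.2, prior_odds=3.0)
# Sensory-dominant condition: strong evidence, flat prior
c_sensory = confidence(likelihood_ratio=3.0, prior_odds=1.0)
```

With these hypothetical numbers, the prior-dominant condition yields higher confidence than the sensory-dominant one even though the two posterior probabilities are similar, illustrating the asymmetry described above.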

Acknowledgements: This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 337619223 / RTG2386, a Freigeist fellowship from the Volkswagen Foundation, number 9D035-1, and an EC grant HORIZON-MSCA-2022-DN-01 “CODE”.

Talk 2, 5:30 pm, 55.22

Bayesian inference by visuomotor neurons in prefrontal cortex

Robbe Goris1, Thomas Langlois1, Julie Charlton1; 1UT Austin

Perceptual interpretations of the environment emerge from the concerted activity of neural populations in decision-making areas downstream of sensory cortex. When the sensory input is ambiguous, perceptual interpretations can be biased by prior beliefs that reflect knowledge of environmental regularities. These effects are examples of Bayesian reasoning, an inference method in which prior knowledge is leveraged to optimize decisions. However, it is not known how decision-making circuits combine sensory signals and prior beliefs to form a perceptual decision. To address this, we studied neural population activity in the prefrontal cortex of two macaque monkeys trained to report perceptual judgments of ambiguous visual stimuli under different prior statistics. Monkeys judged whether a visual stimulus was oriented clockwise or counterclockwise from vertical and communicated their decision with a saccadic eye movement towards one of two visual targets. The meaning of each response option was signaled by the target's orientation (clockwise vs counterclockwise) and was unrelated to its spatial position. Because the spatial configuration of the choice targets varied randomly from trial to trial, changes in prior stimulus statistics biased the animals' perceptual reports, but not the overt motor responses. We analyzed the component of the neural population response that specifically represents the formation of the perceptual decision (the decision variable, DV), and found that its dynamical evolution reflects the integration of sensory signals and prior beliefs. The DV’s initial value before stimulus onset reflects the prior belief in the future state of the sensory environment, while the dynamic range of the DV's ensuing excursion reflects the relative influence of the incoming sensory signals. 
These findings reveal how prefrontal circuits integrate prior stimulus expectations and incoming sensory signals at the behaviorally relevant timescale of the single trial, thus exposing a general mechanism by which prefrontal circuits can execute Bayesian inference.
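The reported DV dynamics can be caricatured in a toy accumulator (an illustrative sketch only, not the authors' analysis; all parameter values are hypothetical): the prior sets the DV's starting point before stimulus onset, while the weight on sensory input scales the size of its ensuing excursion:

```python
import random

def simulate_dv(prior_bias, sensory_gain, drift, n_steps=100, noise=0.1, seed=0):
    """Toy decision-variable trajectory: the DV starts at a value set by the
    prior belief, then integrates noisy sensory evidence whose influence is
    scaled by sensory_gain."""
    rng = random.Random(seed)
    dv = prior_bias                  # pre-stimulus offset reflecting the prior
    trace = [dv]
    for _ in range(n_steps):
        dv += sensory_gain * (drift + rng.gauss(0.0, noise))
        trace.append(dv)
    return trace

# Hypothetical trial: prior favouring "clockwise", weak clockwise evidence
trace = simulate_dv(prior_bias=0.5, sensory_gain=1.0, drift=0.02)
```

Here the first element of the trace encodes the prior belief and the remainder of the excursion the relative influence of incoming sensory signals, mirroring the two signatures described in the abstract.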

Talk 3, 5:45 pm, 55.23

Different stimulus manipulations produce dissociable confidence-accuracy relationships

Herrick Fung1, Dobromir Rahnev1; 1Georgia Institute of Technology

A central goal in visual metacognition is to uncover the underlying computations that give rise to our sense of subjective confidence. Achieving this goal necessitates an understanding of how confidence changes in response to various manipulations. However, existing studies have predominantly relied on a single stimulus manipulation, under the tacit assumption that different manipulations have equivalent effects on confidence. Here, we test this assumption by including four distinct stimulus manipulations within a single experiment. Subjects judged the orientation (clockwise vs. counterclockwise from 45°) of Gabor patches. The stimuli varied in (1) size (2.5, 5, and 7.5° of visual angle), (2) duration (33, 100, and 500 ms), (3) noise contrast (0.1, 0.75, and 0.9), and (4) orientation (T/2, T, and 2T, where T is the individualized threshold obtained by a staircase procedure). We found that the four manipulations produced vastly different effects on accuracy and confidence. Specifically, the size and noise-contrast manipulations had a small effect on accuracy but a substantial effect on confidence. Conversely, the orientation manipulation greatly affected accuracy but had only a modest influence on confidence. The orientation manipulation stood out in yet another respect: it was the only manipulation for which confidence on incorrect trials was higher for the more difficult than for the easier conditions; the remaining three manipulations exhibited the opposite pattern. We speculate that these effects were driven by orientation being the only manipulation not immediately obvious to the observers. These results clearly demonstrate that different stimulus manipulations yield extensive differences in the confidence-accuracy relationship. Our findings challenge prominent models of confidence that assume a single, stereotypical relationship between confidence and accuracy.
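The individualized threshold T above is the output of an adaptive staircase. A minimal sketch of one common variant (a 3-down-1-up rule, which converges near ~79% correct; this is not necessarily the authors' exact procedure, and the deterministic observer and step size are hypothetical):

```python
def staircase(respond, start=10.0, step=1.0, n_trials=60):
    """Toy 3-down-1-up staircase: the tilt decreases after three consecutive
    correct responses and increases after each error; reversal points are
    collected to estimate the threshold."""
    tilt, correct_run, reversals, last_dir = start, 0, [], None
    for _ in range(n_trials):
        if respond(tilt):                 # correct response
            correct_run += 1
            if correct_run == 3:
                correct_run = 0
                if last_dir == 'up':
                    reversals.append(tilt)
                tilt, last_dir = max(tilt - step, step), 'down'
        else:                             # error
            correct_run = 0
            if last_dir == 'down':
                reversals.append(tilt)
            tilt, last_dir = tilt + step, 'up'
    return tilt, reversals

# Hypothetical deterministic observer: correct whenever the tilt exceeds 4 deg
final_tilt, revs = staircase(lambda t: t > 4.0)
```

In practice T would be taken as the mean of the last several reversal values rather than the final tilt.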

Acknowledgements: This work was supported by the National Institutes of Health (award: R01MH119189) and the Office of Naval Research (award: N00014-20-1-2622).

Talk 4, 6:00 pm, 55.24

Direct precision manipulations of mental representations in visual working memory drive serial dependence

Sabrina Hansmann-Roth1; 1University of Iceland

Our behavior is heavily influenced by previous information. Work in the field of serial dependence has investigated how the combination of past and present information affects perception and cognition. These studies revealed strong attractive biases towards previously seen stimuli, especially when stimuli are uncertain. Here, for the first time, we directly manipulated the uncertainty of mental representations, rather than the uncertainty in the stimulus, through an intermediate task that observers performed during the retention interval. Participants were presented with differently oriented Gabors and had to reproduce their orientation. While memorizing the orientation, they judged whether two stimuli were identical. This intermediate task varied in the type of stimuli observers were presented with: they compared the sizes of circles (which contain no orientation information), the lengths of differently oriented lines, or the orientations of lines; a control condition without an intermediate task was also included. These manipulations allowed for a detailed assessment of how memory load and inter-item competition affect the precision of the encoded Gabor and, subsequently, the strength of serial dependence. In line with the variable-precision model, the mere presence of an intermediate task decreased the precision of the memorized Gabor orientation, enhancing the attractive bias towards past information. Inter-item similarity between the memorized Gabor and the stimuli of the intermediate task further influenced serial dependence: if the intermediate task also required the memorization of orientation, serial dependence was even stronger than for stimuli that contained no orientation information. These results provide novel evidence for the role of working memory in serial dependence. As the precision of individual representations in memory degrades, a greater weight is placed on previous information to make the correct inferences. Moreover, inter-item similarity also leads to a decrease in precision and, as a result, to an increase in serial dependence.
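The precision-weighting logic above can be sketched as a toy reliability-weighted average (an illustration only, not the authors' model; all orientations and noise levels are hypothetical): as the current memory representation loses precision, the report is pulled more strongly toward the previous stimulus.

```python
def serial_bias(previous, current, sigma_prev, sigma_curr):
    """Toy precision-weighted average of previous and current orientations:
    each source is weighted by its inverse variance, so a noisier current
    memory yields a stronger attractive pull toward the previous stimulus."""
    w_prev = (1 / sigma_prev**2) / (1 / sigma_prev**2 + 1 / sigma_curr**2)
    return previous * w_prev + current * (1 - w_prev)

# With no intermediate task the current memory is precise...
low_load = serial_bias(previous=30.0, current=40.0, sigma_prev=8.0, sigma_curr=2.0)
# ...while an intermediate task degrades it, strengthening the pull to 30 deg
high_load = serial_bias(previous=30.0, current=40.0, sigma_prev=8.0, sigma_curr=6.0)
```

Under these hypothetical numbers the high-load report lands farther from the true 40° orientation than the low-load report, i.e. serial dependence grows as memory precision degrades.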

Talk 5, 6:15 pm, 55.25

Enhanced Metacognition in Individuals with Autism Spectrum Disorder When Integrating Sensory Evidence and Rewards, but Not Prior Knowledge

Laurina Fazioli1, Bat-Sheva Hadad1, Rachel Denison2, Amit Yashar1; 1University of Haifa, 2Boston University

Background: Autism Spectrum Disorder (ASD) is a group of neurodevelopmental disorders with complex and diverse impacts on cognition and behavior. Sensory symptoms are increasingly recognized as a core phenotype of ASD, yet the interrelation of these symptoms with cognitive processes remains poorly understood. At the intersection of perception and cognition lies perceptual confidence: the ability to evaluate the accuracy of one's own sensory experiences. However, few studies have explored perceptual confidence in ASD. Objective: This study investigates differences in perceptual metacognitive abilities between individuals with ASD and neurotypical (NT) controls. Using a Bayesian framework, we quantitatively assess how individuals with ASD integrate prior knowledge, sensory evidence, and reward in tasks requiring judgments of perceptual confidence. Method: Two groups of participants, ASD (n = 52) and NT (n = 93), performed an orientation categorization task designed to evaluate each Bayesian component independently. We manipulated priors, sensory evidence, and reward by varying base rate, stimulus contrast, and a point system, respectively. Participants simultaneously reported the category of the Gabor stimulus's orientation distribution and their perceptual decision confidence (four-level rating) by pressing one of eight keys. Results: Individuals with ASD showed enhanced metacognitive accuracy in the experiments manipulating sensory evidence and reward, but not in the prior experiment. Furthermore, the type 2 decision criterion (i.e., the probability of giving a high-confidence rating) was influenced by the manipulation of prior knowledge to the same extent in both groups. Conclusions: Our study uncovers an important difference: enhanced metacognitive judgment abilities in individuals with ASD, specifically when integrating sensory evidence and rewards, but not in the context of prior knowledge.
This reveals a key difference in how individuals with ASD reflect on and interpret their own perceptual processes.

Talk 6, 6:30 pm, 55.26

Metacognitive Monitoring of the Visual System in Sustained Attention

Cheongil Kim1, Sang Chul Chong1; 1Yonsei University

The state of the human visual system undergoes moment-to-moment fluctuations due to various neurocognitive factors, such as mind wandering and vigilance. To deal with this instability in the visual system through timely intervention (e.g., controlling attention and taking a rest), monitoring the state of the visual system might be crucial. In this study, we investigated whether and how people can monitor the state of their own visual system during sustained attention tasks. Participants were required to report the orientation (Experiment 1) or presence (Experiment 2) of a Gabor target every two seconds with a confidence judgment for their response. We presumed that if participants could monitor the state of the visual system, confidence judgments would accurately track task performance fluctuations. In Experiment 1, we observed a positive correlation between orientation discrimination performance and confidence, supporting accurate metacognitive monitoring. Experiment 2 aimed to elucidate the mechanism of metacognitive monitoring: direct monitoring of the state of the visual system (e.g., current states of attention and vigilance) versus indirect monitoring based on the visibility of a target. To address this, we employed a target detection task. Specifically, in detection, confidence judgments can be informed by target visibility in judgments about target presence (e.g., high confidence for high visibility) but not target absence. Therefore, if participants monitor only target visibility, their confidence judgments would correlate with performance fluctuations for target-present responses but not for target-absent responses. We observed a positive correlation between detection performance and confidence for target-present responses, but no correlation for target-absent responses. These results suggest that, in sustained attention, metacognitive monitoring of the visual system relies on the visibility of a target, rather than the state of the visual system itself.

Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2022R1A2C3004133).

Talk 7, 6:45 pm, 55.27

People take Newtonian physics into account in sensorimotor decisions under risk

Fabian Tatai1,2, Dominik Straub1,2, Constantin A. Rothkopf1,2; 1Institute of Psychology, Technical University Darmstadt, 2Centre for Cognitive Science, Technical University Darmstadt

People skillfully manipulate objects on a daily basis, despite uncertainties in both their perceptual inferences and their action outcomes. Because actions lead to consequences, every movement subject to uncertainty becomes a decision under risk. Such sensorimotor decisions have been shown to follow the predictions of expected utility theory, in contrast to economic decisions, which systematically fail to maximize expected gains. However, as object manipulations are inescapably governed by the laws of physics, the question arises how people act under such circumstances. Here, participants slid pucks to targets for gains and losses within a virtual environment: using motion capture, subjects interacted with an actual standard hockey puck while viewing its trajectory through a head-mounted display, giving them an immersive, naturalistic experience of our puck-sliding game. In this task, the variability inherent in sensorimotor control interacts with the physical relationships governing the puck's kinematics under friction, embedded in an economic decision. The task therefore features a unique interaction between three cognitive faculties: (1) economic decision-making, (2) sensorimotor control, and (3) intuitive physics. We construct an ideal actor model based on statistical decision theory, including the kinematics of sliding, and show that subjects' behavior is consistent with its predictions. Taken together, this demonstrates that subjects take their sensorimotor uncertainty, and its interaction with the physical relationships and economic demands of the task, into account in guiding their actions.
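A minimal sketch of how an ideal actor of this kind can combine sliding kinematics with motor noise and payoffs (an illustration only, not the authors' model; the friction coefficient, noise level, and payoff layout are hypothetical):

```python
import math
import random

MU, G = 0.1, 9.81   # hypothetical friction coefficient; gravitational acceleration

def slide_distance(v):
    """Newtonian kinematics under kinetic friction: a puck launched at speed v
    decelerates at mu*g and slides v^2 / (2*mu*g) metres."""
    return v ** 2 / (2 * MU * G)

def expected_gain(v_aim, motor_sd=0.1, target=(4.0, 5.0), loss_beyond=5.0,
                  n=2000, seed=0):
    """Monte Carlo expected gain under Gaussian motor noise on launch speed:
    +1 for stopping inside the target band, -1 for overshooting past it."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        d = slide_distance(max(rng.gauss(v_aim, motor_sd), 0.0))
        if target[0] <= d <= target[1]:
            total += 1.0
        elif d > loss_beyond:
            total -= 1.0
    return total / n

# Speed that lands the puck at the target centre, and the best aim on a grid
v_center = math.sqrt(2 * MU * G * 4.5)
best_v = max((v_center + dv / 100 for dv in range(-20, 21)),
             key=lambda v: expected_gain(v))
```

Because overshoot is penalized while undershoot merely scores zero, the gain-maximizing aim point shifts away from the target centre by an amount that depends on the actor's own motor variability, which is the signature of risk-sensitive planning the task probes.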

Acknowledgements: This research was supported by 'The Adaptive Mind', funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art. We additionally acknowledge support by the European Research Council (ERC; Consolidator Award 'ACTOR'-project number ERC-CoG-101045783).

Talk 8, 7:00 pm, 55.28

Unraveling the Intricacies of Human Visuospatial Problem-Solving

Markus D. Solbach1, John K. Tsotsos1; 1York University

Computational learning of visual systems has seen remarkable success, especially during the last decade. A large part of this success can be attributed to the availability of large data sets tailored to specific domains. Most training is performed over unordered, assumed-independent data samples, and more data correlates with better performance. This work instead takes what we observe from humans as its sample: in hundreds of trials with human subjects, we found that samples are not independent, and that the ordered sequences we observe reflect internal visual functions. We investigate human visuospatial capabilities through a real-world experimental paradigm. Previous literature posits that comparison represents the most rudimentary form of psychophysical task. As an exploration of dynamic visual behaviours, we employ the same-different task in 3D: are two physical 3D objects visually identical? Human subjects are presented with the task while afforded freedom of movement to inspect two real objects within a physical 3D space. The experimental protocol is structured to ensure that all eye and head movements are oriented toward the visual task. We show that no training was needed to achieve good accuracy, and we demonstrate that efficiency improves with practice on various levels, in contrast with modern computational learning. Extensive use is made of eye and head movements to acquire visual information from appropriate viewpoints in a purposive manner. Furthermore, we show that fixations and corresponding head movements are well orchestrated, encompassing visual functions that are composed dynamically and tailored to task instances. We present a set of triggers that we observed to activate those functions. Furthering the understanding of this intricate interplay plays an essential role in developing human-like computational learning systems. The "why" behind all these functionalities, unravelling their purpose, poses an exciting challenge. While human vision may appear effortless, the intricacy of visuospatial functions is staggering.

Acknowledgements: This research was supported by grants to the senior author (John K. Tsotsos) from the following sources: Air Force Office of Scientific Research USA, The Canada Research Chairs Program, and the NSERC Canadian Robotics Network.