Poster Sessions

Wednesday Morning Posters, Pavilion

Poster Session: Wednesday, May 22, 2024, 8:30 am – 12:30 pm, Pavilion

Abstract# | Poster Title | First Author | Session
63.448 | Emotional Consequences of Expending Perceptual Effort | Wiedenmann, Emma | Attention: Reward, motivation, emotion
63.416 | Bayesian adaptive estimation of high-dimensional psychometric functions: A particle filtering approach | Reining, Lars | Face and Body Perception: Models
63.427 | An online replication of the association between face perception abilities and the amount of visual information required to identify a face | Côté, Laurianne | Face and Body Perception: Disorders, individual differences
63.401 | How to estimate noise ceilings for computational models of visual cortex | Chen, Zirui | Object Recognition: Models
63.456 | Deconstructing the task-evoked pupillary response | O'Bryan, Sean R. | Attention: Exogenous, endogenous, gaze
63.435 | "I felt successful!" Assessing autistic adolescent game usability from randomized control trial to improve sensitivity to eye-gaze cues | Muhammad, Sumaiya | Face and Body Perception: Social cognition
63.402 | 3D shape recognition in humans and deep neural networks | Fu, Shuhao | Object Recognition: Models
63.457 | Endogenous attention samples rhythmically under spatial uncertainty | Liu, Xiaoyi | Attention: Exogenous, endogenous, gaze
63.449 | The influence of feedback and risk on learning to link stimulus features to reward | Taha, Hana | Attention: Reward, motivation, emotion
63.428 | Exploring Spatial Frequency and Orientation Tunings for Face Recognition in Eight Cultural Groups | Gingras, Francis | Face and Body Perception: Disorders, individual differences
63.417 | Efficient Inverse Graphics with Differentiable Generative Models Explains Trial-level Face Discriminations and Robustness of Face Perception to Unusual Viewing Angles | Yilmaz, Hakan | Face and Body Perception: Models
63.436 | Examining effectiveness of a randomized controlled trial to enhance understanding of eye gaze cues in autism: Incorporating an active control game in SAGA | Mattern, Hunter | Face and Body Perception: Social cognition
63.450 | The Influence of Adult Relationship Attachment Style on the Networks of Attention | Redden, Ralph | Attention: Reward, motivation, emotion
63.429 | Individual differences in fusing the face identification decisions of humans and machines | Phillips, P. Jonathon | Face and Body Perception: Disorders, individual differences
63.403 | Characteristics of the emergence of category selectivity in convolutional neural networks | Verosky, Niels J. | Object Recognition: Models
63.437 | A crowd amplification effect in the perception of social status | Myat, Phyu Sin | Face and Body Perception: Social cognition
63.458 | Gaze patterns modeled with a LLM can be used to classify autistic vs. non-autistic viewers | Haskins, Amanda J | Attention: Exogenous, endogenous, gaze
63.418 | Evidence for efficient inverse graphics in the human brain using large-scale ECoG data | Calbick, Daniel | Face and Body Perception: Models
63.451 | Can foreknowledge of distractor type reduce the emotion-induced blindness effect? | Chan, Ho Ming | Attention: Reward, motivation, emotion
63.459 | The distinct role of human PIT in attention control | Huang, Siyuan | Attention: Exogenous, endogenous, gaze
63.430 | Masked-face recognition leads to learning of new perceptual abilities | Kim, Hyerim | Face and Body Perception: Disorders, individual differences
63.419 | FaReT 2.1: Anatomically precise manipulation of race in 3D face models and a pipeline to import real face scans | Martin, Emily | Face and Body Perception: Models
63.438 | Continued preference for reversed images of self | Huang, Jessica | Face and Body Perception: Social cognition
63.404 | Differential sensitivity of humans and deep networks to the amplitude and phase of shape features | Baker, Nicholas | Object Recognition: Models
63.460 | Idiosyncratic Search: Biases in the deployment of covert attention | Trinkl, Nathan | Attention: Exogenous, endogenous, gaze
63.420 | From Perception to Algorithm: Quantifying Facial Distinctiveness with a Deep Convolutional Neural Network | Boutet, Isabelle | Face and Body Perception: Models
63.452 | You see first what you like most: Visually prioritizing positive over negative semantic stimuli | He, Sihan | Attention: Reward, motivation, emotion
63.431 | Color robustly affects the intensity of facial distortions in two cases of prosopometamorphopsia | Mello, Antônio | Face and Body Perception: Disorders, individual differences
63.439 | Facial expressions of apology comprise complex social signals | Wu, Yichen | Face and Body Perception: Social cognition
63.405 | Quantifying the Quality of Shape and Texture Representations in Deep Neural Network Models | Doshi, Fenil R. | Object Recognition: Models
63.440 | Gaze allocation towards contextual information predicts performance in a dynamic emotion perception task | Ortega, Jefferson | Face and Body Perception: Social cognition
63.453 | Approach and Avoidance Visual Cues Are Processed Similarly In the Brain | Ni, Yuqian | Attention: Reward, motivation, emotion
63.461 | Measuring individual differences in multitasking ability | Oksama, Lauri | Attention: Exogenous, endogenous, gaze
63.432 | False-alarm rate and inter-trial priming predict hallucination proneness in the Signal Detection Pareidolia Test | Heller, Nathan H. | Face and Body Perception: Disorders, individual differences
63.421 | Modeling face-Identity “likeness” with a convolutional neural network trained for face identification | Parde, Connor J. | Face and Body Perception: Models
63.406 | Geometric properties of object manifolds in neural network models of visual cortex | Bonner, Michael | Object Recognition: Models
63.454 | Opposite polarities in alpha-band power in EEG were induced by reward and arousal: an initial discovery in the psychophysiological realm that distinctly dissociates reward from arousal | Nakashima, Yusuke | Attention: Reward, motivation, emotion
63.433 | Person Colors | Reeves, Adam | Face and Body Perception: Disorders, individual differences
63.407 | A biologically inspired framework for contrastive learning of visual representations: BioCLR | Han, Zhixian | Object Recognition: Models
63.422 | Norm-referenced Encoding Supports Transfer Learning of Expressions across Strongly Different Head Shapes | Giese, Martin A. | Face and Body Perception: Models
63.462 | Mind-wandering during encoding impairs recognition for both forgettable and memorable complex scenes | Shelat, Shivang | Attention: Exogenous, endogenous, gaze
63.441 | Inferential Trustworthiness Tracking Reveals Fast Context-Based Trustworthiness Perception | Fang, Yifan | Face and Body Perception: Social cognition
63.442 | Picture a Scientist: Classification Images of Scientists are seen as White, Male, and Socially Inept | Shakil, Maheen | Face and Body Perception: Social cognition
63.455 | Reduced Attentional Capture Following More Variable Rewards | Youn, Sojung | Attention: Reward, motivation, emotion
63.434 | The Eyes Still Have It: Eye Processing is a Distinct Deficit in Developmental Prosopagnosia | DeGutis, Joseph | Face and Body Perception: Disorders, individual differences
63.423 | Reading minds in the eyes with GPT4-vision | Murray, Scott | Face and Body Perception: Models
63.408 | Evaluating the Alignment of Machine and Human Explanations in Visual Object Recognition through a Novel Behavioral Approach | Kashef Alghetaa, Yousif | Object Recognition: Models
63.443 | Social Interactions cause Spatial Distortions in Visual Memory, not Perception | Vestner, Tim | Face and Body Perception: Social cognition
63.409 | Interpreting distributed population codes with feature-accentuated visual encoding models | Prince, Jacob S. | Object Recognition: Models
63.424 | Training deep learning algorithms for face recognition with large datasets improves performance but reduces similarity to human representations | Guy, Nitzan | Face and Body Perception: Models
63.425 | View-symmetric representations of faces in human and artificial neural networks | Andrews, Tim | Face and Body Perception: Models
63.444 | Unveiling Mental Self-Images from Face Perception and Memory | De, Arijit | Face and Body Perception: Social cognition
63.410 | Investigating power laws in neural network models of visual cortex | Townley, Keaton | Object Recognition: Models
63.426 | Visualizing the Other-Race Effect with GAN-based Image Reconstruction | Shoura, Moaz | Face and Body Perception: Models
63.411 | Sparse components distinguish visual pathways and their alignment to neural networks | Marvi, Ammar | Object Recognition: Models
63.445 | Visualizing Face Representations after Adaptation | Minemoto, Kazusa | Face and Body Perception: Social cognition
63.412 | Spatial filters in neural network models of visual cortex do not need to be learned | Passi, Ananya | Object Recognition: Models
63.446 | Who you lookin' at? Perception of gaze direction in group settings depends on naturalness of gaze behavior and clutter | Rosenholtz, Ruth | Face and Body Perception: Social cognition
63.447 | Don’t Look at the Camera: Achieving eye contact in video conferencing platforms | Jayakumar, Samyukta | Face and Body Perception: Social cognition
63.413 | Spatial Frequency Decoupling: Bio-inspired strategy for Network Robustness | Arslan, Suayb | Object Recognition: Models
63.414 | When Machines Outshine Humans in Object Recognition, Benchmarking Dilemma | Darvishi Bayazi, Mohammad Javad | Object Recognition: Models
63.415 | Visual and auditory object recognition in relation to spatial abilities | Smithson, Conor J. R. | Object Recognition: Models

Undergraduate Just-In-Time Poster Submissions

VSS 2024 is pleased to announce that the “Just-In-Time” poster sessions for undergraduate students working on independent research projects are now open for submissions. Posters will be presented in person at the annual meeting in one of two sessions: Saturday, May 18, or Monday, May 20.

VSS welcomes and encourages submissions from a diverse group of eligible students across the globe. To help accomplish this goal we are asking that you share this information with any programs within your institutions that sponsor or promote research for undergraduate students.

Eligibility

Submissions to these sessions are limited to students who:

  • Are currently enrolled in a 3-year or 4-year program leading to a bachelor’s degree, or
  • Have earned a bachelor’s degree in a 3-year program and are currently in their first year of study in a program leading to a master’s degree (students at European universities may fall into this category).

Those who already have an abstract accepted for VSS 2024 are not eligible.

Space is limited. Submissions will be accepted from March 1 through April 1, and presenters will be notified of acceptance by April 11.

You must be a current student member (for 2024) to submit an abstract.

A limited number of travel grants are available for undergraduate students who submit abstracts during the Just-in-Time submission period. Travel application information will be available upon submission of the student’s abstract.

For details and to submit an abstract, go to the Undergraduate Just-In-Time Poster Submission Guidelines.

Submission Policies

  • A student may submit only one abstract to the Just-In-Time session.
  • The student must be a current VSS member (for 2024).
  • The student must be registered to attend VSS.
  • Those who already have an abstract accepted for VSS 2024 are not eligible to submit to the Just-In-Time session.
  • Abstracts must describe work that has not been published or accepted for publication at the time of submission.
  • Poster presenter substitutions are not permitted.

Abstract Format

Abstracts are limited to 300 words. This does not include title, authors, and affiliations. Additional space is provided for funding acknowledgments and for declaration of commercial interests and conflicts.

Your abstract should consist of an introduction, methods and results sections, and a conclusion. The sections need not be explicitly labeled as such, but each abstract must contain sufficiently detailed descriptions of the methods and results. Please do not submit an abstract describing work that is only planned or that lacks sufficient results to reach a clear conclusion; such abstracts will not be accepted.

Per the VSS Disclosure of Conflict of Interest Policy, authors must reveal any commercial interests or other potential conflicts of interest that they have related to the work described. Any conflicts of interest must be declared on your poster or talk slides.

Please complete your submission carefully. All abstracts must be in final form. Abstracts are not proofread or corrected in any way prior to publication. Typos and other errors cannot be corrected after the deadline. You may edit your abstract as much as you like until the submission deadline.

Given the Just-In-Time deadline, the review and publication process differs in some respects from regular VSS submissions. Submissions will be reviewed by members of the VSS Board of Directors and their designates. Accepted abstracts will appear in the VSS 2024 program, but unlike submissions accepted following the December review, Just-In-Time abstracts will not appear in the Journal of Vision.

If you have any questions, please contact our office at .

Submission Schedule

Submissions Open: March 1, 2024
Submissions Close: April 1, 2024
Undergraduate Travel Award Application Deadline: April 5, 2024
Notification of Accepted Abstracts: April 11, 2024

How to Submit

Undergraduate Just-In-Time Poster Submissions are closed.