Poster Sessions

Sunday Afternoon Posters, Banyan Breezeway

Poster Session: Sunday, May 19, 2024, 2:45 – 6:45 pm, Banyan Breezeway

Abstract# | Poster Title | First Author | Session

36.301 | Finetuning primate visual representations with word recognition | Agrawal, Aakash | Object Recognition: Reading
36.302 | Task-based modulation of higher-order lexical statistics in the ventral and dorsal visual streams | Woolnough, Oscar | Object Recognition: Reading
36.303 | Quantification of reading circuits in the ventral occipitotemporal cortex | Lei, Yongning | Object Recognition: Reading
36.304 | Cortical activations for symmetry effect on visual word form perception in developmental dyslexia | Hung, Shao-Chin | Object Recognition: Reading
36.305 | Measure letter recognition performance: A subjective evaluation method | Yu, Deyue | Object Recognition: Reading
36.306 | Parallel processing of written words as a function of visual field position | Firoozan, Kimya | Object Recognition: Reading
36.307 | Redundant target effects reveal capacity limits for recognizing words as a function of visual field position | Hossain, Jannat | Object Recognition: Reading
36.308 | The processing of spatial frequencies through time in visual word recognition | Bertrand Pilon, Clémence | Object Recognition: Reading
36.309 | A psychophysical approach for investigating format readability online | Küçük, Kurtuluş Mert | Object Recognition: Reading
36.310 | Typeface Matters: Psychophysical Insights into Readability Across Different Reading Tasks | Atilgan, Nilsu | Object Recognition: Reading
36.311 | Light or Bold? Navigating Font Weights and Grades for Enhanced Readability | Rashid, Md Mamunur | Object Recognition: Reading
36.312 | The effects of variable fonts on sentence-level reading | Guidi, Silvia | Object Recognition: Reading
36.313 | Language-universal and script-specific factors in the recognition of letters in visual crowding: The effects of lexicality, hemifield, and transitional probabilities in a right-to-left script | Yashar, Amit | Object Recognition: Reading
36.314 | Contrasting learning dynamics: Immediate generalisation in humans and generalisation lag in deep neural networks | Huber, Lukas S. | Object Recognition: Acquisition of categories
36.315 | A neural network model of how category learning alters perceptual similarity | Rosedahl, Luke | Object Recognition: Acquisition of categories
36.316 | Decoding Contextual Effects in Vision: A Cross-Species Behavioral Approach | Zafer, Anaa | Object Recognition: Acquisition of categories
36.317 | Acceleration of visual object categorization in the first year of life | Spriet, Celine | Object Recognition: Acquisition of categories
36.318 | Comparison Training Improves Perceptual Learning of Skin Cancer Diagnoses | Jacoby, Victoria L. | Object Recognition: Acquisition of categories
36.319 | Shifting Perceptions: The Effects of Subordinate Level Training on Category Restructuring | Lawrance, Anna K. | Object Recognition: Acquisition of categories
36.320 | The influence of expertise and individual differences on psychological embeddings | Mah, Eric | Object Recognition: Acquisition of categories
36.321 | Unveiling the origin of the word-specific area with the object space model | Yang, Jia | Object Recognition: Acquisition of categories
36.322 | Cross-Species and Cross-Modality Studies of Food-Specific Brain Regions | GONG, Baoqi | Object Recognition: Acquisition of categories
36.323 | Can people determine object distance from its visual size and position in a correctly scaled 2D scene displayed on a large screen with aligned ground plane? | Kim, Jong-Jin | Scene Perception: Virtual environments, intuitive physics
36.324 | Enhancing wayfinding in simulated prosthetic vision through semantic segmentation and rastering | LeVier, Tori N. | Scene Perception: Virtual environments, intuitive physics
36.325 | FlyingObjects: Testing and aligning humans and machines in gamified object vision tasks | Peters, Benjamin | Scene Perception: Virtual environments, intuitive physics
36.326 | Predictive processing of upcoming scene views in immersive environments: evidence from continuous flash suppression | Mynick, Anna | Scene Perception: Virtual environments, intuitive physics
36.327 | Scene semantic and gaze effects on allocentric coding in naturalistic (virtual) environments | Baltaretu, Bianca | Scene Perception: Virtual environments, intuitive physics
36.328 | Find the Orange: How rich and accurate is the visual percept that guides action? | Zoroufi, Aryan | Scene Perception: Virtual environments, intuitive physics
36.329 | Visual Cues in Nonvisual Cooking: Assessing the Role of Tactile and AI-Assisted Technologies | Turkstra, Lily M. | Scene Perception: Virtual environments, intuitive physics
36.330 | No effect of reducing visual realism on motion sickness in virtual reality | Saunders, Jeffrey | Scene Perception: Virtual environments, intuitive physics
36.331 | From the flow of liquids to the flow of time: Granularity of spontaneous liquid flow predictions in visual perception impacts experienced time | Zhang, Yuting | Scene Perception: Virtual environments, intuitive physics
36.332 | Learning or doing? Visual recognition of epistemic vs. pragmatic intent | Croom, Sholei | Scene Perception: Virtual environments, intuitive physics
36.333 | Social and Perceptual Attributions Derived from Moving Shapes: A Language Model Analysis | Grossman, Emily D | Scene Perception: Virtual environments, intuitive physics
36.334 | Testing mental computations of center of mass using real-world stability judgments | Bucci-Mansilla, Giuliana | Scene Perception: Virtual environments, intuitive physics
36.335 | Velocity– not Perceived as such: The Role of Perceived Mass on Motion Estimation | Deeb, Abdul-Rahim | Scene Perception: Virtual environments, intuitive physics
36.336 | Perceiving animacy through schematic intuitive physics: Shared conceptual structure of animacy between vision and language | Tang, Ning | Scene Perception: Virtual environments, intuitive physics
36.337 | Effects of NORDIC denoising on population receptive field maps | Windischberger, Christian | Spatial Vision: Models
36.338 | Comparing pRF Mapping Estimates for Words and Checker Patterns | Linhardt, David | Spatial Vision: Models
36.339 | The influence of attentional load on population receptive field properties | Sheikh Abdirashid, Sumiya | Spatial Vision: Models
36.340 | A novel approach for population-receptive field mapping using high-performance computing | Mittal, Siddharth | Spatial Vision: Models
36.341 | Population receptive field models capture event-related MEG responses | Eickhoff, Kathi | Spatial Vision: Models
36.342 | Spatial Frequency Tuning in Early Visual Cortex is Not Scale Invariant | Klimova, Michaela | Spatial Vision: Models
36.343 | Hierarchical Gaussian Process Model for Human Retinotopic Mapping | Waz, Sebastian | Spatial Vision: Models
36.344 | Diffeomorphic Registration Enhances Retinotopic Mapping in 3T | Jalili Mallak, Negar | Spatial Vision: Models
36.345 | Behavioral and neural signatures of efficient sensory encoding in the tilt illusion | Zhang, Ling-Qi | Spatial Vision: Models
36.346 | A modular image-computable psychophysical spatial vision model | Reichert, Jannik | Spatial Vision: Models
36.347 | Analytic model of response statistics in noisy neural populations with divisive normalization | Herrera-Esposito, Daniel | Spatial Vision: Models
36.348 | A highly replicable model of achromatic contrast sensitivity based on individual differences in optics and spatial channels: robust consistency in factor structure across >6 very different datasets | Peterzell, David Henry | Spatial Vision: Models
36.349 | A Robust Co-variation of the Stimulus-specific Bias and Variability across Different Viewing Conditions and Observers, and Its Implication on the Bayesian Account of Orientation Estimation | LEE, SANG HUN | Spatial Vision: Models
36.350 | Limits on Human Contrast Sensitivity Imposed by the Initial Visual Encoding | Hong, Fangfang | Spatial Vision: Models
36.351 | An internal representation of contrast based on magnitude estimation compatible with discrimination | Rodríguez Arribas, Cristina | Spatial Vision: Models
36.352 | Spatially-specific feature tuning drives response properties of macaque IT cortex | Jagadeesh, Akshay V | Spatial Vision: Models
36.353 | Exploring the limits of relational guidance using categorical and non-categorical text cues | Ford, Steven | Visual Search: Cueing, context, scene complexity, semantics
36.354 | Exogenous cues make search less effortful | Lee, Sangji | Visual Search: Cueing, context, scene complexity, semantics
36.355 | Contextual cueing is not restricted to a local context, when the local context cannot be easily segregated | Zheng, Aner | Visual Search: Cueing, context, scene complexity, semantics
36.356 | Contextual cueing in highly complex real-world stimuli | Tomshe, Tom | Visual Search: Cueing, context, scene complexity, semantics
36.357 | Examining differential effects of target and context repetition in visual search: Insights from a big data approach | Siritzky, Emma M. | Visual Search: Cueing, context, scene complexity, semantics
36.358 | Task complexity and onset of visual information influence action planning in a natural foraging task | Kuhn, Danilo A. | Visual Search: Cueing, context, scene complexity, semantics
36.359 | The Interaction of Clutter and Scene Size on Visual Search in Natural Scenes | Si, Wentao | Visual Search: Cueing, context, scene complexity, semantics
36.360 | Robust Target-Related Clutter Metrics for Natural Images | Zhou, Elizabeth | Visual Search: Cueing, context, scene complexity, semantics
36.361 | Assessing The Effect of Stimuli Complexity in Web-Based Visual Foraging | Velazquez, Enilda | Visual Search: Cueing, context, scene complexity, semantics
36.362 | VowelWorld 2.0: Using artificial scenes to study semantic and syntactic scene guidance | Markov, Yuri | Visual Search: Cueing, context, scene complexity, semantics
36.363 | Can multiple equally salient distractors be suppressed simultaneously? | Drisdelle, Brandi Lee | Visual Search: Eye movements, suppression
36.364 | Can people suppress salient visual distractors without foreknowledge of their colors? | McDonald, John | Visual Search: Eye movements, suppression
36.365 | Distractor suppression in primary visual cortex | Richter, David | Visual Search: Eye movements, suppression
36.366 | Revisiting the timing of salient-signal suppression | Tay, Daniel | Visual Search: Eye movements, suppression
36.367 | Effects of search priority on working memory-guided search for real objects: Evidence from eye-movements | Ramzaoui, Hanane | Visual Search: Eye movements, suppression
36.368 | Evaluating the contributions of top-down and bottom-up processing on eye movements during parallel visual search | Tan, Howard Jia He | Visual Search: Eye movements, suppression
36.369 | Viewpoint selection in active visual search | Wu, Tiffany | Visual Search: Eye movements, suppression

Undergraduate Just-In-Time Poster Submissions

VSS 2024 is pleased to announce that the “Just-In-Time” poster sessions for undergraduate students working on independent research projects are now open for submissions. Posters will be presented in person at the annual meeting in one of two sessions, either Saturday, May 18 or Monday, May 20.

VSS welcomes and encourages submissions from a diverse group of eligible students across the globe. To help accomplish this goal we are asking that you share this information with any programs within your institutions that sponsor or promote research for undergraduate students.

Eligibility

Submissions to these sessions are limited to students who:

  • Are currently enrolled in a 3-year or 4-year program leading to a bachelor’s degree; or
  • Have earned a bachelor’s degree in a 3-year program and are currently in their first year of study in a program leading to a master’s degree (students studying at European universities may fall into this category).

Those who already have an abstract accepted for VSS 2024 are not eligible.

Space is limited. Submissions will be accepted from March 1 through April 1, and presenters will be informed of acceptance by April 11.

You must be a current student member (for 2024) to submit an abstract.

A limited number of travel grants are available for undergraduate students who submit abstracts during the Just-in-Time submission period. Travel application information will be available upon submission of the student’s abstract.

For details and to submit an abstract, go to the Undergraduate Just-In-Time Poster Submission Guidelines.

Submission Policies

  • A student may submit only one abstract to the Just-In-Time session.
  • The student must be a current VSS member (for 2024).
  • The student must be registered to attend VSS.
  • Those who already have an abstract accepted for VSS 2024 are not eligible to submit to the Just-In-Time session.
  • Abstracts must describe work that has not been published or accepted for publication at the time of submission.
  • Poster presenter substitutions are not permitted.

Abstract Format

Abstracts are limited to 300 words. This does not include title, authors, and affiliations. Additional space is provided for funding acknowledgments and for declaration of commercial interests and conflicts.

Your abstract should consist of an introduction, methods and results sections, and a conclusion. The sections need not be explicitly labeled as such, but each abstract must contain sufficiently detailed descriptions of the methods and the results. Please do not submit an abstract of work that you are planning to do, or of work without sufficient results to reach a clear conclusion; such abstracts will not be accepted.

Per the VSS Disclosure of Conflict of Interest Policy, authors must reveal any commercial interests or other potential conflicts of interest that they have related to the work described. Any conflicts of interest must be declared on your poster or talk slides.

Please complete your submission carefully. All abstracts must be in final form. Abstracts are not proofread or corrected in any way prior to publication. Typos and other errors cannot be corrected after the deadline. You may edit your abstract as much as you like until the submission deadline.

Given the just-in-time deadline, some aspects will differ from regular VSS submissions. Submissions will be reviewed by members of the VSS Board of Directors and designates. Accepted abstracts will appear in the VSS 2024 program, but unlike submissions accepted following the December review, “Just-In-Time” abstracts will not appear in the Journal of Vision.

If you have any questions, please contact our office.

Submission Schedule

Submissions Open: March 1, 2024
Submissions Close: April 1, 2024
Undergraduate Travel Award Application Deadline: April 5, 2024
Notification of Accepted Abstracts: April 11, 2024

How to Submit

Undergraduate Just-In-Time Poster Submissions are closed.