Exploring the limits of relational guidance using categorical and non-categorical text cues

Poster Presentation 36.353: Sunday, May 19, 2024, 2:45 – 6:45 pm, Banyan Breezeway
Session: Visual Search: Cueing, context, scene complexity, semantics

Steven Ford1, Younha Collins1, Daniel Go1, Joseph Schmidt1; 1University of Central Florida

Objects in the environment do not exist in isolation; they exist relative to other objects (your wallet may be to the right of your keys). Recent work suggests that following a pictorial target preview, spatial relationships between objects do guide search, as measured by the proportion of trials in which the target pair is fixated first (Ford et al., 2021; Ford et al., in revision). To explore the limits of this finding, we conducted three experiments assessing the oculomotor guidance of attention generated by spatial relationships in response to text cues. In all three experiments, participants searched for arbitrary object pairs in particular spatial arrangements (e.g., "fish above car") among other pairs of random objects, and we compared performance between matched (the target pair matched the cued spatial relationship) and swapped (the target pair's relationship was reversed) search displays. Experiment one investigated relational guidance using categorical text cues, with one or both objects cued. Experiment two also used categorical text cues, but both objects were always cued, and the search array contained both, one, or neither of the cued objects in matched or swapped arrangements. Relational guidance did not emerge in either experiment, suggesting that it might rely on highly specific visual features. To test this possibility, we conducted a final experiment in which participants memorized a limited set of targets until they could verbally describe each object's specific visual features. They were then given text cues pertaining to the specific targets they had memorized. In this case, relational information did impact oculomotor search guidance. These findings suggest that relational guidance extends beyond pictorial previews but depends on well-learned visual features that can be precisely coded. Variance in the visual features associated with an object category may eliminate relational guidance.