Attention: Selection, modulation, resource competition

Talk Session: Saturday, May 18, 2024, 2:30 – 4:15 pm, Talk Room 1
Moderator: Stefan Van der Stigchel, Utrecht University

Talk 1, 2:30 pm, 24.11

When processing relationships, visual processing capacity is far less than four

Steven Franconeri1, Tal Boger2; 1Northwestern University, 2Johns Hopkins University

Vision can provide rapid and powerful processing for some tasks, and encounter strong capacity constraints for others, with a typical limit of processing 4 objects at once. But some evidence suggests an even lower capacity limit when processing relationships between objects. We asked people to explore data visualizations with only 4 values, and found that *half* of viewers easily missed surprising, improbable relationships (e.g., a child’s height *decreasing* over time, or a better product costing *less*) in these trivially small datasets. The graph’s design used spatial grouping cues to implicitly deprioritize an improbable relationship, and when the design instead implicitly prioritized those relationships, they were noticed 1.8x-3.4x more often. These demonstrations support an emerging view of a divide between capacity limits on visual processing: When tracking or memorizing a set of objects, capacity hovers around 4. But when computing relationships that require linking features (e.g., object heights or verbal labels) to particular objects, estimated capacity drops to 1-2. The present experiment is consistent with models that predict this surprisingly low level of ability. On the practical side, the results provide immediate guidance to the scientific community (as well as those in education and in organizations) as producers and consumers of data visualization. Graphs should be designed so that key relationships are intuitively recovered, and ‘data storytelling’ techniques – highlighting and annotating data visualizations to help viewers quickly see the ‘right’ pattern – are critical, even for visualizations of trivially simple datasets.
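For illustration only (not the authors' analysis code): a minimal Python sketch of how noticing rates under the two graph designs could be compared; all counts below are hypothetical placeholders, not data from the study.

```python
import math

def two_proportion_ztest(k1, n1, k2, n2):
    """Two-sided z-test comparing two independent proportions."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts of viewers who noticed the improbable relationship
# under a design that implicitly deprioritizes vs. prioritizes it.
noticed_deprio, n_deprio = 24, 100   # ~24% notice
noticed_prio, n_prio = 72, 100       # ~72% notice

rate_ratio = (noticed_prio / n_prio) / (noticed_deprio / n_deprio)
z, p = two_proportion_ztest(noticed_prio, n_prio, noticed_deprio, n_deprio)
print(f"noticing-rate ratio: {rate_ratio:.1f}x, z = {z:.2f}, p = {p:.4f}")
```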

Acknowledgements: NSF IIS-CHS-1901485

Talk 2, 2:45 pm, 24.12

Effort minimization drives saccade selection

Christoph Strauch1, Damian Koevoet1, Laura Van Zantwijk1, Sebastiaan Mathôt2, Marnix Naber1, Stefan Van der Stigchel1; 1Utrecht University, 2University of Groningen

What determines where we move our eyes? Here we hypothesized that saccade costs determine saccade selection. We first mapped saccade costs across directions by cueing participants to prepare a saccade towards a specific direction but to withhold it until after the cue had disappeared. During this preparation phase, we measured pupil size - an indicator of noradrenaline release and mental effort - to index cost. Next, we mapped saccade preferences by presenting any two of the previously cost-mapped saccade directions in a two-alternative free-choice task. We demonstrate for the first time that this cost critically underpins saccade selection: when participants chose between the two possible directions, low-effort options were strongly preferred (R2=0.58). Notably, saccades curved away from high-cost directions, suggesting an active weighing of costs and inhibition of costly alternatives. This general principle held when participants searched in natural scenes: cost remained a predictor of saccade direction preferences. Strikingly, effortful saccade directions were disproportionately avoided as soon as overall load was increased by introducing a secondary auditory counting task (R2=0.50). This implies that cognitive resources are flexibly (dis)allocated from and to oculomotor processes as resource demands change. Together, this shows that even the most subtle differences in cost are actively weighed to tune for resource-efficient behavior. Beyond stimulus material and goals, we therefore argue that eye-movement behavior is largely determined by a distinct and equally fundamental factor: effort.
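For illustration only (not the authors' pipeline): a minimal sketch of how direction-wise choice preferences could be regressed on a pupil-indexed cost measure; the arrays are hypothetical placeholders.

```python
import numpy as np

# Hypothetical per-direction values (e.g., 36 saccade directions):
# pupil_cost: baseline-corrected pupil size during saccade preparation (a.u.)
# preference: proportion of two-alternative choices in which that direction was chosen
rng = np.random.default_rng(0)
pupil_cost = np.linspace(-1.0, 1.0, 36) + rng.normal(0, 0.2, 36)
preference = 0.5 - 0.2 * pupil_cost + rng.normal(0, 0.05, 36)

# Linear fit of preference on cost; R^2 taken as the squared correlation
slope, intercept = np.polyfit(pupil_cost, preference, 1)
r_squared = np.corrcoef(pupil_cost, preference)[0, 1] ** 2
print(f"slope = {slope:.3f}, R^2 = {r_squared:.2f}")  # negative slope: costly directions avoided
```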

Talk 3, 3:00 pm, 24.13

Recognition memory fluctuates with the floodlight of attentional state

Anna Corriveau1, Alfred Chao1, Megan T. deBettencourt1,2, Monica D. Rosenberg1,2,3; 1Department of Psychology, The University of Chicago, 2Institute for Mind and Biology, The University of Chicago, 3Neuroscience Institute, The University of Chicago

Attentional state fluctuates across time and influences what we remember. However, it is not yet understood whether fluctuations in attention affect memory for task-relevant and task-irrelevant information similarly. One possibility is that increased attentional state heightens the roving spotlight of selective attention, resulting in better filtering of irrelevant stimuli. Alternatively, better attentional state may act like a flickering floodlight, with increased attentional capacity allowing for greater processing of irrelevant stimuli. These hypotheses make opposite predictions for subsequent memory of irrelevant stimuli. We collected two online samples (N1=188; N2=185) in which participants viewed a stream of trial-unique stimuli (500 trials) consisting of face images superimposed on scene images and were asked to perform a category judgment on either the faces (males vs. females) or scenes (indoors vs. outdoors) by pressing one key for frequent-category images (e.g., males, 90%) and a different key for infrequent images (e.g., females, 10%). Critically, the other category (scenes or faces) was completely irrelevant for the task. Following the sustained attention task, a surprise test probed recognition memory for both relevant and irrelevant stimuli using a 4-point scale. Logistic models tested whether sustained attention measures predicted memory accuracy. Attention lapses (errors to infrequent stimuli) were preceded by established RT signatures of sustained attention: speed (b1=.640, b2=.617) and variance (b1=-.296, b2=-.223; all ps<.001). As expected, memory was better for task-relevant items (b1=.722, b2=1.37; all ps<.001). Furthermore, correct performance on infrequent trials predicted memory for both task-relevant (b1=.134, p<.001; b2=.201, p<.001) and task-irrelevant (b1=.127, p<.001; b2=.111, p=.033) stimuli in both experiments. These results support the flickering floodlight view of attentional state, such that moments of high attention improve memory for both relevant and irrelevant stimuli.
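For illustration only: the abstract does not give the exact model specification, but a trial-level logistic model of the general kind described (predicting later recognition of an item from pre-exposure attention measures) might look like the sketch below, assuming statsmodels is available; all data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical trial-level data: for each studied item, z-scored RT speed and
# RT variability in the trials preceding it, plus whether the item was later
# recognized (1) or not (0).
rng = np.random.default_rng(1)
n = 500
rt_speed = rng.normal(size=n)        # pre-item response speed (z-scored)
rt_variability = rng.normal(size=n)  # pre-item RT variability (z-scored)
logit_p = 0.3 + 0.6 * rt_speed - 0.25 * rt_variability
remembered = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression: do sustained-attention measures predict memory accuracy?
X = sm.add_constant(np.column_stack([rt_speed, rt_variability]))
fit = sm.Logit(remembered, X).fit(disp=False)
print(fit.params)  # [intercept, b_speed, b_variability]
```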

Acknowledgements: National Science Foundation BCS-2043740 (M.D.R.)

Talk 4, 3:15 pm, 24.14

Spatiotemporal regularities guide motor predictions in a dynamic visual search

Nir Shalev1,2, Noam Tzionit3, Danielle Filmon3, Anna C. Nobre2,4, Ayelet N. Landau3; 1Haifa University, Israel, 2University of Oxford, UK, 3Hebrew University of Jerusalem, Israel, 4Yale University, USA

Attention allows us to prioritise relevant information and ignore distraction in our sensory environment. Since natural scenes are constantly changing, it is important for us to adapt our attentional priorities accordingly. Predictable signals, like traffic lights, allow for anticipation and help us control attention in time and space. In this study, we explore how prediction-led attention shapes the guidance of the motor and oculomotor systems in time and space. We used a dynamic variation of a visual search task, with trials lasting 14 seconds. Each trial included eight targets that faded in and out of the display among visual distractors. Participants moved their eyes freely and used the mouse pointer to click on the targets. Critically, we embedded spatiotemporal regularities in each trial by presenting four of the eight targets at the same time and approximate location throughout the experiment; the remaining four targets could appear at any time and location. We also manipulated distraction load by varying the number of irrelevant stimuli appearing in each trial. Our results offer a detailed description of the learning dynamics and prediction formation. Participants were faster and more accurate at detecting predictable targets compared to unpredictable ones. In line with the visual search literature, we also found that increasing the number of visual distractors reduced accuracy and slowed responses. By tracking mouse and eye movements, we discovered that predictions enabled earlier and faster movements towards targets. Interestingly, we also observed earlier and more pronounced movements of the hand and eyes away from predictable targets once they were selected. These findings enhance our understanding of the real-time impact of prediction formation. In our presentation, we will provide a detailed description of these patterns under varying levels of visual distraction and discuss how they emerge during the task as a consequence of learning.
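For illustration only (not the authors' analysis): a minimal sketch of how a per-participant prediction benefit in click latency could be quantified, assuming scipy is available; all values are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-participant mean click latencies (seconds) for targets
# appearing at learned (predictable) vs. random (unpredictable) times/locations.
rng = np.random.default_rng(2)
n_participants = 30
latency_unpredictable = rng.normal(1.60, 0.20, n_participants)
latency_predictable = latency_unpredictable - rng.normal(0.15, 0.05, n_participants)

# Paired comparison of the two conditions within participants
t, p = ttest_rel(latency_predictable, latency_unpredictable)
benefit = np.mean(latency_unpredictable - latency_predictable)
print(f"prediction benefit = {benefit * 1000:.0f} ms, t = {t:.2f}, p = {p:.4f}")
```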

Acknowledgements: ANL: JSMF Scholar Award, ISF (958/16 & 1899/21), TIMECODE ERC Starting Grant (852387), Joy Ventures, and the Product Academy Award. NS: Daniel Turnberg Fellowship, Academy of Medical Sciences. ACN: Wellcome Investigator Award (104571/Z/14/Z) and the JSMF Collaborator Award (220020448)

Talk 5, 3:30 pm, 24.15

Attention robustly dissociates objective performance and subjective visibility reports

Karen Tian1, Brian Maniscalco2, Michael Epstein1, Angela Shen2, Olenka Graham Castaneda2, Giancarlo Arzu2, Taiga Kurosawa2, Jennifer Motzer1, Emil Olsson2, Lizbeth Romero2, Emily Russell1, Meghan Walsh1, Juneau Wang1, Tugral Bek Awrang Zeb2, Richard Brown3, Victor Lamme4, Hakwan Lau5, Biyu He6, Jan Brascamp7, Ned Block6, David Chalmers6, Megan Peters2, Rachel Denison1; 1Boston University, 2University of California Irvine, 3City University of New York, 4University of Amsterdam, 5RIKEN Center for Brain Science, 6New York University, 7Michigan State University

Background: Findings of subjective inflation, in which subjective reports of unattended, peripheral stimuli are stronger than the accuracy of sensory processing would suggest, have motivated higher-order theories of consciousness. However, empirical tests of subjective inflation have been surprisingly limited. Generally, they have used a single pair of near-threshold stimulus strengths (weaker for attended and stronger for unattended) to equate objective performance, leaving it unclear whether inflation arises from decision biases and whether inflation extends beyond threshold perception. Goal: In a preregistered adversarial collaboration, we rigorously tested whether attention dissociates subjective reports and objective performance across a range of stimulus strengths and types. Methods: In three experiments, human observers (n=30/experiment) performed a spatial attentional cueing task. On each trial, observers viewed up to four peripheral targets, which varied independently across 7 stimulus strengths. A central precue (60% valid, 20% neutral, 20% invalid) directed attention to one or all target locations. A response cue instructed observers to simultaneously make 1) an objective orientation report and 2) a subjective visibility report. Targets were texture-defined figure-ground ovals (Experiments 1 and 2) or contrast-defined gratings (Experiment 3), presented at threshold (Experiments 1 and 3) or suprathreshold (Experiment 2) stimulus strengths. To assess subjective inflation, we developed an area-under-the-curve approach to quantitatively relate objective and subjective reports across stimulus strengths for matched levels of orientation discriminability. Results: We found strong and consistent subjective inflation under inattention across all experiments. Across a range of threshold and suprathreshold stimulus strengths, and across stimulus types, subjective visibility was reported as higher for unattended than for attended stimuli when orientation discriminability was equated. Conclusion: Inattention robustly inflates subjective visibility reports, and inflation is not confined to threshold vision. Whether sensory signals suffice to explain subjective visibility reports when they come apart from objective performance may help arbitrate between competing theories of consciousness.
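For illustration only: the abstract does not specify the exact computation, but one plausible way to implement an area-under-the-curve comparison of subjective visibility at matched objective performance is sketched below; all numbers are hypothetical placeholders.

```python
import numpy as np

# Hypothetical condition-wise summaries: for each of the 7 stimulus strengths,
# objective orientation discriminability (d') and mean subjective visibility,
# separately for attended (valid cue) and unattended (invalid cue) targets.
dprime_att = np.array([0.2, 0.6, 1.0, 1.5, 2.0, 2.5, 3.0])
visibility_att = np.array([1.1, 1.4, 1.8, 2.2, 2.6, 3.0, 3.3])
dprime_unatt = np.array([0.1, 0.4, 0.8, 1.2, 1.6, 2.0, 2.4])
visibility_unatt = np.array([1.2, 1.6, 2.1, 2.6, 3.0, 3.3, 3.6])

# Compare visibility at matched objective performance: interpolate both
# visibility curves onto a common range of d' and take the area under each.
d_grid = np.linspace(max(dprime_att.min(), dprime_unatt.min()),
                     min(dprime_att.max(), dprime_unatt.max()), 100)
vis_att = np.interp(d_grid, dprime_att, visibility_att)
vis_unatt = np.interp(d_grid, dprime_unatt, visibility_unatt)

auc_att = np.trapz(vis_att, d_grid)
auc_unatt = np.trapz(vis_unatt, d_grid)
print(f"inflation index (unattended AUC minus attended AUC): {auc_unatt - auc_att:.2f}")
```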

Acknowledgements: Templeton World Charity Foundation Accelerating Research on Consciousness initiative TWCF 0567 (to BH, JB, NB, DC, RD, MP)

Talk 6, 3:45 pm, 24.16

Pupil size reveals presaccadic attentional shifts upward and downward: A possible dissociation between the where and how of attention

Damian Koevoet1, Christoph Strauch1, Marnix Naber1, Stefan Van der Stigchel1; 1Utrecht University

Humans frequently move their eyes to foveate relevant information in the world. It is dominantly assumed that attentional shifts must precede saccades to prepare the brain for postsaccadic retinal input, allowing for perceptual continuity across eye movements. A recent surge of studies has investigated visual anisotropies around the visual field, including in presaccadic attention. These studies demonstrated benefits of presaccadic attention on task performance for horizontal and downward saccades, but not for upward saccades. This contrasts with the dominant view: if attention is not shifted prior to upward saccades, presaccadic attention may not be necessary to facilitate perceptual continuity. Here we capitalized on the fact that the pupil light response robustly tracks attention to investigate whether presaccadic attention shifts upward and downward. Crucially, we manipulated whether presaccadic attention could shift toward the background brightness of the ensuing saccade target by presenting that brightness either throughout the trial or only upon saccade onset. In two experiments, we observed an acceleration of the onset of the pupil light response for both upward and downward saccades when the landing brightness could be prepared prior to the saccade. This shows that presaccadic attention is deployed, and can facilitate perceptual continuity, along the vertical meridian. In combination with previous work, these results suggest that presaccadic attention can be shifted in space without enhancing specific facets (e.g. contrast sensitivity) of visual processing at the deployed location. The known underrepresentation of the upper visual field in early visual cortex may underlie the dissociation between where attention is deployed and how it affects visual processing. However, more work is necessary to identify when, and how, such dissociations occur.
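For illustration only (not the authors' method): a minimal sketch of one way a pupil light response onset latency could be estimated from a single trace; the baseline window, 5% criterion, and simulated trace are arbitrary illustrative choices.

```python
import numpy as np

def plr_onset_latency(trace, fs, baseline_ms=200, frac=0.05):
    """Return the time (ms, from trace start) at which the pupil has constricted
    by `frac` of its total constriction relative to the pre-saccadic baseline.
    Returns np.nan if the criterion is never reached."""
    n_base = int(baseline_ms / 1000 * fs)
    baseline = trace[:n_base].mean()
    total_constriction = baseline - trace.min()
    crossed = np.where(baseline - trace[n_base:] > frac * total_constriction)[0]
    return (crossed[0] + n_base) / fs * 1000 if crossed.size else np.nan

# Hypothetical single-trial pupil trace (arbitrary units) sampled at 1000 Hz,
# time-locked to saccade onset, with a simulated constriction midway through.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
trace = 5.0 - 0.5 / (1 + np.exp(-(t - 0.45) * 40))
print(f"pupil light response onset: {plr_onset_latency(trace, fs):.0f} ms")
# Comparing such latencies between conditions (landing brightness previewable
# vs. not) would index whether preparation accelerated the light response.
```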

Talk 7, 4:00 pm, 24.17

Attentional sampling between eye channels

Ayelet Landau1, Daniele Re1; 1The Department of Psychology and the Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem

Eye channels refer to the processing of visual information from each eye before integration in V1 (Hubel & Wiesel, 1977). During development, inputs from both eyes initially overlap in the visual cortex. However, through competitive interactions between neurons with different ocular preferences, the inputs become segregated into distinct columns (Hensch, 2005). This process involves competition between neurons representing the left and right eyes (Tagawa et al., 2005). When the visual system processes several inputs, competitive and suppressive interactions are foundational to the neuronal response. Attention, the biasing of selection towards relevant parts of a scene, has previously been found to be implemented through rhythmic brain activity. Like brain rhythms, performance also fluctuates over time. Specifically, when more than one object is attended, objects are selected in alternation. In this study we sought to investigate whether this phenomenon, called attentional sampling, also emerges in the unconscious selection process among eye channels. We presented a display with a single object to both eyes and manipulated whether a cue and a detection target were presented to both eyes or to different eyes. We assume that presenting a cue to one eye biases the selection process towards content presented to that eye. Target detection fluctuated at 8 Hz in the binocular condition, and at 4 Hz when the dominant eye was cued. This is consistent with findings reporting that competition between receptive fields leads to sampling. The findings also demonstrate that sampling under competition does not rely on aware processes.
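For illustration only (not the authors' analysis): a minimal sketch of how a behavioral sampling frequency could be estimated from detection accuracy binned by cue-to-target interval; the time course below is synthetic, with an 8 Hz modulation built in.

```python
import numpy as np

# Hypothetical accuracy time course: mean target-detection accuracy binned by
# cue-to-target interval, sampled every 20 ms from 0.3 to 1.3 s after the cue.
rng = np.random.default_rng(3)
t = np.arange(0.3, 1.3, 0.02)                    # 50 bins, i.e. 50 Hz sampling
accuracy = 0.75 + 0.05 * np.sin(2 * np.pi * 8 * t) + rng.normal(0, 0.01, t.size)

# Detrend, then take the amplitude spectrum and locate the behavioral peak.
detrended = accuracy - np.polyval(np.polyfit(t, accuracy, 1), t)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(t.size, d=0.02)
peak = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin
print(f"dominant behavioral frequency: {peak:.1f} Hz")
```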

Acknowledgements: The Brain Attention and Time Lab (ANL) is grateful for the support of the James McDonnell Scholar Award in Understanding Human Cognition, ISF Grants 958/16 and 1899/21, TIMECODE ERC Starting Grant No. 852387 as well as Joy Ventures Research Grant and the Product Academy Award.