The structure of visual working memory
Time/Room: Friday, May 10, 1:00 - 3:00 pm, Royal 1-3
Organizer: Wei Ji Ma, Baylor College of Medicine
Presenters: Steven J. Luck, Wei Ji Ma, Paul M. Bays, George Alvarez, Robert Jacobs
TWO THEORETICAL ISSUES
Working memory is an essential component of perception, cognition, and action. The past eight years have seen a surge of activity aimed at understanding the structure of visual working memory. This symposium brings together some of the leading thinkers in this field to discuss two central theoretical issues: slots versus resources, and the role of context.
SLOTS VERSUS RESOURCES
Working memory is widely believed to be subject to an item limit: no more than a fixed number of items can be stored, and any additional items are forgotten. In 2004, Wilken and Ma challenged this notion and advocated an alternative framework in which a continuous memory resource is divided among all items, with errors explained by the quality of encoding rather than the quantity of remembered items. Since then, arguments have been made on both sides, notably by speakers in this symposium (Luck, Bays, Alvarez, Ma). New concepts introduced in this debate include variable precision, non-target reports, Bayesian inference, and the neural substrate of memory resource. Intriguingly, all speakers have recently used the same visual working memory paradigm, delayed estimation, to draw sometimes conflicting conclusions. We therefore expect a lively exchange of ideas.
THE ROLE OF CONTEXT
In the slots-versus-resources debate, items are routinely assumed to be encoded independently in working memory. This assumption is likely wrong, but how wrong? Recent work has revealed large effects of the context in which an item is presented. Items seem to be remembered in groups or ensembles organized by space or feature, and this introduces predictable biases. The groups of Alvarez and Jacobs have proposed hierarchical Bayesian models to quantify these context effects, and both will speak about these data and models.
TARGET AUDIENCE
The symposium aims to present current debates and open questions in the study of visual working memory to a broad audience. We believe it will be of interest to students, postdocs, and faculty. The contents should be useful to a very large VSS audience: anyone studying multiple-object working memory or attention using psychophysics, electrophysiology (including EEG/MEG), modeling, or neuroimaging. The symposium could benefit them by suggesting new theoretical frameworks for thinking about data, as well as new experimental paradigms.
Continuous versus discrete models of visual working memory capacity
Speaker: Steven J. Luck, University of California, Davis
Author: Weiwei Zhang, University of California, Davis
Working memory plays a key role in visual cognition, allowing the visual system to span the gaps created by blinks and saccades and providing a major source of control over attention and eye movements. Moreover, measurements of visual working memory capacity for simple visual features are strongly correlated with individual differences in higher cognitive abilities and are related to psychiatric and neurological disorders. It is therefore critically important that we understand the nature of capacity limits in visual working memory. Two major classes of theories have been proposed: discrete theories, in which a limited number of items can be concurrently stored with high resolution, and continuous theories, in which a potentially limitless number of items can be stored by reducing the precision of the representations. In this talk, we will review 15 years of research on the nature of visual working memory representations and present new evidence that favors discrete representations.
Continuous resources and variable precision in working memory
Speaker: Wei Ji Ma, Baylor College of Medicine
Authors: Ronald van den Berg, Baylor College of Medicine; Hongsup Shin, Baylor College of Medicine
In comparisons between item-limit and continuous-resource models of working memory, the continuous-resource model tested is usually a stereotyped one in which memory resource is divided equally among items. This model cannot account for human behavior. We recently introduced the notion that resource (mnemonic precision) is variable across items and trials. This model provides excellent fits to data and outperforms item-limit models in explaining delayed-estimation data. When studying change detection, a model of memory is not enough, since the task contains a decision stage. Augmenting the variable-precision model of memory with a Bayesian decision model provides the best available account of change detection performance across set sizes and change magnitudes. Finally, we argue that variable, continuous precision has a plausible neural basis in the gain of a neural population. Our results and those of other groups overhaul long-held beliefs about the limitations of working memory.
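The variable-precision idea can be made concrete with a small simulation. The sketch below is an illustrative toy, not the authors' fitted model: it assumes mean precision falls with set size as a power law (exponent -1), draws each trial's precision from a gamma distribution, and generates von Mises recall errors whose concentration equals that precision. All parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_delayed_estimation(set_size, n_trials=10000,
                                mean_precision=6.0, scale=2.0):
    """Simulate delayed-estimation errors under a variable-precision model.

    Illustrative assumptions (not the authors' exact parameters):
    - mean precision declines with set size as a power law (exponent -1),
    - per-trial precision is gamma-distributed around that mean,
    - recall error is von Mises noise with concentration = precision.
    """
    # Mean precision per item falls as set size grows
    j_bar = mean_precision * set_size ** -1.0
    # Precision varies across items and trials: gamma with mean j_bar
    j = rng.gamma(shape=j_bar / scale, scale=scale, size=n_trials)
    # Error about the true feature value (radians)
    return rng.vonmises(mu=0.0, kappa=np.maximum(j, 1e-6), size=n_trials)

# Error spread grows smoothly with set size -- no hard item limit
for n in (1, 2, 4, 8):
    err = simulate_delayed_estimation(n)
    print(n, round(float(np.sqrt(np.mean(err ** 2))), 2))
```

Fitting such a model to observed error histograms across set sizes, and comparing it against item-limit alternatives, is the kind of analysis the talk describes.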
Working memory capacity and allocation reflect noise in neural storage
Speaker: Paul M. Bays, University College London
A key claim differentiating "resource" from "slot" models of WM is that resources can be allocated flexibly, enhancing the mnemonic precision of some visual elements at a cost to others. While salient visual events are found to have a short-lived influence on WM that is rapidly suppressed, informative cues lead to a long-lasting reallocation of resources. We argue that resource limits in working memory are a direct consequence of stochasticity (noise) in neural representations. A model based on population coding reproduces the empirical relationship between error distributions and memory load and demonstrates that observers allocate limited neural resources in a near-optimal fashion.
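A toy population-coding simulation illustrates how neural noise alone can produce a resource-like limit. The sketch below is an assumption-laden illustration, not Bays' actual model: a fixed total spike budget (here arbitrarily 50) is split equally among items, each item's feature is encoded by von Mises tuning curves with Poisson spiking noise, and the feature is decoded by maximum likelihood (which, for this tuning, reduces to the population vector). More items means lower gain per item and noisier estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def population_decode_error(n_items, total_gain=50.0,
                            n_neurons=64, n_trials=2000):
    """Decoding error for one item when a fixed spike budget is shared
    among n_items (illustrative toy, assumed parameters)."""
    prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
    gain = total_gain / n_items   # neural resource shared among items
    kappa = 2.0                   # tuning-curve width (assumed)
    errors = np.empty(n_trials)
    for t in range(n_trials):
        theta = rng.uniform(-np.pi, np.pi)
        # Von Mises tuning curves scaled by the per-item gain
        rates = gain * np.exp(kappa * (np.cos(prefs - theta) - 1.0))
        spikes = rng.poisson(rates)
        # ML decoding for von Mises tuning = population-vector readout
        est = np.angle(np.sum(spikes * np.exp(1j * prefs)))
        # Wrap the error to (-pi, pi]
        errors[t] = np.angle(np.exp(1j * (est - theta)))
    return errors
```

Plotting the distribution of `population_decode_error(n)` for increasing `n` reproduces the qualitative signature the talk discusses: error distributions that widen continuously with memory load rather than breaking at a fixed item count.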
Beyond Slots vs. Resources
Speaker: George Alvarez, Harvard University
Authors: Timothy Brady, Harvard University; Daryl Fougnie, Harvard University; Jordan Suchow, Harvard University
Slot and resource models have been influential in the study of visual working memory capacity. However, several recent empirical findings are not explicitly predicted by either model. These findings include: (1) a shared limit on the fidelity of working memory and long-term memory, (2) stochastic variability in working memory that is not explained by uneven allocation of a commodity such as slots or resources, and (3) the existence of structured representations. Together, these findings demand either significant modification of existing slot and resource models, or the introduction of a new framework for understanding visual working memory capacity.
A Probabilistic Clustering Theory of the Organization of Visual Short-Term Memory
Speaker: Robert Jacobs, University of Rochester
Author: A. Emin Orhan, University of Rochester
Some models of visual short-term memory (VSTM) assume that memories for individual items are independent. Recent experimental evidence indicates that this assumption is false. People's memories for individual items are influenced by the other items in a scene. We develop a Probabilistic Clustering Theory (PCT) for modeling the organization of VSTM. PCT states that VSTM represents a set of items in terms of a probability distribution over all possible clusterings or partitions of those items. Because PCT considers multiple possible partitions, it can represent an item at multiple granularities or scales simultaneously. Moreover, using standard probabilistic inference, it automatically determines the appropriate partitions for the particular set of items at hand, and the probabilities or weights that should be allocated to each partition. A consequence of these properties is that PCT accounts for experimental data that have previously motivated hierarchical models of VSTM, thereby providing an appealing alternative to hierarchical models with pre-specified, fixed structures.
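A minimal instance of the clustering idea can be sketched for three items. The toy below is an assumption-laden illustration, not PCT itself: it assumes one-dimensional Gaussian feature values, Gaussian cluster means, and a uniform prior over partitions, then computes the posterior weight of each of the five partitions of three items from closed-form marginal likelihoods. Two similar items and one outlier concentrate the posterior on the partition that groups the similar pair.

```python
import math

# All five partitions of three items, by index
PARTITIONS = [
    [[0, 1, 2]],            # one ensemble
    [[0, 1], [2]],
    [[0, 2], [1]],
    [[1, 2], [0]],
    [[0], [1], [2]],        # all independent
]

def log_cluster_ml(xs, sigma2=0.25, tau2=4.0):
    """Log marginal likelihood that the values xs share one cluster,
    with cluster mean ~ N(0, tau2) and items ~ N(mean, sigma2).
    Variances are assumed for illustration."""
    n, s, ss = len(xs), sum(xs), sum(x * x for x in xs)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - 0.5 * math.log(1 + n * tau2 / sigma2)
            - ss / (2 * sigma2)
            + tau2 * s * s / (2 * sigma2 * (sigma2 + n * tau2)))

def partition_posterior(items):
    """Posterior over partitions, assuming a uniform partition prior."""
    logps = [sum(log_cluster_ml([items[i] for i in c]) for c in part)
             for part in PARTITIONS]
    m = max(logps)
    ws = [math.exp(lp - m) for lp in logps]
    return [w / sum(ws) for w in ws]

# Items 0 and 1 are similar; item 2 is an outlier, so the partition
# {{0,1},{2}} dominates the posterior
print(partition_posterior([1.0, 1.2, 5.0]))
```

Averaging each item's reconstruction over this partition posterior is what yields the group-level biases that PCT uses to explain the context effects described above.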