Vision and Visualization: Inspiring Novel Research Directions in Vision Science

Time/Room: Friday, May 18, 2018, 12:00 – 2:00 pm, Talk Room 2
Organizer(s): Christie Nothelfer, Northwestern University; Madison Elliott, UBC; Zoya Bylinskii, MIT; Cindy Xiong, Northwestern University; & Danielle Albers Szafir, University of Colorado Boulder
Presenters: Ronald A. Rensink, Aude Oliva, Steven Franconeri, Danielle Albers Szafir


Symposium Description

Data is ubiquitous in the modern world, and its communication, analysis, and interpretation are critical scientific issues. Visualizations leverage the capabilities of the visual system, allowing us to intuitively explore and generate novel understandings of data in ways that fully-automated approaches cannot. Visualization research builds an empirical framework around design guidelines, perceptual evaluation of design techniques, and a basic understanding of the visual processes associated with viewing data displays. Vision science offers the methodologies and phenomena that can provide foundational insight into these questions. Challenges in visualization map directly onto many vision science topics, such as finding data of interest (visual search), estimating data means and variance (ensemble coding), and determining optimal display properties (crowding, salience, color perception). Given the growing interest in psychological work that advances basic knowledge and allows for immediate translation, visualization provides an exciting new context for vision scientists to confirm existing hypotheses and explore new questions. This symposium will illustrate how interdisciplinary work across vision science and visualization improves visualization techniques while advancing our understanding of the visual system, and will inspire new research opportunities at the intersection of these two fields.

Historically, the crossover between visualization and vision science relied heavily on canonical findings, but this has changed significantly in recent years. Visualization work has recently incorporated and iterated on newer vision research, and the results have been met with great excitement from both sides (e.g., Rensink & Baldridge, 2010; Haroz & Whitney, 2012; Harrison et al., 2014; Borkin et al., 2016; Szafir et al., 2016). Unfortunately, very little of this work is presented regularly at VSS, and there is currently no dedicated venue for collaborative exchanges between the two research communities. This symposium showcases the current state of vision science and visualization research integration, and aspires to make VSS a home for future exchanges. Visualization would benefit from sampling a wider set of vision topics and methods, while vision scientists would gain a new real-world context that simultaneously provokes insight about the visual system and holds translational impact.

This symposium will first introduce the benefits of collaboration between the vision science and visualization communities, including the discussion of a specific example: correlation perception (Ronald Rensink). Next, we will discuss the properties of salience in visualizations (Aude Oliva), how we extract patterns, shapes, and relations from data points (Steven Franconeri), and how color perception is affected by the constraints of visualization design (Danielle Albers Szafir). Each talk will be 25 minutes long. The speakers, representing both fields, will demonstrate how studying these topics in visualizations has uniquely advanced our understanding of the visual system, show what research in these cross-disciplinary projects looks like, and propose open questions to propel new research in both communities. The symposium will conclude with an open discussion about how the vision science and visualization communities can mutually benefit from deeper integration. We expect these topics to be of interest to VSS members working across a multitude of vision science areas, including pattern recognition, salience, shape perception, color perception, and ensemble coding.

Presentations

Information Visualization and the Study of Visual Perception

Speaker: Ronald A. Rensink, Departments of Psychology and Computer Science, UBC

Information visualization and vision science can interact in three different (but compatible) ways. The first uses knowledge of human vision to design more effective visualizations. The second adapts measurement techniques originally developed for laboratory experiments to assess performance on given visualizations. A third way has been proposed more recently: the study of restricted versions of existing visualizations. These can be considered “fruit flies”: systems that exist in the real world but are still simple enough to study. This approach can help us discover why a visualization works, and can give us new insights into visual perception as well. An example of this is the perception of Pearson correlation in scatterplots. Performance here can be described by two linked laws: a linear one for discrimination and a logarithmic one for perceived magnitude (Rensink & Baldridge, 2010). These laws hold under a variety of conditions, including when properties other than spatial position are used to convey information (Rensink, 2014). Such behavior suggests that observers can infer probability distributions in an abstract two-dimensional parameter space (likely via ensemble coding), and can use these to estimate entropy (Rensink, 2017). These results show that interesting aspects of visual perception can be discovered using restricted versions of real visualization systems. It is argued that the perception of correlation in scatterplots is far from unique in this regard; a considerable number of these “fruit flies” exist, many of which are likely to cast new light on the intelligence of visual perception.
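
For concreteness, the two linked laws can be written schematically as below. The notation (slope k, ceiling b slightly above 1) is illustrative rather than the authors' own; the logarithmic form is what Fechnerian integration of the reciprocal jnd yields, normalized so that g(0) = 0 and g(1) = 1.

```latex
% Linear discrimination law: the jnd shrinks as correlation nears the ceiling b
\mathrm{jnd}(r) \;\approx\; k\,(b - r), \qquad b \gtrsim 1
% Logarithmic magnitude law: integrate 1/\mathrm{jnd}(r), then normalize
g(r) \;\approx\; \frac{\ln(1 - r/b)}{\ln(1 - 1/b)}
```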

Where do people look on data visualizations?

Speaker: Aude Oliva, Massachusetts Institute of Technology
Additional Authors: Zoya Bylinskii, MIT

What guides a viewer’s attention when she catches a glimpse of a data visualization? What happens when the viewer studies the visualization more carefully, to complete a cognitively-demanding task? In this talk, I will discuss the limitations of computational saliency models for predicting eye fixations on data visualizations (Bylinskii et al., 2017). I will present perception and cognition experiments to measure where people look in visualizations during encoding to, and retrieval from, memory (Borkin, Bylinskii, et al., 2016). Motivated by the clues that eye fixations give about higher-level cognitive processes like memory, we sought a way to crowdsource attention patterns at scale. I will introduce BubbleView, our mouse-contingent interface for approximating eye tracking (Kim, Bylinskii, et al., 2017). BubbleView presents participants with blurred visualizations and allows them to click to expose “bubble” regions at full resolution. We show that up to 90% of eye fixations on data visualizations can be accounted for by the BubbleView clicks of online participants completing a description task. Armed with a tool to collect attention patterns on images efficiently and cheaply (patterns we call “image importance” to distinguish them from “saliency”), we collected BubbleView clicks for thousands of visualizations and graphic designs to train computational models (Bylinskii et al., 2017). Our models run in real-time to predict image importance on new images. This talk will demonstrate that our models of attention for natural images do not transfer to data visualizations, and that using data visualizations as stimuli for perception studies can open up fruitful new research directions.
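
To make the interface concrete, here is a minimal sketch of the bubble-reveal operation in Python with Pillow. It is illustrative only: the actual BubbleView tool is a web interface, and the blur radius, bubble radius, and function name here are assumptions rather than the authors' code.

```python
# Minimal sketch of a BubbleView-style reveal: a blurred image with
# full-resolution circular "bubbles" composited at each click position.
from PIL import Image, ImageDraw, ImageFilter

def bubble_view(image_path, clicks, blur_radius=10, bubble_radius=40):
    """Return a blurred copy of the image with sharp circular regions
    revealed at each (x, y) click position. Radii are illustrative."""
    sharp = Image.open(image_path).convert("RGB")
    blurred = sharp.filter(ImageFilter.GaussianBlur(blur_radius))

    # Grayscale mask: white disks mark where clicks reveal detail.
    mask = Image.new("L", sharp.size, 0)
    draw = ImageDraw.Draw(mask)
    for x, y in clicks:
        draw.ellipse([x - bubble_radius, y - bubble_radius,
                      x + bubble_radius, y + bubble_radius], fill=255)

    # Composite: sharp pixels inside the bubbles, blurred pixels elsewhere.
    return Image.composite(sharp, blurred, mask)

# Example: reveal two regions of a chart image (hypothetical file name).
# bubble_view("chart.png", [(120, 80), (300, 210)]).save("bubbleview.png")
```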

Segmentation, structure, and shape perception in data visualizations

Speaker: Steven Franconeri, Northwestern University

The human visual system evolved and develops to perceive scenes, faces, and objects in the natural world, and this is where vision scientists justly focus their research. But humans have adapted that system to process artificial worlds on paper and screens, including data visualizations. I’ll demonstrate two examples of how studying the visual system within such worlds can provide vital cross-pollination for our basic research. First, a complex line or bar graph can be either powerful or vexing for students and scientists. What is the suite of tools available to us for extracting the patterns within it? Our existing research is a great start: I’ll show how the commonly encountered ‘magical number 4’ (Choo & Franconeri, 2013) limits processing capacity, and how the literature on shape silhouette perception could predict how we segment such graphs. But even more questions are raised: what is our internal representation of the ‘shape’ of data? What types of changes to the data can we notice, and what changes would leave us blind? Second, artificial displays require that we recognize relationships among objects (Lovett & Franconeri, 2017), as when you quickly extract two main effects and an interaction from a 2×2 bar graph. We can begin to explain these feats through multifocal attention or ensemble processing, but soon fall short. I will show how these real-world tasks inspire new research on relational perception, highlighting eye-tracking work that reveals multiple visual tools for extracting relations based on global shape vs. contrasts between separate objects.
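
As a concrete instance of the relational task described above, the short sketch below draws a 2×2 bar graph from which a viewer might read off two main effects and an interaction; the data values are invented for illustration.

```python
# Illustrative 2x2 bar graph: two main effects plus an interaction
# (the difference between levels grows from condition A to B).
import numpy as np
import matplotlib.pyplot as plt

means = np.array([[2.0, 3.0],    # condition A: factor-2 levels 1 and 2
                  [3.5, 6.0]])   # condition B: larger gap -> interaction
x = np.arange(2)                 # factor 1 (A, B) on the x-axis
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, means[:, 0], width, label="Level 1")
ax.bar(x + width / 2, means[:, 1], width, label="Level 2")
ax.set_xticks(x)
ax.set_xticklabels(["A", "B"])
ax.set_ylabel("Response")
ax.legend(title="Factor 2")
plt.show()
```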

Color Perception in Data Visualizations

Speaker: Danielle Albers Szafir, University of Colorado Boulder

Many data visualizations use color to convey values. These visualizations commonly rely on vision science research to match important properties of data to colors, ensuring that people can, for example, identify differences between values, select data subsets, or match values against a legend. Applying vision research to color mappings also creates new questions for vision science. In this talk, I will discuss several studies that address knowledge gaps in color perception raised through visualization, focusing on color appearance, lightness constancy, and ensemble coding. First, conventional color appearance models assume colors are applied to uniformly-shaped patches subtending 2° or 10°; however, visualizations map colors to small marks (often less than 0.5°) that vary in size and geometry (e.g., bar graphs, line charts, or maps), and perception of color differences degrades as marks become smaller and thinner (Szafir, 2018). Second, many 3D visualizations embed data along surfaces where shadows may obscure data, requiring lightness constancy to accurately resolve values. Synthetic rendering techniques used to improve interaction or emphasize aspects of surface structure manipulate constancy, influencing people’s abilities to interpret shadowed colors (Szafir, Sarikaya, & Gleicher, 2016). Finally, visualizations frequently require ensemble coding of large collections of values (Szafir et al., 2016). Accuracy differences across visualizations for value identification (e.g., extrema) and summary tasks (e.g., mean) suggest differences in ensemble processing for color and position (Albers, Correll, & Gleicher, 2014). I will close by discussing open challenges for color perception arising from visualization design, use, and interpretation.
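
A hedged sketch of what size-aware color difference checking might look like follows. The sRGB-to-CIELAB conversion and CIE76 delta-E are standard formulas; the size adjustment and its coefficients are stand-ins to convey the idea, not the fitted model from Szafir (2018).

```python
import math

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIELAB under a D65 white point."""
    # Linearize the sRGB gamma curve.
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (v / 255.0 for v in rgb)]
    # Linear sRGB -> XYZ (D65).
    x = 0.4124564 * lin[0] + 0.3575761 * lin[1] + 0.1804375 * lin[2]
    y = 0.2126729 * lin[0] + 0.7151522 * lin[1] + 0.0721750 * lin[2]
    z = 0.0193339 * lin[0] + 0.1191920 * lin[1] + 0.9503041 * lin[2]
    # XYZ -> Lab.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 \
            else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    """CIE76 color difference between two sRGB triples."""
    return math.dist(srgb_to_lab(c1), srgb_to_lab(c2))

def discriminable(c1, c2, mark_size_deg, base=2.3, k=1.0):
    """Is the color pair likely distinguishable at a given mark size
    (visual degrees)? The threshold grows as marks shrink; 'base' and
    'k' are illustrative placeholders, not empirically fitted values."""
    return delta_e(c1, c2) > base * (1 + k / mark_size_deg)

# Example: a pair acceptable for large marks may fail for thin lines.
# discriminable((70, 130, 180), (60, 120, 170), mark_size_deg=0.3)
```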
