Sample-Oriented Task-Driven Visualizations: Allowing Users To Make Better, More Confident Decisions



Nivan Ferreira, New York University, Six MetroTech Center, Brooklyn, NY 11201 USA
Danyel Fisher, Microsoft Research, 1 Microsoft Way, Redmond, WA 98052 USA
Arnd Christian König, Microsoft Research-XCG, 1 Microsoft Way, Redmond, WA 98052 USA

ABSTRACT
We often use datasets that reflect samples, but many visualization tools treat data as full populations. Uncertain visualizations are good at representing data distributions emerging from samples, but are more limited in allowing users to carry out decision tasks. This is because tasks that are simple on a traditional chart (e.g. "compare two bars") become a complex probabilistic task on a chart with uncertainty. We present guidelines for creating visual annotations for solving tasks with uncertainty, and an implementation that addresses five core tasks on a bar chart. A preliminary user study shows promising results: that users have a justified confidence in their answers with our system.

Author Keywords
Incremental visualization; uncertainty visualization; user study; boxplot; error bars.

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

INTRODUCTION
The goal of data analysis is, in general, to describe attributes of a population based on quantifiable properties. Yet we often interact with samples of data, rather than the full population. Sometimes, samples are employed because processing the entire data set places unacceptable overhead on storage or computing [8, 11]. More often, only a subset of a much larger real-life distribution is available: because the data is a sample by its very nature, such as the results from a survey, or because the instrumentation to obtain the data can only capture a small subset of the data universe [17], such as when only a subset of nodes in a data center run potentially expensive telemetry instrumentation. Despite the ubiquity of samples in data analysis, far too many visualization tools neglect the fact that the data is a sample.

We suspect there to be several reasons for this neglect. Many users are unaware of the importance of seeing their data as a sample. While it is common to generate boxplots to show error bars, and to run statistical tests, these usually are prepared only at the end of an analysis process. Many analysts simply explore their data based on the sample available, looking at averages or sums without taking into account uncertainty. Including statistics and uncertainty in an analysis can add a great deal of complexity to the process and slow it down, but data analysts prioritize rapid iteration for exploration.

Even for knowledgeable users, reasoning in the presence of probabilities and uncertainty can be very challenging [3]. In order to think about samples properly, users need to interpret all questions and conclusions about the data in a probabilistic manner: "is A greater than B?" changes to "what are the chances that A is greater than B?" Even with the aid of specialized visualizations, this task can still be very hard, as Micallef et al showed in their work on visualizing Bayesian probability [15].

Part of the challenge is that showing an uncertain value does not necessarily help users reason about uncertain values. Many visualizations have been adapted for showing uncertainty, ranging from error bars to more exotic tools [21]. These visualizations often focus on specifically showing uncertainty ranges [18]. However, there are many tasks that we understand how to accomplish on non-uncertain charts [1, 2], such as comparing bars to each other, or finding the largest and smallest values; these uncertain visualizations do not directly support them. While it is easy to compare the heights of two bars, it can be difficult to compute the probability of a nearly-overlapping set of uncertainty regions. Previous work has shown that even experts trained in statistics make mistakes when interpreting confidence intervals [6, 7]. All of this suggests the need for a better integration of statistical techniques and interactive visual interfaces to enable data analysts to understand the meaning of sampled data.

In this paper, we take a first step in this direction: we investigate how to adapt the data analysis process to respect samples. In order to do so, we modify analysis tools to allow users to carry out tasks based on quantified uncertainty.


More precisely, we design visual encodings and interactions with the goal of allowing data analysts not only to identify the presence and magnitude of uncertainty, but to carry out common data exploration tasks. We discuss the design space for such visualizations and describe our approach.

We focus on two common visualizations used in exploratory data analysis, bar charts and ranked lists. For each of these, we identify common tasks that are performed on these charts in exploratory data analysis. Users can interact with these charts with task-specific queries; these are shown as annotations and overlays [13] that allow users to carry out these tasks easily and rapidly. Finally, we perform a preliminary user study to assess how our visualizations compare to standard approaches, and to establish whether users are better able to carry out these tasks with uncertain data. We find that our annotations help users to be more confident in their analyses.

BACKGROUND AND RELATED LITERATURE
We discuss common visual analysis tools, including those that do not currently handle uncertainty. Various tools have been suggested that visualize uncertainty; we compare these tools to our approach. Last, we discuss the idea of 'task-driven' visualization.

Visual Data Analysis Ignores Uncertainty
Major exploratory visualization tools available today—such as Tableau, Spotfire, and Microsoft Excel—do not have a built-in concept of samples or uncertainty. Rather, they treat the data presented within the system as the whole population, and so present any numbers computed from the data—sample sums and averages, for example—as precise. However, as Kandel et al note [12], data analysts often deal with samples or selections of data.

Statistical software, such as SPSS and SAS, does have a more sophisticated concept that the data introduced is a sample, and draws its visualizations with error bars and confidence intervals as appropriate. However, these visualizations are usually produced in the process of running an explicit statistical test; by the time this test has been run, the user usually knows what questions they wish to investigate. This is highly effective for hypothesis-testing, but less useful when the user wishes to explore their data.

There is an opportunity, then, to provide lightweight data exploration techniques combined with statistical sampling.

Visualization Techniques that Handle Uncertainty
It can be difficult for users to reason in the presence of probabilistic data: Tversky and Kahneman [21] show that people make incorrect decisions when presented with probabilistic choices. It is possible to make more accurate decisions about data analysis when provided with confidence intervals and sample size information [6]. Unfortunately, the classic visual representations of uncertainty—such as drawing confidence intervals or error bars—do not directly map to statistical precision.

Even experts have difficulty using confidence intervals for tasks beyond reading confidence levels. For example, a common rule of thumb suggests that two distributions are distinct if their 95% confidence intervals just barely overlap. Yet, as Belia et al [3] point out, this corresponds to a t-test value of p < 0.006—the correct interval allows much more overlap. Cumming and Finch [7] further note that most researchers misuse confidence intervals; they discuss "rules of eye" for reading and comparing confidence intervals on printed bar charts. While their suggestions are effective, they require training, and are limited to comparing pairs of independent bars.
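As a rough check on that figure (a back-of-the-envelope sketch, assuming two independent sample means with equal standard error $s$): if their 95% intervals just touch, the means differ by $2 \times 1.96\,s$, while the standard error of the difference is only $\sqrt{2}\,s$, giving

$$ z = \frac{2 \times 1.96\,s}{\sqrt{2}\,s} \approx 2.77, \qquad p \approx 0.006 \ \text{(two-sided)}, $$

which is far stricter than the p = 0.05 criterion the rule of thumb seems to imply.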
While it may be complex, representing uncertainty can help users understand the risk and value of making decisions with data [14]. For example, long-running computations on modern "big data" systems can be expensive; Fisher et al [8] show that analysts can use uncertainty ranges, in the form of confidence intervals on bar charts, to help decide when to terminate an incremental computation.

The idea of visualization techniques that can handle uncertainty is a popular one in the visualization field. Skeels et al [16] provide a taxonomy of sources of uncertainty; in this paper, we refer specifically to quantitative uncertainty derived from examining samples of a population. Olston and Mackinlay [18] suggest a number of different visualizations for quantitative uncertainty, but do not carry out a user study. Three recent user studies [5, 19, 23] examined ways that users understand uncertainty representations. All three studies examine only the tasks of identifying the most certain (or uncertain) values, and do not ask about the underlying data.

Annotating Visualizations to Address Tasks
Beyond identifying the existence of uncertainty, we also want users to be able to carry out basic tasks with charts. To identify what those tasks should be, we turn to Amar et al [1, 2], who identify ten different tasks that can be carried out with basic charts. Their tasks include comparing values to each other, discovering the minimum value of a set of data points, and even adding several points together. All of these tasks are very quick operations on a standard bar chart without uncertainty: comparing two bars, for example, is as easy as deciding which one is higher.

To make chart-reading tasks easier, Kong and Agrawala [13] suggest using overlays to help users accomplish specific tasks on pie charts, bar charts, and line charts. Their overlays are optimized for presentation; they are useful to highlight a specific data point in a chart. In contrast, our approach allows users to read information that would have been very difficult to extract.

UNCERTAIN VISUALIZATIONS FROM SAMPLED DATA
Quantitatively uncertain data can come from many different sources [16]. In this paper, we focus on computations based on samples; however, many of these techniques could be applied more broadly.


We use aggregates because they are common in exploratory data analysis: a core operation in understanding a dataset is examining the filtered and grouped average, sum, and count of a column. Indeed, visualization tools like Tableau are based largely around carrying out these aggregate operations against different groupings.

In sample-based analyses, we carry out approximate versions of these queries: we estimate the expected average, sum, or count of a dataset based on the sample, and infer a distribution on this expected value. Hellerstein et al provide a simple overview of how to use the Central Limit Theorem [11] to estimate error bounds based on these estimators.

As a result, the aggregate value and confidence interval represent a distribution of possible values. One use for this is in incremental analysis [8, 11], in which the system sees cumulative samples from a large dataset, and generates converging estimates of the final value. The distribution for each value represents the possible values once all of the data has been seen. For example, consider the bar chart shown in Figure 1. This chart is based on a sample from a large dataset of sales by year. The 95% confidence intervals mean that we expect—with probability 0.95—the mean value for sales in 1992 to be somewhere between 27,000 and 39,000.

Figure 1: A bar chart with 95% confidence intervals representing the mean value over a dataset. Note the overlapping regions in 1992-1994.
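As an illustration of how such an interval can be derived from a sample alone, here is a minimal sketch (ours, not the system's code) that applies the Central Limit Theorem to a sample's count, mean, and standard deviation; the input values are synthetic and purely illustrative:

```python
import math
import random

def mean_ci(values, z=1.96):
    """CLT-based estimate: the sample mean and an approximate 95% confidence interval."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (n - 1 denominator).
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    se = sd / math.sqrt(n)  # standard error of the mean
    return mean, (mean - z * se, mean + z * se)

# Synthetic stand-in for one year's sampled sales values (illustrative only).
random.seed(0)
sample = [random.gauss(33000, 20000) for _ in range(100)]
mean, (lo, hi) = mean_ci(sample)
print(f"estimated mean {mean:,.0f}, 95% CI [{lo:,.0f}, {hi:,.0f}]")
```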
In this scenario, the analyst's task is to extract information from the probability distributions modeled from the sample. Amar et al [1, 2] collect a series of different tasks that are commonly performed during the exploratory data analysis process. Their list includes low-level tasks like retrieve value, find extrema (minimum and maximum), sort values, and compare values. In a representation without uncertainty, such as an ordinary bar chart, these tasks have direct interpretations: to find the minimum value in the bar chart, for example, the user simply finds the shortest (or most negative) bar.

However, when comparing representations of probability distributions, it may not be so simple to extract this information. Instead of comparing fixed values, the user needs to perform statistical inferences based on the given distributions [7]. Furthermore, a change in mindset is required: instead of asking whether or not a particular fact is true, the analyst can only estimate the likelihood of a fact being true or not.

For example, for the extreme value tasks, the question changes to "what aggregates are likely to be the maximum or minimum?" These cannot be read directly off of a set of bars with uncertain ranges: a user would need to estimate how much uncertainty is represented by error bars, and how likely that makes a maximum or minimum measure. In Figure 1, we can be quite confident that 1995 represents the highest aggregate value; but while it is likely that 1992 is the lowest, there are several other possibilities, too. Several different bars might have overlapping confidence intervals, and so the correct answer might not be a single value, but a distribution.

The visualizations that we discuss in upcoming sections (and shown in Figures 2 and 3) are designed to allow users to answer these questions directly and visually, rather than by making mathematical inferences.

THE VISUAL ANALYSIS ENVIRONMENT
To begin our design, we selected two core data visualizations: the bar chart and the ranked list. Bar charts, of course, are ubiquitous; they are a core part of every visualization toolkit, and are used to represent many sorts of data. Ranked lists are used to represent sorted elements, and often show just the top few bars of a broad histogram. For example, when exploring search logs with millions of entries, a researcher might wish to see the top 10 most-frequent queries. These lists, truncated to the top values, are particularly relevant when the number of distinct results is too high to be shown on a single chart.

Ranked lists are particularly interesting because they can be unstable in an incremental analysis environment. As an incremental system processes increasing amounts of data, its estimate for the top few items can change, sometimes radically. As more data arrives, the top few items gradually stabilize; one at a time, additional items would also stay in place. Gratzl et al [10] present a visual treatment for showing how a ranked list changes across different attributes; their mechanism does not address uncertain rankings.

Uncertain ranked lists can be seen as having a partial order: we are certain that some items will be greater than others, but may be uncertain about other pairwise relationships. Soliman and Ilyas [20] provide a mathematical basis for rapidly evaluating rankings as a partial order; they do not present a user interface for interacting with rankings.

Other visualizations, such as line charts, scatterplots, and parallel coordinates, might also be interesting to examine; we leave those for future work.


Tasks for Visual Analysis
Our goal was to design a visual data analysis environment containing summaries for bar charts and ranked lists that supported sample-based analysis. We selected some particularly relevant tasks from Amar et al [1, 2]. For the bar chart, we support compare pair of bars; find extrema; compare values to a constant; and compare to a range. Amar et al also suggest the task sort values. For the ranked list, we selected two tasks based on sorting a list: identify which item is likely to fall at a given rank, and identify which items are likely to fall between a given pair of rankings. This latter task includes identifying all objects that fall in the top 3, but also every item ranked between 10 and 20.

Computational Framework
It can be challenging to compute the statistical tests required to compare distributions. If we assume independent normal distributions, the simplest operations—such as comparing a distribution with a constant, or comparing two distributions—can be computed using standard techniques such as t-tests. However, there is no simple closed form for many other distributions and tasks.

To address this problem, we have constructed a two-phase computational framework that applies to all of the visualizations. The first phase is an uncertainty quantification phase, in which we estimate the probability distribution of the aggregate we are interested in. As a heuristic, we use the Central Limit Theorem to estimate confidence intervals based on the count, standard deviation, and running average of items we have seen so far. We create one distribution for each aggregate on the chart; we will later interpret these distributions as bars with confidence intervals.

In the second phase, we use these distributions to compute probabilities using a Monte-Carlo approach. (This method is adapted from a technique in the statistical simulation community [9].) We represent each task by a corresponding non-probabilistic predicate (that is, an expression that has a true or false value) that refers to samples. For example, the task 'is the value of the distribution D1 likely to be greater than D2' corresponds to the predicate 'a sample from D1 is greater than a sample from D2.'

From each distribution, we repeatedly draw samples and evaluate the predicate against the samples. We repeat this process a large number of times—in this paper, 10,000 times. We approximate the probability of an event as the fraction of those iterations in which the predicate is true. Table 1 shows an example of this process for two normal distributions D1 and D2 and the predicate D1 > D2. In the simplified example, we take six samples; the predicate is evaluated on each.

Table 1: Evaluating the probability of D1 > D2, where D1 ~ N(5, 9) and D2 ~ N(4, 16), from random samples (S1..S6). The resulting approximation is p(D1 > D2) ≈ 4/6.

             S1      S2      S3      S4      S5      S6
    D1       2.92    7.92    4.38    4.16    12.1    5.15
    D2       5.16    2.26    0.69    3.77    3.43    7.23
    D1 > D2  FALSE   TRUE    TRUE    TRUE    TRUE    FALSE

Although this approach computes only approximate probabilities, it is able to compute general predicates for any probability distributions, with the only requirements that we can draw samples from the distributions and can assume the distributions are independent. While many iterations are needed for precision, given the speed of computing systems, we find in practice that this computation can be done interactively.
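A minimal sketch of this sampling loop (our illustration, not the authors' implementation; reading N(5, 9) and N(4, 16) as mean/variance pairs is our assumption):

```python
import random

def monte_carlo_probability(predicate, samplers, iterations=10_000):
    """Estimate P(predicate) by drawing one sample per distribution each
    iteration and counting how often the predicate holds."""
    hits = sum(1 for _ in range(iterations)
               if predicate([draw() for draw in samplers]))
    return hits / iterations

# The distributions of Table 1; assuming N(5, 9) means mean 5 and variance 9,
# random.gauss receives standard deviations 3 and 4.
d1 = lambda: random.gauss(5, 3)
d2 = lambda: random.gauss(4, 4)

# The task "is D1 likely to be greater than D2?" as a predicate over samples.
p = monte_carlo_probability(lambda s: s[0] > s[1], [d1, d2])
print(f"P(D1 > D2) is approximately {p:.2f}")
```

Raising the iteration count trades time for precision; the paper uses 10,000 draws per query.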
THE DESIGN OF SAMPLE-BASED VISUALIZATIONS
Our goal is to assist data analysts in making decisions about uncertain data. We expect those analysts to be at least familiar with bar charts with confidence intervals, and so our design extends existing familiar visual representations. Our system should allow them to carry out the tasks listed above.

Design Goals
After reviewing literature in visualization and interface design, we settled on these design goals:

Easy to Interpret: Uncertainty is already a complex concept for users to interpret; our visualizations should add minimal additional complexity. One useful test is whether the visualization converges to a simple form when all the data has arrived.

Consistency across Task: One elegant aspect of the classic bar chart is that users can carry out multiple tasks with it. While we may not be able to maintain precisely the same visualization for different uncertain tasks, we would like a user to be able to change between tasks without losing context on the dataset.

Spatial Stability across Sample Size: In the case of incremental analysis [8, 11], where samples grow larger over time, the visualizations should change as little as possible. In particular, it should be possible to smoothly animate between the data at two successive time intervals: changes in the visualization should be proportionate to the size of the change in the data. This reduces display changes that would distract the user for only minor data updates.

Minimize Visual Noise: We would like to ensure that the visualization is not confusing. If the base data is displayed as a bar chart, showing a second bar chart of probabilities is likely to be more confusing than a different visual representation.

To fulfill these criteria, we apply interactive annotations [13] to the base visualizations. The annotations show the results of task-based queries against the dataset. We select particular annotations that we believe will minimize confusion.


Visual Annotations
In this section, we outline the five different task-based annotations that we have created. Each annotation corresponds to a task or group of closely-related tasks. In our prototype interface, a user can select from these annotations; the display adapts appropriately.

Compare Bars to Each Other
The Compare Bars tool is used to directly compare the distributions in the plot. The user selects one of the distributions; the system compares all the distributions against the selected one. Each bar is colored by the probability that its distribution is larger than the selected bar. A divergent color scale ranges from 0% likely—that is, "is definitely smaller"—to 100%, "definitely larger." At the center, we use white coloring to represent "unknown". This tool is illustrated in Figure 2(a).

Identify Minimum and Maximum
The Extrema tool is used to quantify the probability that any bar would be either the maximum or minimum among all the distributions. We compute the probability that each bar represents the minimum; separately, we compute the probability it represents the maximum. The total probability across all bars must equal 100%, and so we map the data to a pair of pie charts. Pie charts avoid the confusion of presenting a second, different bar chart.

A qualitative color mapping is used to identify bars and the regions in the pie charts. We note that this color map would not scale to large numbers of bars. In those cases, we could consider coloring only bars that are candidates for the top position. When even that is infeasible, the ranked list visualization, below, is a better choice. This tool is illustrated in Figure 2(b).
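The extrema probabilities feeding the two pie charts can be estimated with the same Monte Carlo loop; a sketch under assumed per-bar means and standard errors (hypothetical values, loosely in the spirit of Figure 1, not the system's code):

```python
import random

def extrema_probabilities(bars, iterations=10_000):
    """bars: label -> (mean, stderr). Estimate, for each bar, the probability
    that it is the maximum and the probability that it is the minimum."""
    max_hits = {label: 0 for label in bars}
    min_hits = {label: 0 for label in bars}
    for _ in range(iterations):
        draws = {label: random.gauss(mu, se) for label, (mu, se) in bars.items()}
        max_hits[max(draws, key=draws.get)] += 1
        min_hits[min(draws, key=draws.get)] += 1
    return ({l: n / iterations for l, n in max_hits.items()},
            {l: n / iterations for l, n in min_hits.items()})

# Hypothetical yearly aggregates (mean, standard error), not the paper's data.
bars = {"1992": (33000, 3000), "1993": (36000, 3500),
        "1994": (35000, 3000), "1995": (47000, 3000)}
p_max, p_min = extrema_probabilities(bars)
print(p_max)  # these fractions sum to 1.0 and feed the "maximum" pie chart
```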

Figure 2. Four of the tasks and their visual representations. All data is the same as in Figure 1.
(a) Comparing bars to each other. We compare the white bar to the others; dark blue means "certainly below", while dark red means "certainly above."
(b) Identify minimum and maximum: the pie charts show the probability that any given bar could be the maximum or minimum value.
(c) Compare each bar to a fixed value. The user can move the line.
(d) Compare each bar to a range. Dark colors mean "likely to be inside the range", light ones mean "outside the range."


Compare to Constant
This annotation enables users to compare a given value to the probability distributions represented by the error bars. Users drag a horizontal line representing a constant value; the probability that the distribution is larger than this constant value is mapped as a color to the corresponding bar. As with the bin comparison, a divergent color scale is used to represent the full space from "definitely lower" to "definitely higher". The tool is illustrated in Figure 2(c).

Compare to Range
The Range tool is similar to comparing to a constant. It is used to evaluate the probability of a distribution's value falling within a range. Users can drag and scale a horizontal strip. The probability that the distribution represented by the error bar is contained in the region is mapped as a color to the corresponding bar. Unlike the comparison tools, which map to a divergent color scheme, this uses a single-ended palette; it only tests whether the value is likely to be inside or outside the range. This tool is illustrated in Figure 2(d).
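Both of these comparisons reduce to a single predicate evaluated over draws from one bar's inferred distribution; a small sketch with hypothetical numbers (not the system's code):

```python
import random

def prob_in_range(mean, stderr, lo, hi, iterations=10_000):
    """Probability that a bar's underlying value lies inside [lo, hi],
    estimated by sampling its inferred normal distribution."""
    hits = sum(1 for _ in range(iterations)
               if lo <= random.gauss(mean, stderr) <= hi)
    return hits / iterations

# Hypothetical bar: estimated mean 33,000 with standard error 3,000.
print(prob_in_range(33000, 3000, 30000, 40000))              # compare to a range
print(1 - prob_in_range(33000, 3000, float("-inf"), 47000))  # P(value > 47,000), i.e. compare to a constant
```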
Find Items at Given Rank
The Ranked List tool is used for ranking probability distributions. Without uncertainty, a ranked list has a straightforward presentation. Therefore, to maintain the visual analogy, the visual representation resembles a list. Each line of the list is a single rank; the line is populated by the set of items that have some probability of having that rank. The height, width, and color of each rectangle are mapped to the probability of that ranking. Very unlikely results, therefore, shrink to nothing; likely results take up almost all the space. The bars are sorted in a stable order, and so are easier to find between levels. We use the single-ended color scale to highlight regions of certainty (see Figure 3(d)).

Unlike the other annotations discussed here, this view can also be used in a standalone setting, without being displayed next to a bar chart. This is particularly useful when the number of distributions being ranked is large. This tool is illustrated in Figure 3(b).

Find Items within Ranks
The Ranked List tool is also used to find what items fall within a range of ranks. This would allow a user to learn the set of items that are likely to fall in the top five—without regard for individual rank. That set might be very large when sample sizes are small and uncertainty ranges are high. A user can select the rows to be merged and click the "merge" button. At that point, the system displays the probability that the bars will fall within the range (Figure 3(c)).
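The rank probabilities that populate this list, and the merged top-k probabilities, can be estimated in one pass of the same Monte Carlo procedure; a sketch (our illustration, with hypothetical inputs):

```python
import random
from collections import Counter

def rank_probabilities(bars, iterations=10_000):
    """bars: label -> (mean, stderr). Returns {label: {rank: probability}},
    where rank 0 is the largest value in a given iteration."""
    counts = {label: Counter() for label in bars}
    for _ in range(iterations):
        draws = {label: random.gauss(mu, se) for label, (mu, se) in bars.items()}
        for rank, label in enumerate(sorted(draws, key=draws.get, reverse=True)):
            counts[label][rank] += 1
    return {label: {r: c / iterations for r, c in ranks.items()}
            for label, ranks in counts.items()}

# Hypothetical inputs; rank 0 = largest aggregate.
bars = {"1992": (33000, 3000), "1993": (36000, 3500),
        "1994": (35000, 3000), "1995": (47000, 3000)}
probs = rank_probabilities(bars)
print(probs["1995"].get(0, 0.0))                         # P(1995 is ranked first)
print(sum(probs["1993"].get(r, 0.0) for r in range(3)))  # P(1993 in top 3): the "merge" of ranks 0-2
```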
Figure 3: The Ranked List tool shows the probability of rank orders. (a) Standard confidence interval bars for a dataset. (b) The Ranked List visualization corresponding to the bar chart in (a). (c) The Ranked List tool after using the merge operation to compute top-3 probabilities. (d) Ranked List tool row schematic. Height, width, and color are proportional to the probability that this item will fall in this bin. It is nearly certain that 1992 and 1993 will fall in the first three items; 1994 and 1995 divide the rest.

Design Discussion
These visual representations share a number of design concepts and themes.


In a standard bar chart, these tasks can largely be addressed at a glance; in a probabilistic scenario, they require more work.

All the interactions are lightweight: users need only select the tool, and choose the relevant value. With these simple mechanisms, users can interactively perform complex queries on the data. While "compare bar to bar" and "compare bar to bin" can be visually approximated [7], the other tasks simply cannot be done visually.

Our design process considered several alternative visualizations for these tasks. For example, we considered having matrix-like visualizations to compare each bin against the others. While this would reduce the amount of interaction needed, it would massively increase the complexity of the visualization.

The Sort tool has a more complex design compared to the others, although it is conceptually still very simple. It is basically a list, in which every row represents all the possible values of that row. The redundant mapping—probability maps to height, width, and color—is meant to address three distinct problems. By mapping to width, very small bars fall off the chart. By mapping to height, a user can easily read across to find high bars: comparing lengths is much harder. Finally, colors help to highlight regions of the list where the rank is certain.

All the color scales were obtained from ColorBrewer [4].

EVALUATION
We conducted an initial user study in order to evaluate the effectiveness of our design. In particular, we wanted to confirm that our techniques were learnable, interpretable, and potentially valuable. Both qualitative and quantitative feedback would help assess whether these annotations would enable users to make better decisions with greater confidence under uncertainty. Because current charting techniques often neglect confidence intervals, it would be important to allow users to compare our annotations to both plain bar charts and charts that had traditional confidence intervals.

Our working hypotheses are that users with our system will be (H1) more accurate in their answers to these questions, and (H2) more confident in their answers. We do not expect them to be faster to respond, as our method requires additional interaction.

Study Design
Our study was designed to explore a broad space of possibilities in order to understand the use of each of our annotations. We ask about five different question types: compare-to-constant, compare-to-bar, find-minimum, find-maximum, and top-k.

Our study design compares three visual conditions. In the first condition, the user can see only a basic bar chart with neither error bars nor annotations. In the second, we present a bar chart with confidence intervals. In the third, users begin with confidence intervals, but may also turn on the annotations using a menu. The study apparatus is shown in Figure 5. In all conditions, users can see the amount of data that this question represents.

We wished to select a scenario that would closely resemble the ways that users might really deal with this system. Thus, we wanted queries that a user might realistically run, at a reasonable scale, and based on realistic data.

Figure 5: The study apparatus. This user is being asked a question in the error bar condition. The bar at top right shows that
this question is based on 20% of the data.


We selected TPC-H (http://www.tpc.org/tpch), a standard decision support benchmark designed to test the performance of very large databases with realistic characteristics. To generate realistic data, we generated skewed data (with a Zipfian skew factor of z=1, using a standard tool: Program for TPC-H Generation with Skew, ftp://ftp.research.microsoft.com/users/viveknar/TPCDSkew). Part of TPC-H is a series of testing queries with many sample parameters. Different parameters to the query produce different results. We selected one query, Q13, which produces a bar chart of four or five bars. The raw Q13 data table carries 13 million rows.

To simulate an analysis scenario, we randomly sampled the TPC-H tables at five different fractions, from 10% of the data through 50% of the data. Because the Q13 query is very restrictive, each bar only represented a couple of dozen or hundred (and not several million) data points.

A single question, then, is a combination of a question type (see Figure 6), a visual condition (PLAIN, ERROR BARS, or ENHANCED), a sample size, and a parameter to the question.

Our study uses a repeated-measures design. Each user answered 75 questions in random order. We balanced within users by question type, and randomly assigned the other values. Questions were roughly balanced: no user answered fewer than 19 questions in any condition, nor more than 30.

    Is the bin 1995 larger than 47000? (True/False)
    Is the bin 1994 greater than the bin 1995? (True/False)
    Which bar is most likely to be the minimum? (Choice of four)
    What are the most probable top 3 items? (Choice of four)

Figure 6: Sample questions from the user study illustrate the tasks: compare to value, compare bars, find extrema, and ranked list.

We also wanted to understand how certain users were about their answers: we expected the system to make more of a difference in marginal cases where confidence intervals were broad; when confidence intervals are narrow, certainty is less interesting. Users rated confidence on a five-point Likert scale from "completely uncertain" to "completely certain."

For each question, users selected an answer, self-rated their certainty in that answer, and then pressed "next question." We logged the answer, their confidence in the answer, and the time it took to answer. After the experiment, users were presented with a questionnaire to assess their overall user experience.

Participants
As described earlier, our techniques are designed to enhance traditional confidence intervals for data analysts with at least basic training in statistics. While our annotations might also be valuable to non-experts, we wanted to understand the value they provided over confidence intervals.

For this preliminary study, we recruited seven participants. All were male graduate students in computer science; all were generally familiar with reading charts and interacting with data. All had at least basic statistical training, had some familiarity with confidence intervals and error bars, and had used analytics systems.

RESULTS

Comments and Feedback from Users
During the training before the study, all of our subjects learned the system and visualizations quickly and reported that they felt comfortable using them. Users had no difficulty understanding the purpose of the enhancements.

After the study, we debriefed the users. Our users understood all of the annotations. User 2, for example, had avoided dealing with confidence intervals before, as he found them difficult; using our system, he said, "It is good that I don't need to do much thinking." Users were least happy with the sort tool; several complained that it was too complex to use easily. While it was designed to be a variant on a traditional list, it may have added too much material.

We wanted to better understand how users made decisions about their confidence in a visualization. In the baseline PLAIN condition, users had very few cues to guess how broad the confidence intervals were; several reported that they eyeballed their confidence by looking at the progress bar in the top right: they felt more confident with larger dataset sizes, and less confident with smaller ones.

In the annotated condition, in contrast, users had several different cues to judge confidence. Indeed, user 4 complained that in the annotated condition, he had "too many things to consider": sample size, error bars, and annotations. Another user said he did not feel confident in any answer when the sample size was small. This is an interesting misperception: in theory, the sample size should not matter at all to the analysis. Confidence intervals should provide at least as much information as the progress bar would have; our annotations should override confidence intervals. Users still attempted to juggle all three.

Quantitative Results
Because accuracy and confidence are ordered, categorical data, we carried out non-parametric Kruskal-Wallis chi-squared tests to compare accuracy and confidence across conditions.

Overall, our users were very accurate, getting 84% of all questions right. There was no difference in overall accuracy between the three conditions, and so H1 was not supported (χ2 = 2.2968, df = 2, p = 0.3171).
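For readers who want to reproduce this style of comparison, a sketch using scipy's Kruskal-Wallis test on hypothetical per-trial accuracy data (not the study's logs):

```python
# Hypothetical per-trial accuracy (1 = correct) for each visual condition.
from scipy.stats import kruskal

plain      = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
error_bars = [1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
enhanced   = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]

statistic, p_value = kruskal(plain, error_bars, enhanced)
print(f"H = {statistic:.4f}, p = {p_value:.4f}")  # df = number of groups - 1 = 2
```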


We see, however, that users made fewer mistakes with larger samples—virtually no one got questions wrong with the larger sample set, but many did get them wrong with small samples. Figure 7 looks at accuracy by sample size across the three conditions.

Figure 7: Average accuracy by sample size. Across all conditions, users are more accurate with access to more data.

We now turn to confidence. As Figure 8 suggests, users in the ENHANCED condition largely felt more confident in their results than the other users. H2 was supported (χ2 = 32.9335, df = 2, p << 0.001).

Figure 8: Confidence by condition, across all sample sizes and tasks. Users in the ENHANCED condition were more confident in their answers.

We wanted to understand the interaction between confidence and accuracy—we wanted to ensure we did not deliver confidence without accuracy. However, we do not expect our system to deliver accuracy at all levels: we expect our system to provide justified confidence. That is, a user using our system should be confident when they are right, and conversely feel unsure when they do not have sufficient information.

To explore this idea, in Figure 9, we bucket confidence into three categories. In the PLAIN condition, users maintain approximately the same level of confidence: in other words, being right and being confident are unrelated. In contrast, in the ENHANCED condition, the highly-confident users were very likely to be right; the less-confident users were comparatively more likely to be wrong. Not only that, but from the test for H2, we know that users are more likely to be confident with our system. We believe this is good preliminary evidence that our visualization helps encourage justified confidence.

Figure 9: In the three conditions, tallies of confidence against accuracy. Trials in the 'Enhanced' condition with high confidence were more likely to be correct than in the 'Plain' condition.

DISCUSSION & FUTURE WORK
Our annotations did not increase raw accuracy. Instead, we have suggested that they increase what we call "justified confidence." To pursue this further, though, we would need more ambiguous questions: as is reflected by the high accuracy rates, a number of the questions were too easy for users. In future tests of user interaction with uncertainty, it may be worth looking at techniques that would generate questions with more ambiguity.

We have shown how these annotations could be applied to a bar chart with error bars; however, our design principles are very general: almost any aggregate chart type could presumably be adapted to show task annotations. Indeed, we suspect that more complex charts would benefit even more from our techniques.

Similarly, the Monte-Carlo framework that we outline is highly adaptable to other tasks. It could be incorporated into a variety of tasks beyond those in this paper. For example, multiple range tools could be combined to test the likelihood of being within a disjoint union of ranges.

We are currently incorporating the system discussed in this paper within a progressive data processing framework; we hope to make interacting with uncertainty and samples an everyday part of its users' experiences.

CONCLUSION
Many data systems use sampled data, either for progressive computation or because sample data is the available or affordable subset. Drawing confidence intervals can help as a static view, but cannot help users handle more sophisticated queries against their visualized data.

Tasks involving probability and confidence intervals have been shown to be difficult, even for experts. Past work has looked mainly at interpreting whether a given point was uncertain, and how uncertain it is. In this work, we have expanded that to look at techniques that will allow users to make use of that uncertainty—to predict when one value is likely to be higher than another, or to look at the ranked sequence of values. These techniques allow users to directly read the answers to these tasks off of the chart, analogously to the way that non-probabilistic data can be read directly off a bar chart without confidence intervals.
users. In future tests of user interaction with uncertainty, it a bar chart without confidence intervals.


Our experiment suggests that enhancing bar charts with task-specific annotations may indeed help users make decisions about samples. While we did not show in this context that users would be more accurate, we did show that they would be more confident in their accurate responses (and, conversely, would know when not to be confident). This seems a desirable trait in a system based on partial data: we would like analysts to be able to make decisions about when to terminate expensive and slow queries.

The current reliance on variations of the box plot is insufficient for real data fluency—we need to broaden our tools for visualizing uncertainty, not only of individual levels, but of complex operations on data.

ACKNOWLEDGEMENTS
Our thanks to the MSR Big Sky team, who are applying these concepts, and the participants of our study. The first author was partially supported by the National Science Foundation grant MRI-1229185.

REFERENCES
1. R. Amar and J. Stasko. A knowledge task-based framework for design and evaluation of information visualizations. IEEE Symp. on Information Visualization (INFOVIS 2004), pp. 143-150.
2. R. Amar, J. Eagan, J. Stasko. Low-level components of analytic activity in information visualization. IEEE Symp. on Information Visualization (INFOVIS 2005), pp. 111-117.
3. S. Belia, F. Fidler, J. Williams, G. Cumming. Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10(4), 389-396, 2005.
4. C. Brewer, G. W. Hatchard, M. A. Harrower. ColorBrewer in Print: A Catalog of Color Schemes for Maps. Cartography and Geographic Information Science, 30(1), 5-32, 2003.
5. N. Boukhelifa, A. Bezerianos, T. Isenberg, J. D. Fekete. Evaluating Sketchiness as a Visual Variable for the Depiction of Qualitative Uncertainty. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2769-2778, 2012.
6. G. Cumming. Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Routledge, New York, 2012.
7. G. Cumming, S. Finch. Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60(2), 170-180, 2005.
8. D. Fisher, I. Popov, S. M. Drucker, m.c. schraefel. Trust Me, I'm Partially Right: Incremental Visualization Lets Analysts Explore Large Datasets Faster. ACM Conf. on Human Factors in Comp. Systems (CHI 2012), pp. 1673-1682.
9. D. Goldsman, B. Nelson, B. Schmeiser. Methods for Selecting the Best System. Proc. of the 1991 Winter Simulation Conf., 177-186.
10. S. Gratzl, A. Lex, N. Gehlenborg. LineUp: Visual Analysis of Multi-Attribute Rankings. IEEE Trans. on Vis. and Comp. Graphics, 2013.
11. J. Hellerstein, R. Avnur, A. Chou, C. Olston, V. Raman, T. Roth, C. Hidber, P. Haas. Interactive Data Analysis with CONTROL. IEEE Computer, 32(8), 51-59, 1999.
12. S. Kandel, A. Paepcke, J. M. Hellerstein, J. Heer. Enterprise data analysis and visualization: An interview study. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2917-2926.
13. N. Kong, M. Agrawala. Graphical Overlays: Using Layered Elements to Aid Chart Reading. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2631-2638.
14. A. M. MacEachren, A. Robinson, S. Hopper, S. Gardner, R. Murray, M. Gahegan, E. Hetzler. Visualizing geospatial information uncertainty: What we know and what we need to know. Cartography and Geographic Information Science, 32(3), 139-160.
15. L. Micallef, P. Dragicevic, J. D. Fekete. Assessing the Effect of Visualizations on Bayesian Reasoning through Crowdsourcing. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2536-2545, 2012.
16. M. Skeels, B. Lee, G. Smith, G. Robertson. Revealing Uncertainty for Information Visualization. In Proc. of the Working Conf. on Advanced Visual Interfaces (AVI 2008), ACM, 376-379.
17. F. Olken, D. Rotem. Random sampling from database files: a survey. In Proc. of the 5th Int'l Conf. on Statistical and Scientific Database Management (SSDBM 1990), Springer-Verlag, 92-111, 1990.
18. C. Olston, J. Mackinlay. Visualizing data with bounded uncertainty. IEEE Symp. on Information Visualization (INFOVIS 2002), pp. 37-40.
19. J. Sanyal, S. Zhang, G. Bhattacharya, P. Amburn, R. Moorhead. A User Study to Compare Four Uncertainty Visualization Methods for 1D and 2D Datasets. IEEE Trans. on Vis. and Comp. Graphics, 15(6), 1209-1218.
20. M. A. Soliman, I. F. Ilyas. Ranking with uncertain scores. Proc. of the 25th IEEE Int'l Conf. on Data Engineering (ICDE 2009).
21. A. Tversky, D. Kahneman. Judgment under Uncertainty: Heuristics and Biases. Science, 185 (1974), 1124-1131.
22. H. Wickham, L. Stryjewski. 40 Years of Boxplots. Technical Report, http://vita.had.co.nz/, 2012.
23. T. Zuk, S. Carpendale. Visualization of Uncertainty and Reasoning. In Proc. of the 8th Int'l Symp. on Smart Graphics (SG '07), Springer-Verlag, 2007.

