Sample-Oriented Task-Driven Visualizations: Allowing Users To Make Better, More Confident Decisions
Session: Designing and Understanding Visualizations CHI 2014, One of a CHInd, Toronto, ON, Canada
More precisely, we design visual encodings and interactions with the goal of allowing data analysts not only to identify the presence and magnitude of uncertainty, but to carry out common data exploration tasks. We discuss the design space for such visualizations and describe our approach.

We focus on two common visualizations used in exploratory data analysis, bar charts and ranked lists. For each of these, we identify common tasks that are performed on these charts in exploratory data analysis. Users can interact with these charts with task-specific queries; these are shown as annotations and overlays [13] that allow users to carry out these tasks easily and rapidly. Finally, we perform a preliminary user study to assess how our visualizations compare to standard approaches, and to establish whether users are better able to carry out these tasks with uncertain data. We find that our annotations help users to be more confident in their analyses.

BACKGROUND AND RELATED LITERATURE
We discuss common visual analysis tools, including those that do not currently handle uncertainty. Various tools have been suggested that visualize uncertainty; we compare these tools to our approach. Last, we discuss the idea of 'task-driven' visualization.

Visual Data Analysis Ignores Uncertainty
Major exploratory visualization tools available today—such as Tableau, Spotfire, and Microsoft Excel—do not have a built-in concept of samples or uncertainty. Rather, they treat the data presented within the system as the whole population, and so present any numbers computed from the data—sample sums and averages, for example—as precise. However, as Kandel et al note [12], data analysts often deal with samples or selections of data.

Statistical software, such as SPSS and SAS, does have a more sophisticated concept that the data introduced is a sample, and draws its visualizations with error bars and confidence intervals as appropriate. However, these visualizations are usually produced in the process of running an explicit statistical test; by the time this test has been run, the user usually knows what questions they wish to investigate. This is highly effective for hypothesis testing, but less useful when the user wishes to explore their data.

There is an opportunity, then, to provide lightweight data exploration techniques combined with statistical sampling.

Visualization Techniques that Handle Uncertainty
It can be difficult for users to reason in the presence of probabilistic data: Tversky and Kahneman [21] show that people make incorrect decisions when presented with probabilistic choices. It is possible to make more accurate decisions about data analysis when provided with confidence intervals and sample size information [6]. Unfortunately, the classic visual representations of uncertainty—such as drawing confidence intervals or error bars—do not directly map to statistical precision.

Even experts have difficulty using confidence intervals for tasks beyond reading confidence levels. For example, a common rule of thumb suggests that two distributions are distinct if their 95% confidence intervals just barely overlap. Yet, as Belia et al [3] point out, this corresponds to a t-test value of p < 0.006—the correct interval allows much more overlap. Cumming and Finch [7] further note that most researchers misuse confidence intervals; they discuss "rules of eye" for reading and comparing confidence intervals on printed bar charts. While their suggestions are effective, they require training, and are limited to comparing pairs of independent bars.

While it may be complex, representing uncertainty can help users understand the risk and value of making decisions with uncertain data [14]. For example, long-running computations on modern "big data" systems can be expensive; Fisher et al [8] show that analysts can use uncertainty ranges, in the form of confidence intervals on bar charts, to help decide when to terminate an incremental computation.

The idea of visualization techniques that can handle uncertainty is a popular one in the visualization field. Skeels et al [16] provide a taxonomy of sources of uncertainty; in this paper, we refer specifically to quantitative uncertainty derived from examining samples of a population. Olston and Mackinlay [18] suggest a number of different visualizations for quantitative uncertainty, but do not carry out a user study. Three recent user studies [5, 19, 23] examined ways that users understand uncertainty representations. All three studies examine only the tasks of identifying the most certain (or uncertain) values, and do not ask about the underlying data.

Annotating Visualizations to Address Tasks
Beyond identifying the existence of uncertainty, we also want users to be able to carry out basic tasks with charts. To identify what those tasks should be, we turn to Amar et al [1, 2], who identify ten different tasks that can be carried out with basic charts. Their tasks include comparing values to each other, discovering the minimum value of a set of data points, and even adding several points together. All of these tasks are very quick operations on a standard bar chart without uncertainty: comparing two bars, for example, is as easy as deciding which one is higher.

To make chart-reading tasks easier, Kong and Agrawala [13] suggest using overlays to help users accomplish specific tasks on pie charts, bar charts, and line charts. Their overlays are optimized for presentation; they are useful to highlight a specific data point in a chart. In contrast, our approach allows users to read information that would have been very difficult to extract.

UNCERTAIN VISUALIZATIONS FROM SAMPLED DATA
Quantitatively uncertain data can come from many different sources [16]. In this paper, we focus on computations based
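The overlap figure from Belia et al [3] cited above can be verified with a short calculation. Under the usual normal approximation, if two independent means have equal standard errors and their 95% confidence intervals exactly touch end to end, the two-sided p-value for their difference is roughly 0.006, not 0.05. A sketch of that arithmetic (our illustration; the function name is ours, not from the paper):

```python
from math import erf, sqrt

def p_value_when_cis_touch(se1: float, se2: float, z95: float = 1.959964) -> float:
    """Two-sided p-value for the difference of two independent means
    whose 95% confidence intervals exactly touch end to end."""
    gap = z95 * (se1 + se2)              # mean difference when the intervals just touch
    se_diff = sqrt(se1 ** 2 + se2 ** 2)  # standard error of the difference
    z = gap / se_diff
    phi = 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z
    return 2 * (1 - phi)

print(round(p_value_when_cis_touch(1.0, 1.0), 3))  # 0.006, far below 0.05
```

In other words, "just touching" intervals signal a far stronger difference than the p = 0.05 that many readers assume, which is exactly the misreading Belia et al document.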
Tasks for Visual Analysis
Our goal was to design a visual data analysis environment containing summaries for bar charts and ranked lists that supported sample-based analysis. We selected some particularly relevant tasks from Amar et al [1, 2]. For the bar chart, we support compare pair of bars; find extrema; compare values to a constant; and compare to a range. Amar et al also suggest the task sort values. For the ranked list, we selected two tasks based on sorting a list: identify which item is likely to fall at a given rank, and identify which items are likely to fall between a given pair of rankings. This latter task includes identifying all objects that fall in the top 3, but also every item ranked between 10 and 20.

Computational Framework
It can be challenging to compute the statistical tests required to compare distributions. If we assume independent normal distributions, the simplest operations—such as comparing a distribution with a constant, or comparing two distributions—can be computed using standard techniques such as t-tests. However, there is no simple closed form for many other distributions and tasks.

To address this problem, we have constructed a two-phase computational framework that applies to all of the visualizations. The first phase is an uncertainty quantification phase, in which we estimate the probability distribution of the aggregate we are interested in. As a heuristic, we use the Central Limit Theorem to estimate confidence intervals based on the count, standard deviation, and running average of items we have seen so far. We create one distribution for each aggregate on the chart; we will later interpret these distributions as bars with confidence intervals.

In the second phase, we use these distributions to compute probabilities using a Monte-Carlo approach. (This method is adapted from a technique in the statistical simulation community [9].) We represent each task by a corresponding non-probabilistic predicate (that is, an expression that has a true or false value) that refers to samples. For example, the task 'is the value of the distribution D1 likely to be greater than D2?' corresponds to the predicate 'a sample from D1 is greater than a sample from D2.'

From each distribution, we repeatedly draw samples and evaluate the predicate against the samples. We repeat this process a large number of times—in this paper, 10,000 times. We approximate the probability of an event as the fraction of those iterations in which the predicate is true. Table 1 shows an example of this process for two normal distributions D1 and D2 and the predicate D1 > D2. In the simplified example, we take six samples; the predicate is evaluated on each. Although this approach computes only approximate probabilities, it is able to compute general predicates for any probability distributions, with the only requirements that we can draw samples from the distributions and can assume the distributions are independent. While many iterations are needed for precision, given the speed of computing systems, we find in practice that this computation can be done interactively.

Table 1: Evaluating the probability of D1 > D2, where D1 ~ N(5, 9) and D2 ~ N(4, 16), from six random samples (S1..S6). The resulting approximation is p(D1 > D2) ≈ 4/6.

             S1      S2      S3      S4      S5      S6
  D1         2.92    7.92    4.38    4.16    12.1    5.15
  D2         5.16    2.26    0.69    3.77    3.43    7.23
  D1 > D2    FALSE   TRUE    TRUE    TRUE    TRUE    FALSE

THE DESIGN OF SAMPLE-BASED VISUALIZATIONS
Our goal is to assist data analysts in making decisions about uncertain data. We expect those analysts to be at least familiar with bar charts with confidence intervals, and so our design extends existing familiar visual representations. Our system should allow them to carry out the tasks listed above.

Design Goals
After reviewing literature in visualization and interface design, we settled on these design goals:

Easy to Interpret: Uncertainty is already a complex concept for users to interpret; our visualizations should add minimal additional complexity. One useful test is whether the visualization converges to a simple form when all the data has arrived.

Consistency across Tasks: One elegant aspect of the classic bar chart is that users can carry out multiple tasks with it. While we may not be able to maintain precisely the same visualization for different uncertain tasks, we would like a user to be able to change between tasks without losing context on the dataset.

Spatial Stability across Sample Size: In the case of incremental analysis [8, 11], where samples grow larger over time, the visualizations should change as little as possible. In particular, it should be possible to smoothly animate between the data at two successive time intervals: changes in the visualization should be proportionate to the size of the change in the data. This reduces display changes that would distract the user for only minor data updates.

Minimize Visual Noise: We would like to ensure that the visualization is not confusing. If the base data is displayed as a bar chart, showing a second bar chart of probabilities is likely to be more confusing than a different visual representation.

To fulfill these criteria, we apply interactive annotations [13] to the base visualizations. The annotations will show the results of task-based queries against the dataset. We select particular annotations that we believe will minimize confusion.
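The two-phase framework described above can be sketched in a few lines. This is our own illustration, not code from the system; the helper names clt_distribution and monte_carlo_probability are hypothetical:

```python
import random
from statistics import NormalDist

def clt_distribution(count, running_mean, running_stdev):
    """Phase 1: approximate an aggregate's sampling distribution from
    the count, running average, and standard deviation seen so far,
    using the Central Limit Theorem."""
    return NormalDist(mu=running_mean, sigma=running_stdev / count ** 0.5)

def monte_carlo_probability(predicate, dists, iterations=10_000, seed=0):
    """Phase 2: estimate P(predicate) as the fraction of iterations in
    which the predicate is true of one sample drawn per distribution."""
    rng = random.Random(seed)
    hits = sum(
        predicate([rng.gauss(d.mean, d.stdev) for d in dists])
        for _ in range(iterations)
    )
    return hits / iterations

# Table 1's task: D1 ~ N(5, 9) and D2 ~ N(4, 16), i.e. variances 9 and 16.
d1, d2 = NormalDist(5, 3), NormalDist(4, 4)
p = monte_carlo_probability(lambda s: s[0] > s[1], [d1, d2])
```

For this pair of normals the answer also has a closed form, Phi(1/5) ≈ 0.58, and the 10,000-iteration estimate lands close to it; Table 1's six-sample run is the same procedure at far lower precision. The point of the sampling formulation is that it works unchanged for predicates and distributions with no closed form.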
Figure 2. Four of the tasks and their visual representations. All data is the same as in Figure 1. (a) Comparing bars to each other: we compare the white bar to the others; dark blue means "certainly below", while dark red means "certainly above." (b) Identify minimum and maximum: the pie charts show the probability that any given bar could be the maximum or minimum value. (c) Compare each bar to a fixed value: the user can move the line. (d) Compare each bar to a range: dark colors mean "likely to be inside the range", light ones mean "outside the range."
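When the per-bar estimates are normal, the probabilities behind the colors of Figure 2(c) and 2(d) also have simple closed forms through the normal CDF; the Monte-Carlo scheme described earlier generalizes the same quantities beyond normals. A minimal sketch (our illustration, with hypothetical function names):

```python
from statistics import NormalDist

def p_above_constant(dist: NormalDist, c: float) -> float:
    """Figure 2(c): probability that a bar's value exceeds the dragged constant."""
    return 1.0 - dist.cdf(c)

def p_within_range(dist: NormalDist, lo: float, hi: float) -> float:
    """Figure 2(d): probability that a bar's value falls inside the dragged strip."""
    return dist.cdf(hi) - dist.cdf(lo)

bar = NormalDist(mu=47_500, sigma=800)  # a hypothetical bar estimate
above = p_above_constant(bar, 47_000)   # probability mapped to the bar's color
```

Each probability is then mapped to a color: a diverging scale for the constant comparison, a single-ended scale for the range test, as described below.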
Compare to Constant
This annotation enables users to compare a given value to the probability distributions represented by the error bars. Users drag a horizontal line representing a constant value; the probability that the distribution is larger than this constant value is mapped as a color to the corresponding bar. As with the bin comparison, a divergent color scale is used to represent the full space from "definitely lower" to "definitely higher". The tool is illustrated in Figure 2(c).

Compare to Range
The Range tool is similar to comparing to a constant. It is used to evaluate the probability of a distribution's value falling within a range. Users can drag and scale a horizontal strip. The probability that the distribution represented by the error bar is contained in the region is mapped as a color to the corresponding bar. Unlike the comparison tools, which map to a divergent color scheme, this uses a single-ended palette; it only tests whether the value is likely to be inside or outside the range. This tool is illustrated in Figure 2(d).

Find Items at Given Rank
The Ranked List tool is used for ranking probability distributions. Without uncertainty, a ranked list has a straightforward presentation. Therefore, to maintain the visual analogy, the visual representation resembles a list. Each line of the list is a single rank; the line is populated by the set of items that have some probability of having that rank. The height, width, and color of each rectangle are mapped to the probability of that ranking. Very unlikely results, therefore, shrink to nothing; likely results take up almost all the space. The bars are sorted in a stable order, and so are easier to find between levels. We use the single-ended color scale to highlight regions of certainty (see Figure 3(d)).

Unlike the other annotations discussed here, this view can also be used in a standalone setting, without being displayed next to a bar chart. This is particularly useful when the number of distributions being ranked is large. This tool is illustrated in Figure 3(b).

Find Items within Ranks
The Ranked List tool is also used to find what items fall within a range of ranks. This would allow a user to learn the set of items that are likely to fall in the top five—without regard for individual rank. That set might be very large when sample sizes are small and uncertainty ranges are high. A user can select the rows to be merged and click the "merge" button. At that point, the system displays the probability that the bars will fall within the range (Figure 3(c)).

Figure 3: The Ranked List tool shows the probability of rank orders. (d) Ranked List tool row schematic: height, width, and color are proportional to the probability that this item will fall in this bin. It is nearly certain that 1992 and 1993 will fall in the first three items; 1994 and 1995 divide the rest.

Design Discussion
These visual representations share a number of design concepts and themes. In a standard bar chart, these tasks can
largely be addressed at a glance; in a probabilistic scenario, it requires more work.

All the interactions are lightweight: users need only select the tool, and choose the relevant value. With these simple mechanisms, users can interactively perform complex queries on the data. While "compare bar to bar" and "compare bar to bin" can be visually approximated [7], the other tasks simply cannot be done visually.

Our design process considered several alternative visualizations for these tasks. For example, we considered having matrix-like visualizations to compare each bin against the others. While this would reduce the amount of interaction needed, it would massively increase the complexity of the visualization.

The Sort tool has a more complex design compared to the others, although it is conceptually still very simple. It is basically a list, in which every row represents all the possible values of that row. The redundant mapping—probability maps to height, width, and color—is meant to address three distinct problems. By mapping to width, very small bars fall off the chart. By mapping to height, a user can easily read across to find high bars: comparing lengths is much harder. Finally, colors help to highlight regions of the list where the rank is certain.

All the color scales were obtained from ColorBrewer [4].

EVALUATION
We conducted an initial user study in order to evaluate the effectiveness of our design. In particular, we wanted to confirm that our techniques were learnable, interpretable, and potentially valuable. Both qualitative and quantitative feedback would help assess whether these annotations would enable users to make better decisions with greater confidence under uncertainty. Because current charting techniques often neglect confidence intervals, it would be important to allow users to compare our annotations to both plain bar charts and to charts that had traditional confidence intervals.

Our working hypotheses are that users with our system will be (H1) more accurate in their answers to these questions, and (H2) more confident in their answers. We do not expect them to be faster to respond, as our method requires additional interaction.

Study Design
Our study was designed to explore a broad space of possibilities in order to understand the use of each of our annotations. We ask about five different question types: compare-to-constant, compare-to-bar, find-minimum, find-maximum, and top-k.

Our study design compares three visual conditions. In the first condition, the user can see only a basic bar chart with neither error bars nor annotations. In the second, we present a bar chart with confidence intervals. In the third, users begin with confidence intervals, but may also turn on the annotations using a menu. The study apparatus is shown in Figure 5. In all conditions, users can see the amount of data that this question represents.

Figure 5: The study apparatus. This user is being asked a question in the error bar condition. The bar at top right shows that this question is based on 20% of the data.

We wished to select a scenario that would closely resemble the ways that users might really deal with this system. Thus, we wanted queries that a user might realistically run, at a reasonable scale, and based on realistic
data. We selected TPC-H¹, a standard decision support benchmark designed to test performance of very large databases with realistic characteristics. To generate realistic data, we generated skewed data (with a Zipfian skew factor of z=1, using a standard tool²). Part of TPC-H is a series of testing queries with many sample parameters. Different parameters to the query produce different results. We selected one query, Q13, which produces a bar chart of four or five bars. The raw Q13 data table carries 13 million rows.

¹ http://www.tpc.org/tpch
² Program for TPC-H Generation with Skew: ftp://ftp.research.microsoft.com/users/viveknar/TPCDSkew

To simulate an analysis scenario, we randomly sampled the TPC-H tables at five different fractions, from 10% of the data through 50% of the data. Because the Q13 query is very restrictive, each bar only represented a couple of dozen or hundred (and not several million) data points.

A single question, then, is a combination of a question type (see Figure 6), a visual condition (PLAIN, ERROR BARS, or ENHANCED), a sample size, and a parameter to the question.

Our study uses a repeated-measures design. Each user answered 75 questions in random order. We balanced within users by question type, and randomly assigned the other values. Questions were roughly balanced: no user answered fewer than 19 questions in any condition, nor more than 30.

Is the bin 1995 larger than 47000? (True/False)
Is the bin 1994 greater than the bin 1995? (True/False)
Which bar is most likely to be the minimum? (Choice of four)
What are the most probable top 3 items? (Choice of four)

Figure 6: Sample questions from the user study illustrate the tasks: compare to value, compare bars, find extrema, and ranked list.

We also wanted to understand how certain users were about their answers: we expected the system to make more of a difference in marginal cases where confidence intervals were broad; when confidence intervals are narrow, certainty is less interesting. Users rated confidence on a five-point Likert scale from "completely uncertain" to "completely certain."

For each question, the user selected an answer, self-rated their certainty in that answer, and then pressed "next question." We logged the answer, their confidence in the answer, and the time it took to answer. After the experiment, users were presented with a questionnaire to assess their overall user experience.

Participants
As described earlier, our techniques are designed to enhance traditional confidence intervals for data analysts with at least basic training in statistics. While our annotations might also be valuable to non-experts, we wanted to understand the value they provided over confidence intervals.

For this preliminary study, we recruited seven participants. All were male graduate students in computer science; all were generally familiar with reading charts and interacting with data. All had at least basic statistical training, some familiarity with confidence intervals and error bars, and had used analytics systems.

RESULTS

Comments and Feedback from Users
During the training before the study, all of our subjects learned the system and visualizations quickly and reported that they felt comfortable using them. Users had no difficulty understanding the purpose of the enhancements.

After the study, we debriefed the users. Our users understood all of the annotations. User 2, for example, had avoided dealing with confidence intervals before, as he found them difficult; using our system, he said, "It is good that I don't need to do much thinking." Users were least happy with the sort tool; several complained that it was too complex to use easily. While it was designed to be a variant on a traditional list, it may have added too much material.

We wanted to better understand how users made decisions about their confidence in a visualization. In the baseline PLAIN condition, users had very few cues to guess how broad the confidence intervals were; several reported that they eyeballed their confidence by looking at the progress bar in the top right: they felt more confident with larger dataset sizes, and less confident with smaller ones.

In the annotated condition, in contrast, users had several different cues to judge confidence. Indeed, user 4 complained that in the annotated condition, he had "too many things to consider": sample size, error bars, and annotations. Another user said he did not feel confident in any answer when the sample size was small. This is an interesting misperception: in theory, the sample size should not matter at all to the analysis. Confidence intervals should provide at least as much information as the progress bar would have; our annotations should override confidence intervals. Users still attempted to juggle all three.

Quantitative Results
Because accuracy and confidence are ordered, categorical data, we carried out a non-parametric Kruskal-Wallis chi-squared test to compare accuracy and confidence across conditions.

Overall, our users were very accurate, getting 84% of all questions right. There was no difference in overall accuracy between the three conditions, and so H1 was not supported (χ² = 2.2968, df = 2, p = 0.3171). We see, however, that users
made fewer mistakes with larger samples—virtually no one got questions wrong with the larger sample set, but many did get them wrong with small samples. Figure 7 looks at accuracy by sample size across the three conditions.

may be worth looking at techniques that would generate questions with more ambiguity.

We have shown how these annotations could be applied to a bar chart with error bars; however, our design principles are very general: almost any aggregate chart type could presumably be adapted to show task annotations. Indeed, we suspect that more complex charts would benefit even more from our techniques.
Our experiment suggests that enhancing bar charts with task-specific annotations may indeed help users make decisions about samples. While we did not show in this context that users would be more accurate, we did show that they would be more confident in their accurate responses (and, conversely, would know when not to be confident). This seems a desirable trait in a system based on partial data: we would like analysts to be able to make decisions about when to terminate expensive and slow queries.

The current reliance on variations of the box plot is insufficient for real data fluency—we need to broaden our tools for visualizing uncertainty, not only of individual levels, but of complex operations on data.

ACKNOWLEDGEMENTS
Our thanks to the MSR Big Sky team, who are applying these concepts, and to the participants of our study. The first author was partially supported by National Science Foundation grant MRI-1229185.

REFERENCES
1. R. Amar, J. Stasko. A knowledge task-based framework for design and evaluation of information visualizations. IEEE Symp. on Information Visualization (INFOVIS 2004), 143-150, 2004.
2. R. Amar, J. Eagan, J. Stasko. Low-level components of analytic activity in information visualization. IEEE Symp. on Information Visualization (INFOVIS 2005), 111-117, 2005.
3. S. Belia, F. Fidler, J. Williams, G. Cumming. Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10(4), 389-396, 2005.
4. C. Brewer, G. W. Hatchard, M. A. Harrower. ColorBrewer in print: A catalog of color schemes for maps. Cartography and Geographic Information Science, 30(1), 5-32, 2003.
5. N. Boukhelifa, A. Bezerianos, T. Isenberg, J. D. Fekete. Evaluating sketchiness as a visual variable for the depiction of qualitative uncertainty. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2769-2778, 2012.
6. G. Cumming. Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. Routledge, New York, 2012.
7. G. Cumming, S. Finch. Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60(2), 170-180, 2005.
8. D. Fisher, I. Popov, S. M. Drucker, m.c. schraefel. Trust me, I'm partially right: Incremental visualization lets analysts explore large datasets faster. ACM Conf. on Human Factors in Computing Systems (CHI 2012), 1673-1682, 2012.
9. D. Goldsman, B. Nelson, B. Schmeiser. Methods for selecting the best system. Proc. of the 1991 Winter Simulation Conf., 177-186, 1991.
10. S. Gratzl, A. Lex, N. Gehlenborg. LineUp: Visual analysis of multi-attribute rankings. IEEE Trans. on Vis. and Comp. Graphics, 2013.
11. J. Hellerstein, R. Avnur, A. Chou, C. Olston, V. Raman, T. Roth, C. Hidber, P. Haas. Interactive data analysis with CONTROL. IEEE Computer, 32(8), 51-59, 1999.
12. S. Kandel, A. Paepcke, J. M. Hellerstein, J. Heer. Enterprise data analysis and visualization: An interview study. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2917-2926, 2012.
13. N. Kong, M. Agrawala. Graphical overlays: Using layered elements to aid chart reading. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2631-2638, 2012.
14. A. M. MacEachren, A. Robinson, S. Hopper, S. Gardner, R. Murray, M. Gahegan, E. Hetzler. Visualizing geospatial information uncertainty: What we know and what we need to know. Cartography and Geographic Information Science, 32(3), 139-160, 2005.
15. L. Micallef, P. Dragicevic, J. D. Fekete. Assessing the effect of visualizations on Bayesian reasoning through crowdsourcing. IEEE Trans. on Vis. and Comp. Graphics, 18(12), 2536-2545, 2012.
16. M. Skeels, B. Lee, G. Smith, G. Robertson. Revealing uncertainty for information visualization. Proc. of the Working Conf. on Advanced Visual Interfaces (AVI 2008), ACM, 376-379, 2008.
17. F. Olken, D. Rotem. Random sampling from database files: A survey. Proc. of the 5th Int'l Conf. on Statistical and Scientific Database Management (SSDBM 1990), Springer-Verlag, 92-111, 1990.
18. C. Olston, J. Mackinlay. Visualizing data with bounded uncertainty. IEEE Symp. on Information Visualization (INFOVIS 2002), 37-40, 2002.
19. J. Sanyal, S. Zhang, G. Bhattacharya, P. Amburn, R. Moorhead. A user study to compare four uncertainty visualization methods for 1D and 2D datasets. IEEE Trans. on Vis. and Comp. Graphics, 15(6), 1209-1218, 2009.
20. M. A. Soliman, I. F. Ilyas. Ranking with uncertain scores. Proc. of the 25th IEEE Int'l Conf. on Data Engineering (ICDE 2009), 2009.
21. A. Tversky, D. Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131, 1974.
22. H. Wickham, L. Stryjewski. 40 years of boxplots. Technical report, http://vita.had.co.nz/, 2012.
23. T. Zuk, S. Carpendale. Visualization of uncertainty and reasoning. Proc. of the 8th Int'l Symp. on Smart Graphics (SG '07), Springer-Verlag, 2007.