Changing Minds To Changing The World: Mapping The Spectrum of Intent in Data Visualization and Data Arts
2014
Recommended Citation
Murray, S., (2014). Changing Minds to Changing the World: Mapping the Spectrum of Intent in Data Visualization and Data Arts. In
D. Bihanic (Ed.), New Challenges for Data Design (322–347). London: Springer-Verlag.
Changing Minds to Changing the World
Mapping the Spectrum of Intent in Data Visualization and Data Arts
8 December 2013
Scott Murray
DRAFT
This is an early draft of an essay that was included in New Challenges for Data
Design, a collection edited by David Bihanic and published in February 2015. For
the final version, please contact your library or purchase the book from Springer:
http://www.springer.com/us/book/9781447165958
Murray, Scott. “Changing Minds to Changing the World: Mapping the Spectrum of
Intent in Data Visualization and Data Arts.” New Challenges for Data Design. Ed.
David Bihanic. London: Springer-Verlag London Editions, 2015. 322–347. Print.
Introduction
The recent explosion in available data sources and data-processing tools has both
scientists and artists diving into the world of data visualization. The result is a diverse,
interdisciplinary field of practice, in which practitioners cultivate knowledge in other
areas: Statisticians are learning about design, while designers are learning about
statistics. All of these people are producing visualizations of data — objects of visual
communication — but with widely varying intentions and goals for their creations.
One of the most exciting aspects of visualization today is the ease with which
practitioners from different backgrounds collaborate and engage with each other. By
examining the discourse adopted by these practitioners, we can identify what processes
they all have in common, and then map where practices overlap and where they
diverge.
Getting Started
“How should I get started with data visualization?” This increasingly common question
turns out to be quite difficult to answer. Visualization is inherently interdisciplinary; a true
mastery of all its forms would require expertise in:
• visual design
• interaction design
• data analytics
• statistics
• mathematics
• psychology
• computer science
That list doesn’t even include the technical skills required for implementing a project
with specific tools, such as Excel, Tableau, Illustrator, Processing, R, D3, or — more
commonly — some combination of tools, each optimized for a different step in the
process.
Yet none of the practitioners I know are experts in all of the subjects and tools
mentioned above. Many have formal backgrounds in one subject, then dabble in
others. A computer scientist by training may “have a knack” for visual design, or a
designer may discover she also excels at statistics. Thus, we pick and choose, and
draw from whatever skill sets we are inclined to cultivate within the limits of our available
time, interest, and abilities. I find that most people in data visualization are, by nature,
very curious; we would prefer to learn everything and be skilled in all areas, but of
course life gets in the way.
When beginners ask how they can get started, this interdisciplinary quality of the
practice also gets in the way. There is no one best path into visualization; every
practitioner has a different point of entry, such as:
• web design
• graphic design
• industrial design
• architecture
• mathematics
• cognitive science
• computer science
• journalism
With so many possible points of entry, the question is easier to answer on a
personalized, individual level. To someone with a highly technical background, I might
recommend some design books. To a journalist, I could suggest resources on data
analysis and graphical storytelling. But of course even these are generic responses,
and don’t account for the individual’s full range of prior experience. An interdisciplinary
field can be exciting and stimulating for practitioners who are already fully
engaged. But for those just dipping their toes in, it can be frustrating to ask lots of
questions and frequently hear the same answer: “Well, it depends.”
Common Ground
While searching for this common ground, I also intend to propose an informal taxonomy
of practice. Much prior work has been done to classify visual properties and common
visualization elements (e.g., Bertin, Semiology of Graphics, and Segel and Heer,
Narrative Visualization: Telling Stories with Data, 2010), but here I want to explore the
community of practice itself. As the field grows, it becomes increasingly important to
understand the range of its participants.
It is my sense that visualization practitioners, despite our diverse backgrounds and the
interdisciplinary nature of the field, have quite a bit in common — it’s just that we have a
hard time describing exactly what that is. As evidence, I observe that many of the same
people speak at or otherwise attend the following conferences:
• Eyeo Festival
• Resonate
• See Conference
• Strata
• Visualized
• IEEE VIS (formerly VisWeek, includes VAST, InfoVis, and SciVis)
The fact that I’ve placed “creative coder,” “artistic,” and “practical” in quotation marks
indicates that we have a language problem on our hands. This begins with how we
identify ourselves. I have seen practitioners refer to themselves by the following titles:
• data visualizer
• data designer
• designer
• artist
• data artist
• code artist
• generative artist
• creative coder
• creative technologist
• graphics editor
• cartographer
• researcher
• storyteller
Each implies a slightly different emphasis — more fine arts, more code, more data —
but, at gatherings, these people converse freely, communicate well with each other, and
typically avoid using titles altogether. It is a common woe, especially toward the fine
arts end of the spectrum, that these titles are essentially meaningless, except as cues to
other practitioners already “in the know.” The interdisciplinary data artist’s elevator pitch
is often brief and inaccurate, because the nuances of the process are not easily
reducible to summary for outsiders. The result is a more tightly knit (and unintentionally
insular, if still friendly) community.
I witnessed this label-aversion play out at a large scale at the first Eyeo Festival in 2011.
The conference is held in Minneapolis each June, and invites presenters from a range
of fields — data visualization, generative art, installation art, design, computer science.
Its tagline, “Converge to inspire,” is conveniently vague, and as such reflects the event’s
reluctance to pigeonhole its attendees. By the end of the week, I heard many people
describing others not as artists, creative coders, or data visualizers, but as “you know,
the kind of people who would go to Eyeo.” For lack of a better umbrella term, we
resorted to self-reference. I think we can explore this phenomenon, look at the
principles and practices shared in common, and identify a clearer way of describing
ourselves to others.
Mapping the Field
To frame the discussion, I will propose a series of ranges or spectra upon which
practitioners and projects may be situated. For example, in the field of visual
communication, there is an ongoing tension between the terms “art” and “design.”
Art Design
Work deemed to be on the “art” end of the spectrum may, for example, be considered
purely aesthetic, have little or no “functional” purpose, and have little commercial or
“practical” value (except, of course, as fine art, which, I would argue, is as practical a
purpose as any). Work may be on the “design” end of the spectrum if it has obvious
commercial value, communicates a specific message, and functions with an explicit
purpose. Yet “design” is not without aesthetic value, and so shares that element with
“art.” And “art,” such as illustration, may be employed within a “design” context, to
communicate a message larger than the art itself.
At what point does an image cross over between art and design, or vice versa? While
this distinction is in some sense arbitrary, it nevertheless carries value, at least by
forcing us to struggle with the language we use to describe our work and the values we
ascribe to it.
To me, the most meaningful way to make this distinction is to identify the goal or intent
of the creator. For art, the intent may be to elicit a purely aesthetic or emotional
experience from the viewer/participant. For design, the intent is typically to
communicate a specific message to the viewer/participant. So, regardless of the
medium and context, an image made with intent to communicate a particular message
or meaning falls near the “design” end of the spectrum. (This assessment is made
independent of whether or not the design is successful in achieving its creator’s goals.)
An image with intent to elicit an emotional experience (without a specific message) can
be called art (though, to further muddy the waters, art often has a message). Perhaps
this could be simplified even further to say that a work’s position on the spectrum
indicates only the specificity of its intended message. The more open the message, the
more artistic; the more specific, the closer it is to design.
The art/design spectrum, as well as the others I propose below, is presented as a
simple one-dimensional range, but of course, reality is more complex and not suited to such clean
definitions. (This is particularly true given the current rate of change in visualization
practice, and the rapid development of new forms.) Please take these proposals as
tools for framing discourse about the current state of the field, not attempts to define it in
fixed terms.
The first spectrum can be used to evaluate either practitioners or projects, while the
others are specific to individual projects.
Avenues of Practice
To offer an example, I would file Memo Akten on the data arts end of this spectrum.
Akten’s project “Forms,” done with the artist Quayola, is intensely data-driven or data-
derived, yet it is more evocative than explicitly communicative.
Still image from “Forms,” Memo Akten and Quayola, 2012
That said, Akten has done projects for corporate clients, such as the “McLaren P1 Light
Painting,” which I would classify as a data illustration: it functions primarily as an
advertisement for a new automobile, and thus, the communications intent is different
from that of “Forms.”
Still image from “McLaren P1 Light Painting,” Memo Akten and James Medcraft, 2012
Continuing further still to the right edge of this spectrum, we can look at geographic
maps, an area of visualization practice that has undergone massive changes in the past ten
years. Stamen Design in San Francisco, which refers to itself as “a design and
technology studio,” is known for its wide array of explorations in maps. Its “Toner”
tiles are intended for use when a map will be printed and photocopied. As such, they
don’t use any color, and gray areas are rendered with a halftone screen, to improve
reproducibility by analog means. Yet, even with this constraint, the design functions
effectively as a guide for orientation and directions — that is, as a traditional map.
Toner-style map, from MapStack, by Stamen Design, http://maps.stamen.com/m2i/image/
20131117/toner_uwnIcEfPDtw
This particular intent and specificity of communication places the “Toner” map squarely
on the data visualization end of the spectrum. Contrast that with Stamen’s “Watercolor”
tiles, which represent the same underlying data in a completely different form.
Watercolor-style map, from MapStack, by Stamen Design, http://maps.stamen.com/m2i/image/
20131117/watercolor__Q2AD5HJgMk
The “Watercolor” maps are less precise by design, evoking an abstract sense of place
for those already familiar with the place, as opposed to helping orient new visitors to
specific locations within a place (e.g., cities, streets, addresses). So I would file the
“Watercolor” maps on the data arts end of the spectrum.
But what about Stamen as an entity? Where do its designers and technologists fall,
given their influential contributions all along the spectrum?
Contexts
Jonathan Harris and Sep Kamvar’s “I Want You To Want Me” is a computationally
intensive installation created for the Museum of Modern Art’s Design and the Elastic
Mind exhibit in 2008, curated by Paola Antonelli. As a commissioned piece, it was
designed from the beginning for the gallery context. While the artists posted
documentation online, it isn’t feasible to adapt the project for the web, so it remains a
gallery-only, in-person experience.
Still image from “I Want You To Want Me,” Jonathan Harris and Sep Kamvar, interactive touch-
screen installation commissioned for The Museum of Modern Art’s Design and the Elastic Mind,
February 14, 2008.
In contrast, Santiago Ortiz’s innovative portfolio interface was designed specifically for
the online context, and wouldn’t make sense in any other medium. It begins as a grid of
project image thumbnails, but visitors can drag a round cursor to adjust the visual
weight given to projects across three axes: recent, favorites, or all projects. At a glance,
we can watch projects resize to reflect, for example, which ones were completed most
recently versus which ones Ortiz himself enjoys. This form of representation, unlike
Stamen’s “Toner” maps, isn’t intended for print, and would cause confusion in paper
form, due to clipped images and abbreviated text.
Continuing toward the print end of the spectrum, while many visualizations are designed
primarily for print output, the dual approach taken by the New York Times’ Graphics
Desk is slowly becoming more common. At the Times, every graphic must work in both
print and online. Typically, this means designing a default view that communicates the
story. The default view works in the print edition of the paper, and also serves as the
initial view of the online version. Interactivity can be used to make the piece explorable,
enabling readers to dig deeper into specific data values. For example, this graphic on
drought in the US includes annotations that highlight key trends in the data, but the
online version also allows readers to mouse over any section of the graphic to reveal
specific drought levels.
Screenshot of an interactive graphic on the history of drought in the US, The New York Times,
http://www.nytimes.com/interactive/2012/08/11/sunday-review/drought-history.html
Visualizations are often designed with at least one primary target medium in mind.
Those media may be considered along a spectrum of static to interactive.
Static Interactive
“Drought’s Footprint” was only ever intended to be a static image, both for print and on
the website.
“Drought’s Footprint,” Haeyoun Park and Kevin Quealy, The New York Times, July 19, 2012.
More than half of the country was under moderate to extreme drought in June, the largest area
of the contiguous United States affected by such dryness in nearly 60 years; areas under
moderate to extreme drought in June of each year since 1900 are shown in orange. Source:
National Climatic Data Center, National Oceanic and Atmospheric Administration.
http://www.nytimes.com/interactive/2012/07/20/us/drought-footprint.html
Work intended for the screen can be dynamic without being interactive, such as IBM’s
“THINK Exhibit,” which included a large-scale “data visualization wall.” During its
temporary installation in New York, the wall displayed real-time visualizations of data
about the city, such as traffic flows, air quality, and water use. Since the display was
generated live, it was dynamic, although not directly interactive. This stands in contrast
to a pre-recorded video loop, which remains static in the sense that it merely repeats
itself and its imagery does not change over time.
Photo documentation of the IBM THINK Exhibit at Lincoln Center in New York City, developed
by Mirada, opened September 23, 2011. http://mirada.com/stories/ibm
Many screen-based visualizations are interactive, of course, but on the far end of the
spectrum are works that are so dependent upon interaction with participants that,
without it, they essentially cease to exist. “Shadow Monsters,” an installation by Philip
Worthington, for example, begins as nothing but a silent room with a plain white screen.
Only when participants enter the space does the system spring to life, interpreting their
shadows as horned, fanged creatures with creepy hair and nails.
Shadow Monsters, by Philip Worthington, 2004–ongoing. http://moma.org/exhibitions/2008/
elasticmind/#/229
Conceptual Structures
Exploratory tools are used to visualize data for the purposes of discovering what is
interesting and valuable about that data. Explanatory visualizations take a point of view,
and communicate to the viewer some pattern, trend, or discovery already observed.
Exploratory Explanatory
Exploratory visualizations are often interactive, and many tools are designed primarily
for this purpose, such as Tableau and R with ggplot2. Since exploratory designs are
geared toward producing insights, they tend to be more literal and specific than purely
aesthetic. (A “data arts” visualization would not be likely to produce valuable insights.)
“The Dynamic HomeFinder” was one of the very first such exploratory visualization
tools.
Screenshot of “The Dynamic HomeFinder,” in which sliders and buttons in a control panel
filter homes, shown as dots on a map of the District of Columbia area; the query result is
determined by ANDing all sliders and buttons.
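The “dynamic query” idea behind the Dynamic HomeFinder can be sketched in a few lines: each slider contributes a range predicate, and the visible results are the logical AND of all predicates. The following Python sketch uses invented field names and data purely for illustration; it is not the original system’s implementation.

```python
# Invented sample records, loosely modeled on a home-finding tool.
homes = [
    {"price": 250_000, "beds": 2, "dist_a": 5.0},
    {"price": 410_000, "beds": 4, "dist_a": 2.5},
    {"price": 320_000, "beds": 3, "dist_a": 8.0},
]

def dynamic_query(records, **ranges):
    """Keep records where every named field falls within its (lo, hi) range.

    Each keyword argument plays the role of one slider; the result set
    is the AND of all range predicates, as in dynamic-query interfaces.
    """
    def matches(record):
        return all(lo <= record[field] <= hi for field, (lo, hi) in ranges.items())
    return [r for r in records if matches(r)]

# Two "sliders" set: price between 200k-350k, and 2-3 bedrooms.
hits = dynamic_query(homes, price=(200_000, 350_000), beds=(2, 3))
```

In an interactive tool, moving a slider would simply re-run the query and redraw the dots, which is what makes the exploration feel immediate.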
Explanatory visualizations are more focused, with constraints imposed on the viewer and
design elements chosen to increase the specificity of what is communicated. Discussions of
visualization as storytelling are referring to explanatory images and interfaces.
Journalistic graphics are typically very strong in this regard, such as “The Cost of
Water,” a piece I worked on for the Texas Tribune, which explores why, despite record
droughts, water is relatively inexpensive in Texas.
Still image from “The Cost of Water,” a collaboration between Scott Murray, Geoff McGhee, and
Kate Galbraith of The Texas Tribune, June 8, 2012. http://www.texastribune.org/library/data/
cheap-water-in-texas/
Some visualizations or tools, of course, try to serve both exploratory and explanatory
functions. Often, this takes the form of a default explanatory view, with interactivity
then enabling independent exploration.
Goals
Finally, each project is created with different goals, which may be placed somewhere on
the spectrum of inspire to inform.
Inspire Inform
On the inspiring end of the spectrum, we may find “Tape Recorders, Subsculpture 12,”
an installation by Rafael Lozano-Hemmer. Sensors track the presence of visitors to
the space, and the length of their visits is expressed through the lengths of tape
measures. The work is data-driven, but the individual data values are meaningless; the
aesthetic and emotional experiences are what matter.
On the informative end of the spectrum, we find wholly uninspiring charts and graphs,
like this example from The Economist. This is in no way to pick on The Economist;
when communicating specific data values, it is not necessary to inspire or delight. The
chart below efficiently communicates the rise of text messaging, and includes several
annotations that offer context from historically relevant moments. This chart is intended to
inform, and it does so successfully.
“OMG! Texting turns twenty,” Economist.com, December 3, 2012. http://www.economist.com/
blogs/graphicdetail/2012/12/daily-chart
Many projects, especially in data journalism, aim for a balance of inspiring and informing
— such as when informing is essential, but achieving that end requires also engaging
the reader on an emotional level.
One such landmark project is “We Feel Fine,” another piece by Jonathan Harris and
Sep Kamvar. Made in 2005, “We Feel Fine” is one of the earliest online interactive
visualizations, and it remains just as potent almost a decade later. It doesn’t hurt that the data
behind the project are themselves all about emotions and the human experience.
Still image from “We Feel Fine,” Jonathan Harris and Sep Kamvar, 2005. http://
www.wefeelfine.org
More recently, Periscopic’s “US Gun Deaths” interactive visualization poetically and
powerfully documents lives lost to gun violence — and projects an alternative future in
which victims live out the rest of their lives (as algorithmically projected). The work
performs a dual role, both informing us of the scale of tragedy as well as inspiring us to
reflect upon and debate the significance of so many lives cut short.
Still image from “US Gun Deaths,” Periscopic, 2013. http://guns.periscopic.com/
Having converted our collective assessments to data, we could (of course) visualize the
results. I would recommend a parallel coordinates plot, with each axis oriented
vertically, and horizontal lines connecting the values for each project.
“Ordinal Parallel Axis” example, Kai Chang, 2012, http://bl.ocks.org/syntagmatic/3731388
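The geometry of such a plot is simple: each spectrum becomes one vertical axis, each project becomes one polyline connecting its (normalized) score on every axis. The sketch below, in Python, computes those polyline vertices; the project names, spectrum labels, and 0–10 scores are all invented for illustration, and the vertex lists could be handed to any plotting library.

```python
# Hypothetical 0-10 scores for a few projects on the four spectra
# proposed in this essay (art/design, static/interactive,
# exploratory/explanatory, inspire/inform). Values are invented.
SPECTRA = ["art/design", "static/interactive",
           "exploratory/explanatory", "inspire/inform"]

projects = {
    "Project A": [1, 3, 1, 1],
    "Project B": [9, 5, 8, 9],
    "Project C": [6, 9, 7, 5],
}

def parallel_coordinates(scores, lo=0, hi=10):
    """Map each project's scores to (axis_index, normalized_value) vertices.

    Drawing each vertex list as a polyline across evenly spaced vertical
    axes produces a parallel coordinates plot.
    """
    return {
        name: [(i, (v - lo) / (hi - lo)) for i, v in enumerate(values)]
        for name, values in scores.items()
    }

lines = parallel_coordinates(projects)
```

Filtering, as described above, would amount to drawing only the polylines whose creator or subfield matches the current selection.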
Through interactivity, we could filter the view to show only projects by a particular
creator, or by people from a specific subfield (say, only “researchers” or “artists”). This
could enable us to discover places in the field where practices converge or diverge,
either conforming to or challenging our expectations.
Independent of the visualization, after scoring all projects by a single creator, we could
then calculate a “career average” with which to place them along the data arts / data
visualization spectrum of practice. While acknowledging that we all move between
many roles, it could be useful to see how heavily the field skews toward the arts or the
other direction. (My sense is that there is so much interest in the field right now, from a
diversity of perspectives, that there is a fair balance.)
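The “career average” proposed above is just the mean of a creator’s per-project scores along one spectrum. A minimal sketch, with invented project names and scores:

```python
# Hypothetical 0-10 scores for one creator's projects along the
# data arts / data visualization spectrum. All values are invented.
scores = {"Project A": 2, "Project B": 7, "Project C": 6}

# The career average places the creator as a whole on the spectrum.
career_average = sum(scores.values()) / len(scores)
```

Averaged per creator and then plotted across the whole field, these values would show whether practice skews toward the arts or the other direction.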
This approach reminds me of two recent projects. The first is a visualization by Pitch
Interactive that depicted artists’ careers with colorful star diagrams.
Star diagrams from “McKnight Artist Fellows: Visualizing Artists' Careers,” Pitch Interactive with
The McKnight Foundation, XXXX, http://diagrams.stateoftheartist.org/gallery
Cloud of Twitter account names from the data visualization and creative coding community.
Common Elements
Other than a love of data and visuals, what do all visualization practitioners have in
common?
Tools and Media — We all live and work in the same time, and have access to the same
tools and media for sharing our work. Fine artists use Processing, but so do data
journalists. Statisticians use R, but so do increasing numbers of people from other
fields and specialties. Each tool is designed for a different audience or task, but there is
a surprising amount of crossover between sub-fields. Nearly all of these tools involve
code, so basic programming ability is a must. It is possible to create great visualizations
without code, but it is difficult to articulate new visual forms without it.
Process — We all work with data, defined as structured information. It takes a certain
mindset to appreciate a well-structured, honest data set. Ultimately, we encode that
data into visual form, a process that requires another, similar mindset to appreciate. So
we have data and data-visual mapping in common. But governing each of these steps
are many rules, usually documented as algorithms in the software we write: scripts to
parse data, programs to generate charts and graphs, and applications to share beautiful
renderings with our audiences. The algorithm rules every step. Our core value is a love
and appreciation for process itself.
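The pipeline described above — a script to parse data, then a rule to encode it into visual form — can be sketched in a few lines. This is an illustrative example only, not any particular practitioner's workflow; the data set and the text-bar "rendering" are invented for the sketch:

```python
import csv
import io

# A small, well-structured data set (invented for this sketch).
raw = """country,value
France,48
Japan,31
Brazil,17
"""

# Step 1: a script to parse the data into structured records.
rows = list(csv.DictReader(io.StringIO(raw)))

# Step 2: a rule governing the data-to-visual mapping --
# here, a bar whose length is proportional to the value.
max_value = max(int(r["value"]) for r in rows)

def bar(value, width=20):
    """Scale a value into a bar of at most `width` characters."""
    return "#" * round(width * value / max_value)

# Step 3: render the result for an audience.
for r in rows:
    print(f"{r['country']:>8} {bar(int(r['value']))}")
```

Trivial as it is, the sketch shows where the algorithm rules each step: the parsing, the scaling, and the rendering are all explicit, documented decisions.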
Curiosity — I have never met an incurious practitioner. We love learning and we love
being inspired by discovering things in the world around us, or perceiving old ideas in
new ways. Data visualization is fundamentally about making the invisible visible, a
shared goal for all practitioners. Where our work diverges is in the intent of our process,
and in which means of visual rhetoric we employ to that end.
While the interdisciplinary nature of the practice makes it hard to summarize the field to
outsiders, it is also one of our biggest strengths. By drawing on the discoveries and
expertise of many fields, we can improve our processes and improve our designs. One
concern, of course, is that we may be inclined to learn broadly, but not deeply. Yet, as I
described earlier, many practitioners tend to have formal training in one or two different
areas, but then more loosely explore others.
The interdisciplinary mindset pervades practitioners’ selection and use of tools, methods
(processes), and domains of operation (uses of tools and methods). Data visualization
practitioners are often hired by domain experts (the clients) to interpret and represent
the client’s data. When Pitch Interactive is hired by Popular Science to visualize
historical government projections for energy independence, they are not expected to
have prior knowledge of energy independence. When Stamen partners with the
nonprofit organization Climate Central to map projected sea level rise, no one expects
them to be climate change experts. When Fathom contracts with Thomson Reuters to
map the power dynamics in China’s political sphere, it is the client, not the design firm,
who is expected to bring the domain-specific knowledge (and data, of course) to the
table.
It’s an odd role for a consultant, whose expertise lies not in the specific domain at
hand but in the process of exploring — that is, exploring both the new
information provided by the client as well as a range of visual forms for representing and
communicating that data.
Does this mean that a data designer, if inserted into any industry or context, could bring
value to the organization simply through her interdisciplinary process, even without a
specific end goal in mind? Possibly. Designing is problem-solving, and the process
itself may be just as important and valuable as the resulting product.
Finally, this also explains why practitioners struggle to articulate their daily work to
outsiders: A process is much more abstract and difficult to explain than a product. It is
easy to point to images and say “I made these.” It is much harder to say “Through
years of practice, I have developed a process that guides my decisions and actions,
which results in a successful representation of data, more often than not.” Good luck
offering such an accurate, yet uninteresting explanation in a social context, such as at a
cocktail party — you may not be invited back!
Future Challenges
Looking ahead, what are the future challenges for data design? I see several, each
related to the issues addressed above.
Tools — The vast array of tools available will continue to grow and diversify. So the
problem is not a dearth of tools; it’s cataloging them and making efficient use of the
existing tools appropriate to any given task. It seems every week a new framework or
library is introduced that provides an improved solution to a very specific problem. Just
as we can’t all be experts in every field, we can’t all learn how to use every tool (much
as we would like to). We need a better method to identify the best tools for a given task.
Whatever that method is, it needs to fit into existing workflows.
Methods — Speaking of methods, we each have our own working process, and our
challenge is to develop clearer language around those processes. With better
language, we can compare processes and learn what, exactly, certain practitioners do
that makes their work more successful (or not) than others. What are data design’s best
practices? How similar or different are they when making data art, as opposed to data
visualizations? I hope that this essay is a small step toward framing that discussion.
Data Design Literacy — It is essential to clarify our best practices so that we can
educate new practitioners. I began with the question, “How should I get started with
data visualization?” Students and others new to the field deserve better answers to this
important question. We need maps and taxonomies of practice (which this essay seeks
to introduce), and we need more structure and consistency in our training programs.
Although data design has a long history, in this rapidly changing environment, it often
feels like we are just figuring things out for the first time.
Data Image Literacy — Practitioners are not the only ones who need to be educated;
informed audiences are also essential. The consumers of data design must understand
the possibilities and pitfalls of the images we create. Just as media literacy education
seeks to ensure critical awareness of film, television, and radio, data image literacy is
needed to ensure that the inherent biases of data images are well-understood.
Ethics — While there is a tendency to trust data images as fact, practitioners know that
even minor changes to a design can strongly influence how the underlying story or
information is perceived. Given the ease with which charts and maps can be made to
lie, there may be a need for a professional code of visual ethics, a formalization of
already well-known design principles advocating for representations that align with
human perceptual abilities.
Among hammers, there are minor variations in form, weight, and size. Yet all hammers
share a similar fundamental form. Over time, a builder develops a feel for a particular
hammer, sensing how much force is needed to move a nail into position.
Software-based tools are more diverse. They share only fundamental underpinnings,
such as the use of computation and some common interface conventions. Despite
expressing no obvious physical form, they encourage the development of limited muscle
memory, perhaps for common keyboard shortcuts or method patterns. Over months or
years of use, a favorite tool or suite of tools will often emerge, and a data designer will
gradually develop expertise with that tool, having cultivated a practiced sense for how to
strike a particular type of nail.
Yet with so many software tools available, it can be overwhelming to know where to
begin. New practitioners are not yet attached to any particular tool; they want to choose
an approachable tool, the mastery of which will be transferable to other such tools in the
future. Unfortunately, software is not as straightforward as hammers. Learning to code
in one language may familiarize you with core concepts — variables, arrays, logic,
functions — but switching to another language involves different syntax and methods,
different best practices and frameworks, often a very different way of approaching the
problem entirely. (Worst-case scenario: moving from Python to Java. So many
semicolons!) Every time we switch tools, we have to re-learn how to strike the nail.
Even worse, our favorite software-based tools may change themselves right underneath
our noses, auto-updating to add new features, remove old ones, modify syntax rules, or
change operating requirements. For some people, this would be crazy-making, and
certainly, in the physical world, it would be. Imagine a hammer that, after having been
used successfully for years on multiple projects, is considered “trusty” — a reliable
workhorse that has supported the builder in a variety of scenarios. But this hammer is
an open-source hammer, with a core group of five or six dedicated contributors. They
actively patch bugs and introduce new features, so every few months we get
another point release — Hammer 1.1, Hammer 1.2, and so on. With each release, our
hammer is still recognizable, but functions a bit differently; we must adjust the angle of
our strike. Hammer 2.0 brings new operating requirements; our old, dingy workshop is
no longer supported, so the hammer just sits there, inoperable, until we repaint the
walls, install better lighting, or move to an entirely different neighborhood. Of course,
Hammer 1.9 is still available for download, and we have a hundred copies sitting around
on shelves, but it doesn’t drive nails as quickly, precisely, or elegantly. Also, there is
market pressure; the hot design firms are not interested in practitioners using old
technology.