Data Visualization in R
Quantitative Methods in Global Health: Mini-Conference
Section 0: Introduction
During the course of your work, you come up with interesting research questions, discuss ways of gathering data with your collaborators to answer your question, and
want to use numbers and charts to communicate your results and simplify complex ideas.
Now you might wonder: how do I actually do this myself? When I gather information, how do I work with it to make sense of things, and how can I produce visuals to
communicate my results with others?
You can do this with programming and data visualization. A common programming language that we will be using is called R .
You can download R from this site, and install it the same way you install any other program:
https://cloud.r-project.org
Choose a mirror (webserver) that is close to you geographically.
RStudio shows us our code, outputs and plots. It also makes it easy to get help information about how an R function works. Let’s start by looking at the four parts of
the RStudio interface.
Top Left: This is the editor. It is where you write and save your code. If you highlight a portion of code and press Control + Enter (or, if you’re on a Mac,
Command + Enter ), the highlighted code will run in the console.
Bottom Left: This is the console. It shows you the code you’ve just run and what the result is.
Top Right: This is the environment. If you create or store any objects in your code, they appear here. The history contains all of the commands you sent to the
console.
Bottom Right: This is the plot window. If you create plots, they appear here. It is also where help information appears. Files shows the directory where R thinks
your workspace is, and Help shows you the help files and documentation.
We call it a ‘script’ because, like a script in a play, this file will become a set of ‘instructions’ you give to the computer to tell it what to do. It is best to write and run
code from a script in the editor, so that you keep track of what you’ve done.
Type the following header into the new R file, and then save the file somewhere you’ll remember.
# [Your Name]
# [The Date]
Notice that typing the hashtag symbol changes the color of the text. This turns the line into a comment, so it will not run like other code. We use comments to organize
and write helpful notes to ourselves in our code.
Below your header, type 2+2 , highlight it, and press Ctrl + Enter . This runs the code, and you should see the result in the console. You can do more calculations
such as the ones below:
2+3*4
2+(3*4)
(2+3)*4
(15/3)*5-2
What if you want to store the results of a calculation, and use them in another calculation? For that, we create what are called variables using either <- or = .
#Assign a number
x <- 5
y <- 3
You should immediately notice that in the top right panel, the variable x has been stored in your environment and has the value 5 , and y has been stored with the
value 3 . Use this panel to keep track of the variables you’ve created!
Now, you can use the stored values to perform additional calculations. Let’s try a few. In this handout, the code we want to run is surrounded by a little grey
box, and the output resulting from that code shows up on the following line starting with ## [1] . If you’re following along with the examples, you only need to
type into the computer what’s in the grey boxes; you do not need to type any line that starts with ## [1] , because that is what the computer will give back to
you!
x+y
[1] 8
x-y
[1] 2
x*y
[1] 15
x/y
[1] 1.666667
x
[1] 5
y
[1] 3
Notice none of these calculations changed the value of x or y because we did not reassign new values to them. x is still 5, and y is still 3.
However, if we do reassign a different value to x or y , the old value is forgotten and replaced by the new value. It’s important to keep track of changes you make to
variables!
x <- x + 4
x
[1] 9
x + y
[1] 12
y <- 8
x + y
[1] 17
In addition to numbers, R can also work with and store letters and other characters. We call these strings, and they can be a wide range of values, such as words,
names, sentences, paragraphs, passwords, etc. We do this by putting the string we want to store inside quotation marks like this:
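z <- "z is a string variable"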
To be clear: the variable is z without quotation marks, and the value that z stores is the string "z is a string variable" which has quotation marks.
Lastly, variables don’t just have to be called x or y ; they can be named with any unbroken set of letters, numbers, and underscores (no spaces), like height ,
var3 , my_location , or final_value . It’s best to call them something meaningful so you remember what they represent.
1. Install the package using install.packages("[name of package]") . You only need to do this the very first time you ever use a new package.
2. Load the package using library("[name of package]") . You need to do this every time you close and reopen RStudio. You can load the package only after
you have installed the package.
3. If you want to follow along with the workshop, you should install the following packages with this line of code: install.packages(c('dplyr', 'tidyverse', 'ggplot2', 'dslabs', 'gapminder', 'ggthemes', 'ggrepel', 'gridExtra', 'RColorBrewer', 'rnaturalearth', 'rnaturalearthdata', 'sf'))
For example, let’s load a new package to help with graphics, called ggplot2 .
#install.packages("ggplot2") #We've commented out this line because we already ran it before.
library("ggplot2")
Next, use ?ggplot to open the help file for the ggplot function, and click the link that says “Index” at the bottom to look up all of the functions contained in the new
package.
x <- c(1,2,3,4)
y <- c(2,4,6,8)
qplot(x, y)
qplot(1:10, letters[1:10])
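Note that newer versions of ggplot2 have deprecated qplot() ; the same scatterplot can be made with the full ggplot() interface, which is what we use for the rest of this workshop:

ggplot(data.frame(x = x, y = y), aes(x, y)) +
  geom_point()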
Some packages include datasets. For example, the dslabs package contains datasets that we will be using for data visualization.
#install.packages("dslabs") #We've commented out this line because we already ran it before.
#command the very first time you want to use a new package
#then you can start with the 'library' command every time after
library("dslabs")
head(gapminder) #'head' is a function that shows the first few rows of data.
tail(gapminder) #'tail' is a function that shows the last few rows of data.
#?gapminder
We can load an existing dataset gapminder from the dslabs package and look at the first few rows of data using head() or the last few rows of data using
tail() . We can use ?gapminder to open the help file for the gapminder dataset to see what variables the dataset contains.
One final note: packages can be written and uploaded by anyone, so if you are using a new package to do something important, make sure you trust where the
package is coming from. One way to do this is by searching the package name online and finding out about the people who wrote it.
Small datasets are commonly stored as Excel files. Although there are R packages designed to read Excel (xls) format, you generally want to avoid this format and
save files as comma delimited (Comma-Separated Value/CSV) or tab delimited (Tab-Separated Value/TSV/TXT) files. These plain-text formats make it easier to share
data since commercial software is not required for working with the data.
The first step is to find the file containing your data and know its path. When you are working in R it is useful to know your working directory. This is the folder in which
R will save or look for files by default. You can see your working directory by typing:
getwd()
You can also change your working directory using the function setwd . Or you can change it through RStudio by clicking on “Session”.
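For example (the folder path here is hypothetical):

setwd("~/Documents/data_viz_workshop")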
The functions that read and write files (there are several in R) assume you mean to look for files or write files in the working directory. Our recommended approach for
beginners will have you reading and writing to the working directory. However, you can also type the full path, which will work independently of the working directory.
We have included Covid-19 data from the New York Times (https://github.com/nytimes/covid-19-data) in a CSV file. We recommend
placing your data in your working directory.
You should be able to see the file in your working directory and can check using:
list.files()
[1] "covid-shiny"
[2] "Data-Viz-Figures"
[3] "Data_Viz_Examples.R"
[4] "Data_Viz_Workshop_2022.html"
[5] "Data_Viz_Workshop_2022.rmd"
[6] "Data_Viz_Workshop_2022_files"
[7] "Data_Viz_Workshop_2022_Intro.html"
[8] "Data_Viz_Workshop_2022_Intro.rmd"
[9] "Data_Viz_Workshop_2022_Slides.html"
[10] "Data_Viz_Workshop_2022_Slides.pdf"
[11] "Data_Viz_Workshop_2022_Slides.rmd"
[12] "HarvardChan_logo_center_RGB_Large.png"
[13] "Rstudio_screenshot.png"
[14] "us-states.csv"
[15] "us.csv"
head(covid_states)
        date      state fips cases deaths
1 2020-01-21 Washington   53     1      0
2 2020-01-22 Washington   53     1      0
3 2020-01-23 Washington   53     1      0
4 2020-01-24   Illinois   17     1      0
5 2020-01-24 Washington   53     1      0
6 2020-01-25 California    6     1      0
This table shows the daily number of cases and deaths for each U.S. state, U.S. territories, and the District of Columbia from January 21, 2020 to August 10, 2022.
Cases represent the total number of cases of Covid-19, including both confirmed and probable. Deaths represents the total number of deaths from Covid-19, including
both confirmed and probable. FIPS codes are a standard geographic identifier that allows you to combine this data with other data sets like a map file or population
data.
library("ggplot2")
library("dplyr")
geom_line() +
On March 11, 2020, WHO declared a Covid-19 a global pandemic. Let’s add some annotations for this landmark event
geom_line() +
geom_vline(aes(xintercept=as.Date("2020-03-11")), linetype="dashed") +
Which states or territories in the United States have been hit the hardest? Because the cases column is cumulative, each state’s total is its largest value:

top_states <- covid_states %>%
  group_by(state) %>%
  summarize(total_cases = max(cases)) %>% #the maximum of a cumulative count is the total
  top_n(5)

Selecting by total_cases

top_states
# A tibble: 5 x 2
state total_cases
<chr> <int>
1 California 10858351
2 Florida 6892701
3 Illinois 3617765
5 Texas 7563617
Let’s plot these states’ confirmed and probable cases over time compared to Massachusetts.
top5_mass <- covid_states %>%
  filter(state %in% c("Massachusetts", "California", "Florida", "Illinois", "New York", "Texas"))

top5_mass %>%
  ggplot(aes(date, cases, color = state)) +
  geom_line() +
  xlab("Date") +
  ylab("Total cases")
While the legend is helpful, it’s sometimes easier to label the plot directly.
Instead of typing label coordinates by hand, one simple approach is to compute them from the data, placing each state’s name at its last observation:

labels <- top5_mass %>%
  group_by(state) %>%
  filter(date == max(date)) #one row per state: the most recent observation

top5_mass %>%
  ggplot(aes(date, cases, color = state)) +
  geom_line() +
  geom_text(data = labels, aes(label = state), hjust = 1, vjust = -0.5) +
  theme(legend.position = "none") +
  xlab("Date") +
  ylab("Total cases")
library(tidyverse)
library(dslabs)
data(murders)
head(murders)
In contrast, the answers to all the questions above are readily available from examining this plot:
We are reminded of the saying “a picture is worth a thousand words”. Data visualization provides a powerful way to communicate a data-driven finding. In some
cases, the visualization is so convincing that no follow-up analysis is required. People trust what they see. We also note that many widely used data analysis tools
were initiated by discoveries made via exploratory data analysis (EDA). EDA is perhaps the most important part of data analysis, yet is often overlooked.
The principles are mostly based on research related to how humans detect patterns and make visual comparisons. The preferred approaches are those that best fit the
way our brains process visual information.
When deciding on a visualization approach, it is also important to keep our goal in mind. Our goal should guide what type of visualization we create. Our goals may
vary and we may be comparing a viewable number of quantities, describing a distribution for categories or numeric values, comparing the data from two groups, or
describing the relationship between two variables.
No matter our goal, we must always present the data truthfully. The best visualizations are truthful, intuitive, and aesthetically pleasing.
library(tidyverse)
library(gridExtra)
library(dslabs)
ds_theme_set()
library(ggthemes)
browsers <- data.frame(
  Browser = rep(c("Opera", "Safari", "Firefox", "Chrome", "IE"), 2),
  Year = rep(c(2000, 2015), each = 5),
  Percentage = c(3, 21, 23, 26, 27, #hypothetical 2000 poll (made-up numbers for illustration)
                 2, 22, 21, 29, 26)) #hypothetical 2015 poll

p1 <- browsers %>%
  ggplot(aes(x = "", y = Percentage, fill = Browser)) +
  geom_bar(stat = "identity", width = 1) +
  coord_polar(theta = "y") +
  theme(axis.text=element_blank(),
        axis.ticks = element_blank(),
        panel.grid = element_blank()) +
  facet_grid(.~Year)
p1
This is a widely used graphical representation of percentages called the pie chart. It’s very popular in Microsoft Excel. The goal of this pie chart is to report the results
from two hypothetical polls regarding browser preference taken in 2000 and then 2015 using percentages.
Here we are representing quantities with both areas and angles since both the angle and area of each pie slice is proportional to the quantity it represents. This turns
out to be a suboptimal choice since, as demonstrated by perception studies, humans are not good at precisely quantifying angles and are even worse when only area
is available.
It is hard to quantify angles, which makes it hard to read the percentages in the plots above. Can you determine the actual percentages and rank
the browsers’ popularity? Can you see how the percentages changed from 2000 to 2015? It is not easy to tell from the plot.
The preferred way to plot quantities is to use length and position since humans are much better at judging linear measure. The bar plot uses bars of length proportional
to the quantities of interest. By adding horizontal lines at strategically chosen values, in this case at every multiple of 10, we ease the quantifying through the position
of the top of the bars.
browsers %>%
  ggplot(aes(Browser, Percentage)) +
  geom_bar(stat = "identity", width = 0.5) +
  scale_y_continuous(breaks = seq(0, 30, 10)) + #horizontal grid lines at every multiple of 10
  facet_grid(.~Year)
Notice how much easier it is to see the differences in the barplot. We used the grid.arrange function from the gridExtra package to put these two plots side by
side! The gridExtra package arranges multiple plots by specifying number of columns and/or rows. We can now determine the actual percentages by following a
horizontal line to the x-axis.
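For reference, grid.arrange simply takes saved plot objects and a layout. For example, if p1 and p2 are two ggplot objects:

grid.arrange(p1, p2, ncol = 2)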
In general, position and length are the preferred ways to display quantities over angles which are preferred to area.
Brightness and color are even harder to quantify than angles and area but, as we will see later, they are sometimes useful when more than two dimensions are being
displayed.
Here is an illustrative example of more barplots. The goal is to show the number of Southwest border apprehensions in 3 consecutive years.
When using barplots, it is dishonest not to start the bars at 0. This is because, by using a barplot, we are implying the length is proportional to the quantities being
displayed. By avoiding 0, relatively small differences can be made to look much bigger than they actually are. This approach is often used by politicians or media
organizations trying to exaggerate a difference. Do not distort quantities.
From the Fox news plot, it appears that apprehensions have almost tripled when in fact they have only increased by about 16%. Starting the graph at 0 illustrates this
clearly:
#assuming 'apprehensions' is a data frame holding the yearly totals shown in the Fox News graphic
apprehensions %>%
  ggplot(aes(Year, Southwest_Border_Apprehensions)) +
  geom_bar(stat = "identity")
data(murders)
murders %>%
  mutate(murder_rate = total / population * 100000) %>%
  ggplot(aes(state, murder_rate)) +
  geom_bar(stat="identity") +
  coord_flip() +
  xlab("")
murders %>%
  mutate(murder_rate = total / population * 100000) %>%
  mutate(state = reorder(state, murder_rate)) %>% #order states by rate instead of alphabetically
  ggplot(aes(state, murder_rate)) +
  geom_bar(stat="identity") +
  coord_flip() +
  xlab("")
Here are more barplots. The goal is to show murder rates by state. As you can see, you can order the bars differently, such as alphabetically or numerically. We
rarely want to use alphabetical order. Instead we should order by a meaningful value. If our goal is to compare the murder rates across states, we’re probably
interested in the most dangerous and safest states, so it makes more sense to order by the actual rate rather than alphabetically.
Here is a line graph of gun deaths in Florida over time from Reuters (graphics.thomsonreuters.com/14/02/US-FLORIDA0214.gif). The goal is to show the dramatic
spike in murders by firearm after the “stand your ground” law was enacted in 2005.
However, notice that the y-axis is flipped; if you didn’t pay close attention, you could draw the erroneous conclusion that the number of murders decreased after
the law. Make your axes intuitive.
Flipping the y-axis makes the graph less misleading and illustrates the increase clearly:
Average height
We have focused on displaying single quantities across categories. We now shift our attention to displaying data, with a focus on comparing groups.
Our next example is a barplot whose goal is to compare heights between females and males. A commonly seen plot used for comparisons between groups,
popularized by software such as Microsoft Excel, shows the average and standard errors (standard errors are defined in a later lecture, but don’t confuse them with the
standard deviation of the data).
data(heights)
p1 <- heights %>%
  group_by(sex) %>%
  summarize(average = mean(height), se = sd(height)/sqrt(n())) %>%
  ggplot(aes(sex, average)) +
  geom_bar(stat = "identity", width = 0.5) + #bar height = group average
  geom_errorbar(aes(ymin = average - 2*se, ymax = average + 2*se), width = 0.25) + #average +/- 2 standard errors
  ylab("Height in inches")
p1
The average of each group is represented by the top of each bar, and the antennae extend to the average plus two standard errors. If all someone receives is this plot,
they will have little information on what to expect if they meet a group of human males and females. The bars go to 0: does this mean there are tiny humans measuring
less than one foot? Are all males taller than the tallest females? Is there a range of heights? No one can answer these questions, since the plot provides almost no
information about how the heights are distributed.

This brings us to our next principle: show the data. This simple ggplot code already generates a more informative plot than the barplot by simply showing all the data
points:
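heights %>%
  ggplot(aes(sex, height)) + #one point per person, using the dslabs heights data
  geom_point()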
For example, we get an idea of the range of the data. However, this plot has limitations as well since we can’t really see all the 238 and 812 points plotted for females
and males respectively, and many points are plotted on top of each other. As we have described, visualizing the distribution is much more informative.
The first improvement is to add jitter: a small random shift applied to each point. In this case, adding horizontal jitter does not alter the interpretation, since the heights
of the points do not change, but we minimize the number of points that fall on top of each other and therefore get a better sense of how the data are distributed.
A second improvement comes from using alpha blending: making the points somewhat transparent. The more points that fall on top of each other, the darker the plot
becomes, which also helps us get a sense of how the points are distributed.
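For example, both improvements can be added in one step with geom_jitter and its alpha argument:

heights %>%
  ggplot(aes(sex, height)) +
  geom_jitter(width = 0.1, alpha = 0.2) #horizontal jitter plus transparency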
Now we start getting a sense that, on average, males are taller than females. We also note dark horizontal lines demonstrating that many reported values are rounded
to the nearest integer. Since there are so many points it is more effective to show distributions, rather than show individual points. In our next example we show the
improvements provided by distributions and suggest further principles.
Height distributions
Earlier we used individual points to compare male and female heights. But what if we have too many points? When there are many points, it is more effective to
show distributions rather than individual points. We therefore show histograms for each group, with the goal of showing the distribution of heights for females
and males:
heights %>%
ggplot(aes(height, ..density..)) +
geom_histogram(binwidth = 1, color="black") +
facet_grid(.~sex)
From this plot, it is immediately obvious that males are, on average, taller than females. An important principle here is to keep the axes the same when comparing
data across two plots. Ease comparisons by using common axes.
Align plots vertically to see horizontal changes, and horizontally to see vertical changes. In these histograms, the visual cue related to decreases or increases in
height is a shift to the left or right, respectively: a horizontal change. Aligning the plots vertically helps us see this change when the axes are fixed:
p2 <- heights %>%
  ggplot(aes(height, ..density..)) +
  geom_histogram(binwidth = 1, color="black") +
  facet_grid(sex~.)
p2
This plot makes it much easier to notice that men are, on average, taller. If instead of histograms we want the more compact summary provided by boxplots, then we
align them horizontally, since, by default, boxplots move up and down with changes in height.
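For example, boxplots of the same height data (side by side by default):

heights %>%
  ggplot(aes(sex, height)) +
  geom_boxplot()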
Country income
For our remaining principles, we use boxplots comparing country income between years in each continent.
When comparing income data between 1972 and 2002 across regions, we made a figure similar to the one below.
library(gapminder)
data(gapminder)

gapminder %>%
  filter(year %in% c(1972, 2002)) %>%
  mutate(dollars_per_day = gdpPercap / 365, #approximate income as GDP per capita per day
         labels = paste(year, continent)) %>%
  ggplot(aes(labels, dollars_per_day)) +
  geom_boxplot() +
  scale_y_continuous(trans = "log2") +
  xlab("") + ylab("Income in dollars per day")
Note that, for each continent, we want to compare the distributions from 1972 to 2002. The default is to order alphabetically so the labels with 1972 come before the
labels with 2002, making the comparisons challenging.
gapminder %>%
  filter(year %in% c(1972, 2002)) %>%
  mutate(dollars_per_day = gdpPercap / 365,
         labels = paste(continent, year)) %>% #continent first, so each continent's two years sit next to each other
  ggplot(aes(labels, dollars_per_day)) +
  geom_boxplot() +
  scale_y_continuous(trans = "log2") +
  xlab("") + ylab("Income in dollars per day")
Comparison is even easier when color is used to denote the two things compared. Ease comparison by using color.
gapminder %>%
  filter(year %in% c(1972, 2002)) %>%
  mutate(dollars_per_day = gdpPercap / 365) %>%
  ggplot(aes(continent, dollars_per_day, fill = factor(year))) +
  geom_boxplot() +
  scale_y_continuous(trans = "log2") +
  xlab("") + ylab("Income in dollars per day")
Our next principle is to use labels instead of legends. We demonstrate this using the life expectancy data: we define a data table with the label locations and then use
a second mapping just for these labels:
labels <- gapminder %>%
  filter(country %in% c("Germany", "South Korea"), year == max(year)) #example countries; place labels at the last year

gapminder %>%
  filter(country %in% c("Germany", "South Korea")) %>%
  ggplot(aes(year, lifeExp, color = country)) +
  geom_line() +
  geom_text(data = labels, aes(label = country), hjust = 1, vjust = -1) +
  theme(legend.position = "none")
Finally, a sizable fraction of the population is color blind, so when we rely on color it helps to choose a color-blind-friendly palette. The RColorBrewer package can
display the palettes that work well for color-blind readers:

library(RColorBrewer)
par(mar=c(3,4,2,2))
display.brewer.all(colorblindFriendly = TRUE)
Main Takeaways
Use position and length, rather than angles or area
In general, don’t use pie charts
Do not distort quantities
Order categories in a meaningful way
Make axes intuitive
Show the data
Keep axes the same
Ease comparisons
Use labels instead of legends
Think of the color blind
Tables
head(murders)
We showed this table earlier. While a table is a helpful reference if you want to look up individual values, it is difficult to draw any comparisons or look at trends.
Scatter plot
murders %>% mutate(murder_rate = total / population * 100000) %>%
ggplot(aes(state, murder_rate)) +
geom_point() +
coord_flip() +
xlab("")
A scatter plot allows comparison along a common scale. However, the order should be meaningful.
Histogram
Earlier we saw this plot used to compare male and female heights. We used histograms for each group, with the goal to show the distribution of heights between
females and males:
heights %>%
ggplot(aes(height, ..density..)) +
geom_histogram(binwidth = 1, color="black") +
facet_grid(.~sex)
A histogram visualizes the distribution of data. Data are grouped into bins or intervals. Histograms show the shape of the data and can help identify extreme values
or gaps in the data, but a single histogram is not well suited for comparing groups.
Box plot

gapminder %>%
  filter(year %in% c(1972, 2002)) %>%
  mutate(dollars_per_day = gdpPercap / 365) %>%
  ggplot(aes(continent, dollars_per_day, fill = factor(year))) +
  geom_boxplot() +
  scale_y_continuous(trans = "log2") +
  xlab("") + ylab("Income in dollars per day")
A box plot shows the median, the first and third quartiles (the box), and the range of the typical data (the whiskers). Points flagged as outliers are plotted individually.
Bar chart
murders %>%
  mutate(murder_rate = total / population * 100000) %>%
  ggplot(aes(state, murder_rate)) +
  geom_bar(stat="identity") +
  coord_flip() +
  xlab("")
A bar chart is helpful for comparing quantities across different groups or time periods. When there is a large number of categories, or the category names are long,
as in the example above, you can switch to a horizontal bar chart.
Line graph
A line graph is useful for showing trends, or how data changes over time. Line graphs are used for quantitative data over a continuous interval or time period, where
the x-axis is often a timescale.
For example, you may want to look at a trend over time by using a time series plot, with time on the x-axis and the outcome or measurement of interest on the y-axis. An
example below is United States life expectancy in years over time:
library(gapminder)
data(gapminder)
gapminder %>%
  filter(country == "United States") %>% #keep only the U.S. rows
  ggplot(aes(year, lifeExp)) +
  geom_point()
When the points are regularly and densely spaced, as they are here, we can connect them into a curve using the geom_line function to show that the
data come from a single country.
gapminder %>%
  filter(country == "United States") %>%
  ggplot(aes(year, lifeExp)) +
  geom_line()
This is particularly helpful when we look at two countries. Let’s compare the trend in two countries. We can subset the data to include two countries, one from Europe
and one from Asia, and assign colors to different countries:
gapminder %>%
  filter(country %in% c("Germany", "South Korea")) %>% #for example, one European and one Asian country
  ggplot(aes(year, lifeExp, color = country)) +
  geom_line()
Heatmaps
Sometimes you may want to use different magnitudes of color, such as variations in hue or intensity, to show clusters or how a variable changes over a space.
Heatmaps use color as the visual cue.
library(RColorBrewer)

#a sketch, assuming 'mat' is a numeric matrix with named rows and columns
heatmap(mat,
        col = colorRampPalette(brewer.pal(11, "PiYG"))(25),
        cexRow = 0.8,
        cexCol = 0.8)
legend(x="bottomright", xpd=T,
       legend = c("low", "medium", "high"), #legend labels chosen for illustration
       fill=colorRampPalette(brewer.pal(11, "PiYG"))(3))
Section 8: Mapping
For geographic data visualization with geospatial data, you may want to do some mapping. The sf package provides spatial classes and objects, and the ggplot2
package can plot sf objects directly. One resource for maps is the rnaturalearth package, which provides a map of the countries of the entire world.
library("ggplot2")
theme_set(theme_bw())
library("sf")
library("rnaturalearth")
library("rnaturalearthdata")
class(world)
ggplot(data = world) +
geom_sf(aes(fill = pop_est)) +
We can also use other plotting packages such as sp , tmap , and leaflet . Beyond static maps, there are animated and interactive maps, which we won’t go
into in detail here.
Section 9: Shiny
Now that we understand how to create beautiful and informative plots with ggplot2 , we can go one step further and create interactive visualization applications with
Shiny (https://shiny.rstudio.com/gallery/).
Shiny is an R package that makes it easy to build interactive web apps straight from R. You can host standalone apps on a webpage or embed them in R Markdown
documents or build dashboards. You can also extend your Shiny apps with CSS themes, htmlwidgets, and JavaScript actions. You can create pretty complicated
Shiny apps with no knowledge of HTML, CSS, or JavaScript. On the other hand, Shiny doesn’t limit you to creating trivial or prefabricated apps: its user interface
components can be easily customized or extended, and its server uses reactive programming to let you create any type of backend logic you want. You can even
share Shiny apps publicly on the web for free with shinyapps.io (https://www.shinyapps.io/).
Simple Example
Here is an example using the Covid-19 dataset. We can choose which state to examine, decide what variable to compare by, and subset the range of dates we’d like
to consider.
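A minimal sketch of such an app is below. It assumes covid_states has already been read in as above; the app in the workshop’s covid-shiny folder may be organized differently. The user picks a state, a variable, and a date range, and the server draws the corresponding line graph:

library(shiny)
library(dplyr)
library(ggplot2)

ui <- fluidPage(
  selectInput("state", "State:", choices = sort(unique(covid_states$state))),
  selectInput("variable", "Variable:", choices = c("cases", "deaths")),
  dateRangeInput("dates", "Date range:",
                 start = min(covid_states$date), end = max(covid_states$date)),
  plotOutput("trend")
)

server <- function(input, output) {
  output$trend <- renderPlot({
    covid_states %>%
      filter(state == input$state,
             date >= input$dates[1], date <= input$dates[2]) %>%
      ggplot(aes(date, .data[[input$variable]])) + #plot whichever variable the user selected
      geom_line()
  })
}

shinyApp(ui, server)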
Here’s a screenshot of an app “created to help people living and working in Scotland explore how geographical areas have changed over time or how they compare to
other areas, across a range of indicators of health and wider determinants of health” (view it here
(https://shiny.rstudio.com/gallery/scotpho-profiles.html)):
Another striking example comes from New York Times (graphics8.nytimes.com/images/2011/02/19/nyregion/19schoolsch/19schoolsch-popup.gif), which summarizes
scores from the NYC Regents Exams. As described in the article, these scores are collected for several reasons, including to determine if a student graduates from
high school. In New York City you need a 65 to pass.
Another good example comes from the Department of Statistics South Africa (https://www.statssa.gov.za/?p=15583) that tracks consumer
inflation surges over the past 13 years. The annual rate for the Consumer Price Index (CPI) shows an upward trajectory in the first half of 2022, and highlights the
highest rate since May 2009.
A striking population pyramid comes from the Zimbabwe National Statistics Agency
(https://www.zimstat.co.zw/).
Data visualization is the strongest tool of exploratory data analysis. You can use programming to bridge the gap between an idea and an interesting visual to
communicate it.
“The greatest value of a picture is when it forces us to notice what we never expected to see.” - John Tukey, father of
EDA
If you have any questions, feel free to reach out to me at [email protected].