04.14 Visualization With Seaborn
A common early complaint, which is now outdated: prior to version 2.0, Matplotlib's color and style defaults were at times poor and looked
dated.
Matplotlib's API is relatively low-level. Doing sophisticated statistical visualization is possible, but often requires a lot of boilerplate code.
Matplotlib predated Pandas by more than a decade, and thus is not designed for use with Pandas DataFrame objects. In order to visualize data from a DataFrame, you must extract each Series and often concatenate them together into the right format. It would be nicer to have a plotting library that can intelligently use the DataFrame labels in a plot.
An answer to these problems is Seaborn. Seaborn provides an API on top of Matplotlib that offers sane choices for plot style and color defaults,
defines simple high-level functions for common statistical plot types, and integrates with the functionality provided by Pandas.
To be fair, the Matplotlib team has adapted to the changing landscape: it added the plt.style tools discussed in Customizing Matplotlib:
Configurations and Style Sheets, and Matplotlib is starting to handle Pandas data more seamlessly. But for all the reasons just discussed,
Seaborn remains a useful add-on.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
Let's take a look at a few of the datasets and plot types available in Seaborn. Note that all of the following could be done using raw Matplotlib
commands (this is, in fact, what Seaborn does under the hood), but the Seaborn API is much more convenient.
Rather than just providing a histogram as a visual output, we can get a smooth estimate of the distribution using kernel density estimation
(introduced in Density and Contour Plots), which Seaborn does with sns.kdeplot (see the following figure):
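The `data` variable used in the next cell is not defined in this excerpt; as an assumption, it can be a simple two-column DataFrame of correlated Gaussian samples, for example:

```python
import numpy as np
import pandas as pd

# Hypothetical setup for the `data` used below: 2,000 draws
# from a correlated two-dimensional Gaussian
rng = np.random.default_rng(42)
data = pd.DataFrame(rng.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000),
                    columns=['x', 'y'])
```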
sns.kdeplot(data=data, fill=True);  # `fill` replaces the deprecated `shade` argument
If we pass x and y columns to kdeplot , we instead get a two-dimensional visualization of the joint density (see the following figure):
We can see the joint distribution and the marginal distributions together using sns.jointplot , which we'll explore further later in this chapter.
We'll demo this with the well-known Iris dataset, which lists measurements of petals and sepals of three Iris species:
iris = sns.load_dataset("iris")
iris.head()
Visualizing the multidimensional relationships among the samples is as easy as calling sns.pairplot (see the following figure):
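For example (the `hue` and `height` settings here are illustrative choices, not requirements):

```python
import seaborn as sns

iris = sns.load_dataset('iris')

# One panel for each pair of numeric columns, colored by species
g = sns.pairplot(iris, hue='species', height=2.5)
```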
[^1]: The restaurant staff data used in this section divides employees into two sexes: female and male. Biological sex isn’t binary, but the
following discussion and visualizations are limited by this data.
tips = sns.load_dataset('tips')
tips.head()
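The faceted chart discussed below is not shown in this excerpt; a sketch of one way to produce it (the `tip_pct` column is a derived quantity, not part of the raw dataset):

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

tips = sns.load_dataset('tips')
# Derived column: tip as a percentage of the total bill
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']

# Histograms of tip percentage, faceted by sex (rows) and meal time (columns)
grid = sns.FacetGrid(tips, row='sex', col='time', margin_titles=True)
grid.map(plt.hist, 'tip_pct', bins=np.linspace(0, 40, 15))
```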
The faceted chart gives us some quick insights into the dataset: for example, we see that it contains far more data on male servers during the
dinner hour than other categories, and typical tip amounts appear to range from approximately 10% to 20%, with some outliers on either end.
with sns.axes_style(style='ticks'):
    g = sns.catplot(x="day", y="total_bill", hue="sex", data=tips, kind="box")
    g.set_axis_labels("Day", "Total Bill");
with sns.axes_style('white'):
    sns.jointplot(x="total_bill", y="tip", data=tips, kind='hex')
The joint plot can even do some automatic kernel density estimation and regression, as shown in the following figure:
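For example, `kind='reg'` overlays a fitted regression line on the scatter plot, with distributions in the margins:

```python
import seaborn as sns

tips = sns.load_dataset('tips')

# Scatter plot with an automatically fitted regression line
# and marginal distributions along each axis
g = sns.jointplot(x='total_bill', y='tip', data=tips, kind='reg')
```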
planets = sns.load_dataset('planets')
planets.head()
with sns.axes_style('white'):
    g = sns.catplot(x="year", data=planets, aspect=2,
                    kind="count", color='steelblue')
    g.set_xticklabels(step=5)
We can learn more by looking at the method of discovery of each of these planets (see the following figure):
with sns.axes_style('white'):
    g = sns.catplot(x="year", data=planets, aspect=4.0, kind='count',
                    hue='method', order=range(2001, 2015))
    g.set_ylabels('Number of Planets Discovered')
For more information on plotting with Seaborn, see the Seaborn documentation, and particularly the example gallery.
[^2]: The marathon data used in this section divides runners into two genders: men and women. While gender is a spectrum, the following
discussion and visualizations use this binary because they depend on the data.
# url = ('https://raw.githubusercontent.com/jakevdp/'
# 'marathon-data/master/marathon-data.csv')
# !cd data && curl -O {url}
data = pd.read_csv('data/marathon-data.csv')
data.head()
   age gender     split     final
0   33      M  01:05:38  02:08:51
1   32      M  01:06:26  02:09:28
2   31      M  01:06:49  02:10:42
3   38      M  01:06:16  02:13:45
4   31      M  01:06:32  02:13:59
Notice that Pandas loaded the time columns as Python strings (type object); we can see this by looking at the dtypes attribute of the DataFrame:
data.dtypes
age int64
gender object
split object
final object
dtype: object
import datetime

def convert_time(s):
    h, m, s = map(int, s.split(':'))
    return datetime.timedelta(hours=h, minutes=m, seconds=s)

data = pd.read_csv('data/marathon-data.csv',
                   converters={'split': convert_time, 'final': convert_time})
data.head()
data.dtypes
age int64
gender object
split timedelta64[ns]
final timedelta64[ns]
dtype: object
That will make it easier to manipulate the temporal data. For the purposes of our Seaborn plotting utilities, let's next add columns that give the times in seconds:
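The conversion cell is not included in this excerpt; one straightforward approach, assuming the `split` and `final` columns are already `timedelta64` values (sketched here on a small stand-in frame):

```python
import pandas as pd

# Minimal stand-in for the marathon DataFrame loaded above
data = pd.DataFrame({'split': pd.to_timedelta(['01:05:38', '01:06:26']),
                     'final': pd.to_timedelta(['02:08:51', '02:09:28'])})

# Total seconds are easier for Seaborn's numeric plotting routines
data['split_sec'] = data['split'].dt.total_seconds()
data['final_sec'] = data['final'].dt.total_seconds()
```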
To get an idea of what the data looks like, we can draw a jointplot over the data; the following figure shows the result:
with sns.axes_style('white'):
    g = sns.jointplot(x='split_sec', y='final_sec', data=data, kind='hex')
    g.ax_joint.plot(np.linspace(4000, 16000),
                    np.linspace(8000, 32000), ':k')
The dotted line shows where someone's time would lie if they ran the marathon at a perfectly steady pace. The fact that the distribution lies
above this indicates (as you might expect) that most people slow down over the course of the marathon. If you have run competitively, you'll
know that those who do the opposite—run faster during the second half of the race—are said to have "negative-split" the race.
Let's create another column in the data, the split fraction, which measures the degree to which each runner negative-splits or positive-splits the
race:
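The cell defining the split fraction is not shown here; one definition consistent with the surrounding discussion (zero for a perfectly even split, negative when the second half was run faster than the first) is:

```python
import pandas as pd

# Stand-in frame with first-half and total times in seconds
data = pd.DataFrame({'split_sec': [3938.0, 4200.0],
                     'final_sec': [7731.0, 8600.0]})

# 0 means a perfectly even split; negative means a negative split
data['split_frac'] = 1 - 2 * data['split_sec'] / data['final_sec']
```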
Where this split difference is less than zero, the person negative-split the race by that fraction. Let's do a distribution plot of this split fraction
(see the following figure):
sns.displot(data['split_frac'], kde=False)
plt.axvline(0, color="k", linestyle="--");
sum(data.split_frac < 0)
251
Out of nearly 40,000 participants, only about 250 negative-split their marathon.
Let's see whether there is any correlation between this split fraction and other variables. We'll do this using a PairGrid , which draws plots of all
these correlations (see the following figure):
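The PairGrid cell itself is omitted from this excerpt; a sketch of one way to build it, using a small synthetic stand-in for the marathon data (the variable names match the derived columns discussed above):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Synthetic stand-in for the marathon DataFrame
rng = np.random.default_rng(0)
n = 300
final_sec = rng.uniform(8000, 20000, n)
data = pd.DataFrame({'age': rng.integers(18, 80, n),
                     'final_sec': final_sec,
                     'split_sec': final_sec * rng.uniform(0.45, 0.55, n),
                     'gender': rng.choice(['M', 'W'], n)})
data['split_frac'] = 1 - 2 * data['split_sec'] / data['final_sec']

# Scatter plots of every pair of variables, colored by gender
g = sns.PairGrid(data, vars=['age', 'split_sec', 'final_sec', 'split_frac'],
                 hue='gender', palette='RdBu_r')
g.map(plt.scatter, alpha=0.8)
g.add_legend()
```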
It looks like the split fraction does not correlate particularly with age, but does correlate with the final time: faster runners tend to have closer to
even splits on their marathon time. Let's zoom in on the histogram of split fractions separated by gender, shown in the following figure:
The interesting thing here is that there are many more men than women who are running close to an even split! It almost looks like a bimodal
distribution among the men and women. Let's see if we can suss out what's going on by looking at the distributions as a function of age.
A nice way to compare distributions is to use a violin plot, shown in the following figure:
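A sketch of the basic violin plot, on synthetic stand-in data (the palette is an illustrative choice):

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Synthetic stand-in for the marathon split fractions
rng = np.random.default_rng(0)
data = pd.DataFrame({'split_frac': rng.normal(0.08, 0.05, 600),
                     'gender': rng.choice(['M', 'W'], 600)})

# One violin per gender showing the full split-fraction distribution
ax = sns.violinplot(x='gender', y='split_frac', data=data,
                    palette=['lightblue', 'lightpink'])
```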
Let's look a little deeper, and compare these violin plots as a function of age (see the following figure). We'll start by creating a new column in
the array that specifies the age range that each person is in, by decade:
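The cell creating this column is not shown in the excerpt; one simple way to bin ages by decade:

```python
import pandas as pd

# Stand-in ages
data = pd.DataFrame({'age': [33, 47, 61, 28]})

# Round each age down to its decade: 33 -> 30, 47 -> 40, ...
data['age_dec'] = data['age'].map(lambda age: 10 * (age // 10))
```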
with sns.axes_style(style=None):
    sns.violinplot(x="age_dec", y="split_frac", hue="gender", data=data,
                   split=True, inner="quartile",
                   palette=["lightblue", "lightpink"]);
We can see where the distributions among men and women differ: the split distributions of men in their 20s to 50s show a pronounced
overdensity toward lower splits when compared to women of the same age (or of any age, for that matter).
Also surprising is that the 80-year-old women seem to outperform everyone in terms of their split time, though this is likely a small-number effect, as there are only a handful of runners in that range.
Back to the men with negative splits: who are these runners? Does this split fraction correlate with finishing quickly? We can plot this very
easily. We'll use regplot , which will automatically fit a linear regression model to the data (see the following figure):
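A minimal sketch of a regression plot of split fraction against finishing time, on synthetic stand-in data:

```python
import numpy as np
import pandas as pd
import seaborn as sns

# Synthetic stand-in: faster finishers tend toward more even splits
rng = np.random.default_rng(0)
final_sec = rng.uniform(8000, 20000, 300)
data = pd.DataFrame({'final_sec': final_sec,
                     'split_frac': 0.00001 * (final_sec - 8000)
                                   + rng.normal(0, 0.04, 300)})

# Scatter plot with an automatically fitted linear regression line
ax = sns.regplot(x='final_sec', y='split_frac', data=data, marker='.')
```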