Handling Missing Data - Python Data Science Handbook

This is an excerpt from the Python Data Science Handbook by Jake VanderPlas; Jupyter notebooks are available on GitHub (https://github.com/jakevdp/PythonDataScienceHandbook).
The difference between data found in many tutorials and data in the real world is
that real-world data is rarely clean and homogeneous. In particular, many
interesting datasets will have some amount of data missing. To make matters
even more complicated, different data sources may indicate missing data in
different ways.
In this section, we will discuss some general considerations for missing data,
discuss how Pandas chooses to represent it, and demonstrate some built-in
Pandas tools for handling missing data in Python. Here and throughout the book,
we'll refer to missing data in general as null, NaN, or NA values.
In general, there are two strategies for indicating missing data: using a mask that globally flags missing values, or choosing a sentinel value that marks a missing entry. In the masking approach, the mask might be an entirely separate Boolean array, or it may involve appropriating one bit of the data representation to locally indicate the null status of a value. In the sentinel approach, the missing entry is marked by a special value: either a data-specific convention (such as -9999 for a missing integer) or a global one (such as the IEEE floating-point NaN).
Pandas could have followed R's lead in specifying bit patterns for each individual
data type to indicate nullness, but this approach turns out to be rather unwieldy.
While R contains four basic data types, NumPy supports far more than this: for
example, while R has a single integer type, NumPy supports fourteen basic
integer types once you account for available precisions, signedness, and
endianness of the encoding. Reserving a specific bit pattern in all available
NumPy types would lead to an unwieldy amount of overhead in special-casing
various operations for various types, likely even requiring a new fork of the
NumPy package. Further, for the smaller data types (such as 8-bit integers),
sacrificing a bit to use as a mask will significantly reduce the range of values it can
represent.
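To see how little headroom the small types have, it helps to look at their limits; a quick sketch using NumPy's type-introspection helpers:

```python
import numpy as np

# An 8-bit signed integer spans only 256 distinct values; reserving
# even one bit pattern as an NA sentinel would cut into this range.
info = np.iinfo(np.int8)
print(info.min, info.max)  # -128 127

# A sample of NumPy's many integer variants (precision x signedness),
# illustrating why a per-type sentinel convention would be hard to maintain.
print([t.__name__ for t in (np.int8, np.int16, np.int32, np.int64,
                            np.uint8, np.uint16, np.uint32, np.uint64)])
```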
NumPy does have support for masked arrays – that is, arrays that have a separate Boolean mask array attached for marking data as "good" or "bad." Pandas could have derived from this, but the overhead in storage, computation, and code maintenance makes that an unattractive choice.
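For reference, NumPy's masked arrays look like this; a minimal sketch using the np.ma module:

```python
import numpy as np

# A masked array pairs the data with a Boolean mask; True marks "bad" entries.
arr = np.ma.masked_array([1, 2, 3, 4], mask=[False, True, False, False])

# Aggregations skip masked entries automatically.
print(arr.sum())   # 8
print(arr.mean())
```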
With these constraints in mind, Pandas chose to use sentinels for missing data,
and further chose to use two already-existing Python null values: the special
floating-point NaN value, and the Python None object. This choice has some side
effects, as we will see, but in practice ends up being a good compromise in most
cases of interest.
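The discussion below refers to an integer array containing a None value; a reconstruction (the name vals1 is taken from the traceback later in this section, though the exact values are an assumption):

```python
import numpy as np

# Mixing None into an integer sequence forces NumPy to fall back
# to an array of generic Python objects.
vals1 = np.array([1, None, 3, 4])
print(vals1.dtype)  # object
```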
This dtype=object means that the best common type representation NumPy
could infer for the contents of the array is that they are Python objects. While this
kind of object array is useful for some purposes, any operations on the data will
be done at the Python level, with much more overhead than the typically fast
operations seen for arrays with native types:
dtype = object
10 loops, best of 3: 78.2 ms per loop
dtype = int
100 loops, best of 3: 3.06 ms per loop
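The timings above were presumably produced with IPython's %timeit; a plain-Python sketch of the same comparison using the timeit module (the array size and loop count are assumptions, and absolute times will differ by machine):

```python
import timeit

import numpy as np

# Summing an object-dtype array dispatches to Python-level addition,
# while a native int array sums in compiled code.
for dtype in ['object', 'int']:
    arr = np.arange(1_000_000, dtype=dtype)
    t = timeit.timeit(arr.sum, number=10)
    print(f"dtype = {dtype}: {t / 10 * 1e3:.2f} ms per loop")
```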
The use of Python objects in an array also means that if you perform aggregations
like sum() or min() across an array with a None value, you will generally get an
error:
In [4]: vals1.sum()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-4-749fd8ae6030> in <module>()
----> 1 vals1.sum()
/Users/jakevdp/anaconda/lib/python3.5/site-packages/numpy/core/_methods.py i
30
31 def _sum(a, axis=None, dtype=None, out=None, keepdims=False):
---> 32 return umr_sum(a, axis, dtype, out, keepdims)
33
34 def _prod(a, axis=None, dtype=None, out=None, keepdims=False):
This reflects the fact that addition between an integer and None is undefined.
In [5]: vals2 = np.array([1, np.nan, 3, 4])
        vals2.dtype
Out[5]: dtype('float64')
Notice that NumPy chose a native floating-point type for this array: this means
that unlike the object array from before, this array supports fast operations
pushed into compiled code. You should be aware that NaN is a bit like a data virus: it infects any other object it touches. Regardless of the operation, the result of arithmetic with NaN will be another NaN:
In [6]: 1 + np.nan
Out[6]: nan
In [7]: 0 * np.nan
Out[7]: nan
Note that this means that aggregates over the values are well defined (i.e., they don't result in an error) but not always useful, since any aggregate that touches a NaN itself returns NaN.
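For example (using the vals2 array implied by the surrounding text):

```python
import numpy as np

vals2 = np.array([1, np.nan, 3, 4])

# Each aggregate runs without error, but the NaN propagates through.
print(vals2.sum(), vals2.min(), vals2.max())  # nan nan nan
```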
NumPy does provide some special aggregations that will ignore these missing values: np.nansum(), np.nanmin(), np.nanmax(), and friends.
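A quick sketch of these nan-aware aggregates in action:

```python
import numpy as np

vals2 = np.array([1, np.nan, 3, 4])

# The nan* variants skip NaN entries rather than propagating them.
print(np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2))  # 8.0 1.0 4.0
```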
Pandas handles NaN and None nearly interchangeably, converting between them where appropriate:
In [10]: pd.Series([1, np.nan, 2, None])
Out[10]: 0    1.0
         1    NaN
         2    2.0
         3    NaN
         dtype: float64
For types that don't have an available sentinel value, Pandas automatically type-
casts when NA values are present. For example, if we set a value in an integer
array to np.nan , it will automatically be upcast to a floating-point type to
accommodate the NA:
In [11]: x = pd.Series(range(2), dtype=int)
x
Out[11]: 0 0
1 1
dtype: int64
In [12]: x[0] = None
         x
Out[12]: 0    NaN
         1    1.0
         dtype: float64
Notice that in addition to casting the integer array to floating point, Pandas
automatically converts the None to a NaN value. (Be aware that there is a
proposal to add a native integer NA to Pandas in the future; as of this writing, it
has not been included).
While this type of magic may feel a bit hackish compared to the more unified
approach to NA values in domain-specific languages like R, the Pandas
sentinel/casting approach works quite well in practice and in my experience only
rarely causes issues.
The following table lists the upcasting conventions in Pandas when NA values are introduced:

| Typeclass | Conversion when storing NAs | NA sentinel value |
|-----------|-----------------------------|-------------------|
| floating  | No change                   | np.nan            |
| object    | No change                   | None or np.nan    |
| integer   | Cast to float64             | np.nan            |
| boolean   | Cast to object              | None or np.nan    |

Keep in mind that in Pandas, string data is always stored with an object dtype.
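For instance (a small illustration, not from the text), a Series of strings gets an object dtype, so a missing entry can be stored as None directly with no upcasting:

```python
import pandas as pd

# String data falls back to the generic object dtype,
# so None serves as the NA sentinel without any cast.
s = pd.Series(['apple', None, 'cherry'])
print(s.dtype)
print(s.isnull().tolist())  # [False, True, False]
```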
Pandas provides several useful methods for detecting, removing, and replacing null values in its data structures: isnull(), notnull(), dropna(), and fillna(). We will conclude this section with a brief exploration and demonstration of these routines.
## Detecting null values
Pandas data structures have two useful methods for detecting null data:
isnull() and notnull() . Either one will return a Boolean mask over the data.
For example:
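The Series named data used below is not shown in this excerpt; a reconstruction consistent with the outputs that follow (mixed types, with nulls at positions 1 and 3):

```python
import numpy as np
import pandas as pd

# A mixed-type Series: np.nan and None are both treated as null.
data = pd.Series([1, np.nan, 'hello', None])
print(data.isnull().tolist())  # [False, True, False, True]
```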
In [14]: data.isnull()
Out[14]: 0 False
1 True
2 False
3 True
dtype: bool
In [15]: data[data.notnull()]
Out[15]: 0 1
2 hello
dtype: object
The isnull() and notnull() methods produce similar Boolean results for DataFrames.
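As a quick illustration (this DataFrame is made up for the example), isnull() on a DataFrame returns a Boolean mask of the same shape, which pairs nicely with sum() for per-column null counts:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                   'b': [np.nan, 2.0, np.nan]})

# The Boolean mask has the same shape as the DataFrame...
print(df.isnull())
# ...and summing it counts the nulls in each column.
print(df.isnull().sum())  # a: 1, b: 2
```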
## Dropping null values

To remove null entries from a Series, dropna() returns a copy of the data with the NA values filtered out:
In [16]: data.dropna()
Out[16]: 0 1
2 hello
dtype: object
For a DataFrame, there are more options. Consider the following DataFrame:
In [17]: df = pd.DataFrame([[1,      np.nan, 2],
                            [2,      3,      5],
                            [np.nan, 4,      6]])
         df
Out[17]:      0    1  2
         0  1.0  NaN  2
         1  2.0  3.0  5
         2  NaN  4.0  6
We cannot drop single values from a DataFrame ; we can only drop full rows or full
columns. Depending on the application, you might want one or the other, so
dropna() gives a number of options for a DataFrame .
By default, dropna() will drop all rows in which any null value is present:
In [18]: df.dropna()
Out[18]:      0    1  2
         1  2.0  3.0  5
Alternatively, you can drop NA values along a different axis; axis=1 drops all
columns containing a null value:
In [19]: df.dropna(axis='columns')
Out[19]:    2
         0  2
         1  5
         2  6
But this drops some good data as well; you might rather be interested in
dropping rows or columns with all NA values, or a majority of NA values. This can
be specified through the how or thresh parameters, which allow fine control of
the number of nulls to allow through.
The default is how='any' , such that any row or column (depending on the axis
keyword) containing a null value will be dropped. You can also specify
how='all' , which will only drop rows/columns that are all null values:
In [20]: df[3] = np.nan
         df
Out[20]:      0    1  2   3
         0  1.0  NaN  2 NaN
         1  2.0  3.0  5 NaN
         2  NaN  4.0  6 NaN
In [21]: df.dropna(axis='columns', how='all')
Out[21]:      0    1  2
         0  1.0  NaN  2
         1  2.0  3.0  5
         2  NaN  4.0  6
For finer-grained control, the thresh parameter lets you specify a minimum
number of non-null values for the row/column to be kept:
In [22]: df.dropna(axis='rows', thresh=3)
Out[22]:      0    1  2   3
         1  2.0  3.0  5 NaN
Here the first and last row have been dropped, because they contain only two
non-null values.
## Filling null values

Sometimes rather than dropping NA values, you would rather replace them with a valid value. Consider the following Series:
In [23]: data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))
         data
Out[23]: a    1.0
         b    NaN
         c    2.0
         d    NaN
         e    3.0
         dtype: float64
We can fill NA entries with a single value, such as zero:
In [24]: data.fillna(0)
Out[24]: a    1.0
         b    0.0
         c    2.0
         d    0.0
         e    3.0
         dtype: float64
We can specify a forward fill to propagate the previous value forward:
In [25]: # forward-fill
         data.fillna(method='ffill')
Out[25]: a    1.0
         b    1.0
         c    2.0
         d    2.0
         e    3.0
         dtype: float64
Or we can specify a back-fill to propagate the next value backward:
In [26]: # back-fill
         data.fillna(method='bfill')
Out[26]: a    1.0
         b    2.0
         c    2.0
         d    3.0
         e    3.0
         dtype: float64
For DataFrame s, the options are similar, but we can also specify an axis along
which the fills take place:
In [27]: df
Out[27]:      0    1  2   3
         0  1.0  NaN  2 NaN
         1  2.0  3.0  5 NaN
         2  NaN  4.0  6 NaN
In [28]: df.fillna(method='ffill', axis=1)
Out[28]:      0    1    2    3
         0  1.0  1.0  2.0  2.0
         1  2.0  3.0  5.0  5.0
         2  NaN  4.0  6.0  6.0
Notice that if a previous value is not available during a forward fill, the NA value
remains.
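One way to deal with such leftover entries (an approach of my own, not from the text) is to chain a second fill after the forward fill:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 2.0, np.nan],
                   [3.0, np.nan, 5.0]])

# Forward fill along each row; the first cell of row 0 has no
# predecessor, so it stays NaN...
filled = df.ffill(axis=1)
print(filled)

# ...unless we follow up with a constant fill for whatever remains.
print(filled.fillna(0))
```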